From tanuki at gmail.com Sat Oct 1 00:08:37 2016 From: tanuki at gmail.com (Theodore Lief Gannon) Date: Fri, 30 Sep 2016 17:08:37 -0700 Subject: [Haskell-cafe] Batteries included (Was: GHC is a monopoly compiler) In-Reply-To: <2a0307cf-245e-8eaa-532b-702d1c540886@informatik.uni-tuebingen.de> References: <3AB97729-78B8-41A2-AAEC-2B8E6EC02793@aatal-apotheke.de> <2a0307cf-245e-8eaa-532b-702d1c540886@informatik.uni-tuebingen.de> Message-ID: There's only one top-level installation involved (stack), and no conditional branches on the process, so I'd say it's hit "trivial with step-by-step instructions" at least. I just failed to write them. ;) I'll have to check whether I have global GTK. I'm 95% sure I don't, but on the other hand it's been a couple of years since my last full reinstall so I could well have just forgotten about it. On Fri, Sep 30, 2016 at 3:35 PM, Tillmann Rendel < rendel at informatik.uni-tuebingen.de> wrote: > Hi, > > Theodore Lief Gannon wrote: > >> D'oh, pkg-config of course. And I took an initial 'pacman -Syu' for >> granted but I suppose that's not documented anywhere specific to >> Stack... probably worth doing. >> > > So installing gtk is trivial ... > > ... assuming you know how to operate pacman and setup pkg-config in a > mingw environment? Almost there, almost. ;) > > Interesting that you had to invoke through stack exec, tho... do you >> have dynamic linking in your global config? AFAIK static is default on >> Windows, so the DLLs don't matter after linking. >> > > I didn't change any global config options related to linking. > > Note that the issue is with the gtk DLLs, not ghc-produced DLLs. I guess > gtk is always dynamically linked, and you didn't run into this when you > tested because you have GTK installed system-wide, too. > > Tillmann > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rendel at informatik.uni-tuebingen.de Sat Oct 1 01:05:23 2016 From: rendel at informatik.uni-tuebingen.de (Tillmann Rendel) Date: Sat, 1 Oct 2016 03:05:23 +0200 Subject: [Haskell-cafe] Batteries included (Was: GHC is a monopoly compiler) In-Reply-To: References: <3AB97729-78B8-41A2-AAEC-2B8E6EC02793@aatal-apotheke.de> <2a0307cf-245e-8eaa-532b-702d1c540886@informatik.uni-tuebingen.de> Message-ID: Hi, Theodore Lief Gannon wrote: > There's only one top-level installation involved (stack), and no > conditional branches on the process, so I'd say it's hit "trivial with > step-by-step instructions" at least. I just failed to write them. ;) Ok, good point. Hopefully my experiments can lead to a more complete instruction being put somewhere. In my experience, many programming beginners on Windows cannot use the command line at all, so the whole situation of using stack is new to them. Also, the default support for copy-and-paste for the Windows command line is so bad that beginners will probably try to follow the step-by-step instructions by typing out the commands letter by letter. Therefore, a long list of simple commands but somewhat cryptic commands is still not really "trivial". So I think my question is: Could stack be persuaded somehow to make `stack install gtk3` "just work" by doing all the necessary incantations? I'm aware why `cabal install gtk3` can neither install gtk2hs-buildtools nor install the C library, but maybe stack could make a different tradeoff there. 
Tillmann From tanuki at gmail.com Sat Oct 1 02:51:41 2016 From: tanuki at gmail.com (Theodore Lief Gannon) Date: Fri, 30 Sep 2016 19:51:41 -0700 Subject: [Haskell-cafe] Batteries included (Was: GHC is a monopoly compiler) In-Reply-To: References: <3AB97729-78B8-41A2-AAEC-2B8E6EC02793@aatal-apotheke.de> <2a0307cf-245e-8eaa-532b-702d1c540886@informatik.uni-tuebingen.de> Message-ID: On Fri, Sep 30, 2016 at 6:05 PM, Tillmann Rendel < rendel at informatik.uni-tuebingen.de> wrote: > So I think my question is: Could stack be persuaded somehow to make `stack > install gtk3` "just work" by doing all the necessary incantations? I'm > aware why `cabal install gtk3` can neither install gtk2hs-buildtools nor > install the C library, but maybe stack could make a different tradeoff > there. I've actually put direct thought into this. I'm partially responsible for the relative ease on Windows now -- previously Stack wasn't setting up the environment correctly for MinGW -- it was providing an MSYS environment instead, which means full POSIX emulation rather than just a mostly-sufficient translation layer. This distinction is also why there are separate 'pkg-config' and 'mingw-w64-x86_64-pkg-config' (and unfortunately you want the latter, here). I almost added the system update and pkg-config (since .cabal files directly reference it) to the msys installation process in my PR, but I was dissuaded from it by some comments about issues they had trying to do the same with git: 1. it can fail due to network issues, and getting a consistent state with good user feedback out of a return code inside a sub-shell is more work than anyone's wanted to do yet. 2. the fact that msys includes an arbitrary set of packages, and in fact can upgrade itself without stack's permission or even knowledge, is damaging to the intended promise of reproducible builds. On top of that, this is solely a Windows concern -- stack doesn't have any desire or reason to be a system package manager elsewhere. 
So I decided the better option is a separate windows-specific tool, which knows how to deal with stack environments (that's in public library code, so yay) and provides a convenience wrapper for pacman which, among other things, attaches the correct big ugly prefix to package names for you. I got as far as deciding that it would either be named "stacman" or "Jenga" and then put it on the shelf because, with the environment stuff worked out, plain stack is no longer too much of a hassle for me personally. But, I'm certain it's a plausible and not even particularly difficult project. -------------- next part -------------- An HTML attachment was scrubbed... URL: From blamario at ciktel.net Sat Oct 1 18:10:23 2016 From: blamario at ciktel.net (=?UTF-8?Q?Mario_Bla=c5=beevi=c4=87?=) Date: Sat, 1 Oct 2016 14:10:23 -0400 Subject: [Haskell-cafe] Proposal: add Monoid1 and Semigroup1 classes In-Reply-To: References: Message-ID: CC-ing the Café on class naming... On 2016-10-01 04:07 AM, Edward Kmett wrote: > I'm somewhat weakly against these, simply because they haven't seen > broad adoption in the wild in any of the attempts to introduce them > elsewhere, and they don't quite fit the naming convention of the other > Foo1 classes in Data.Functor.Classes > > Eq1 f says more or less that Eq a => Eq (f a). > > Semigroup1 in your proposal makes a stronger claim. Semgiroup1 f is > saying forall a. (f a) is a semigroup parametrically. Both of these > constructions could be useful, but they ARE different constructions. The standard fully parametric classes like Functor and Monad have no suffix at all. It makes sense to reserve the suffix "1" for non-parametric lifting classes. Can you suggest a different naming scheme for parametric classes of a higher order? I'm also guilty of abusing the suffix "1", at least provisionally, but these are different beasts yet again: -- | Equivalent of 'Functor' for rank 2 data types class Functor1 g where fmap1 :: (forall a. 
p a -> q a) -> g p -> g q https://github.com/blamario/grampa/blob/master/Text/Grampa/Classes.hs What would be a proper suffix here? I guess Functor2 would make sense, for a rank-2 type? > > If folks had actually been using, say, the Plus and Alt classes from > semigroupoids or the like more or less at all pretty much anywhere, I > could maybe argue towards bringing them up towards base, but I've seen > almost zero adoption of the ideas over multiple years -- and these > represent yet _another_ point in the design space where we talk about > semigroupal and monoidal structures where f is a Functor instead. =/ > > Many points in the design space, and little demonstrated will for > adoption seems to steers me to think that the community isn't ready to > pick one and enshrine it some place central yet. > > Overall, -1. > > -Edward > > On Fri, Sep 30, 2016 at 7:25 PM, David Feuer > wrote: > > I've been playing around with the idea of writing Haskell 2010 > type classes for finite sequences and non-empty sequences, > somewhat similar to Michael Snoyman's Sequence class in > mono-traversable. These are naturally based on Monoid1 and > Semigroup1, which I think belong in base. > > class Semigroup1 f where > (<<>>) :: f a -> f a -> f a > class Semigroup1 f => Monoid1 f where > mempty1 :: f a > > Then I can write > > class (Monoid1 t, Traversable t) => Sequence t where > singleton :: a -> t a > -- and other less-critical methods > > class (Semigroup1 t, Traversable1 t) => NESequence where > singleton1 :: a -> t a > -- etc. > > I can, of course, just write my own, but I don't think I'm the > only one using such. 
> > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > > > > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries From corentin.dupont at gmail.com Sat Oct 1 19:52:38 2016 From: corentin.dupont at gmail.com (Corentin Dupont) Date: Sat, 1 Oct 2016 21:52:38 +0200 Subject: [Haskell-cafe] Fwd: Alternative: you can fool many people some time, and some people many time, but... In-Reply-To: <05751b1f-649b-51ce-caca-d905ae906021@vex.net> References: <05751b1f-649b-51ce-caca-d905ae906021@vex.net> Message-ID: Yes, sorry I meant: take 3 <$> (many $ Just 1) On Fri, Sep 30, 2016 at 9:31 PM, Albert Y. C. Lai wrote: > On 2016-09-29 03:49 PM, Corentin Dupont wrote: > >> But why doesn't this terminates? >> >> take 3 $ many $ Just 1 >> > > That looks like a type error rather than non-termination. > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekmett at gmail.com Sat Oct 1 21:24:11 2016 From: ekmett at gmail.com (Edward Kmett) Date: Sat, 1 Oct 2016 17:24:11 -0400 Subject: [Haskell-cafe] Proposal: add Monoid1 and Semigroup1 classes In-Reply-To: References: Message-ID: Let's just pause and consider what is already available on hackage today for these situations: In my constraints package I have a class named `Lifting`, which provides. class Lifting p f where lifting :: p a :- p (f a) Lifting Eq, Lifting Monad, Lifting Semigroup, Lifting (MonadReader e), etc. are then able to be handled all uniformly. 
It is, alas, somewhat annoying to use, as you need to use `\\ lifting` with a scoped type variable signature to get the instance in scope The currrent Eq1 is a somewhat more powerful claim though, since you can supply the equality for its argument without needing functoriality in f. This is both good and bad. It means you can't just write `instance Eq1 f` and let default methods take over, but it does mean Eq1 f works in more situations if you put in the work or use generics to generate it automatically. http://hackage.haskell.org/package/constraints-0.8/docs/Data-Constraint-Lifting.html For the rank-2 situation, I also have `Forall` and `ForallF` which provides the ability to talk about the quantified form. ForallF Eq f is defined by a fancy skolem type family trick and comes with instF :: forall p f a. ForallF p f :- p (f a) This covers the rank-2 situation today pretty well, even if you have to use `\\ instF` or what have you to get the instance in scope. I don't however, have something in a "mainstream" package for that third form mentioned above, the 'Functor'-like form, but I do have classes in semgroupoids for Alt, Plus, etc. covering the particular semigroup/monoid-like cases. Finally, going very far off the beaten and well-supported path, in `hask`, I have code for talking about entailment in the category of constraints, but like the above two tricks, it requires the user to explicitly bring the instance into scope from an `Eq a |- Eq (f a)` constraint or the like, and the more general form of `|-` lifts into not just Constraint, but k -> Constraint, and combines with Lim functor to provide quantified entailment. This doesn't compromise the thinness of the category of constraints. I'd love to see compiler support for this, eliminating the need for the \\ nonsense above, but it'd be a fair bit of work! -Edward On Sat, Oct 1, 2016 at 2:10 PM, Mario Blažević wrote: > CC-ing the Café on class naming... 
> > On 2016-10-01 04:07 AM, Edward Kmett wrote: > >> I'm somewhat weakly against these, simply because they haven't seen >> broad adoption in the wild in any of the attempts to introduce them >> elsewhere, and they don't quite fit the naming convention of the other >> Foo1 classes in Data.Functor.Classes >> >> Eq1 f says more or less that Eq a => Eq (f a). >> >> Semigroup1 in your proposal makes a stronger claim. Semgiroup1 f is >> saying forall a. (f a) is a semigroup parametrically. Both of these >> constructions could be useful, but they ARE different constructions. >> > > The standard fully parametric classes like Functor and Monad have no > suffix at all. It makes sense to reserve the suffix "1" for non-parametric > lifting classes. Can you suggest a different naming scheme for parametric > classes of a higher order? > > I'm also guilty of abusing the suffix "1", at least provisionally, > but these are different beasts yet again: > > -- | Equivalent of 'Functor' for rank 2 data types > class Functor1 g where > fmap1 :: (forall a. p a -> q a) -> g p -> g q > > https://github.com/blamario/grampa/blob/master/Text/Grampa/Classes.hs > > What would be a proper suffix here? I guess Functor2 would make > sense, for a rank-2 type? > > > >> If folks had actually been using, say, the Plus and Alt classes from >> semigroupoids or the like more or less at all pretty much anywhere, I >> could maybe argue towards bringing them up towards base, but I've seen >> almost zero adoption of the ideas over multiple years -- and these >> represent yet _another_ point in the design space where we talk about >> semigroupal and monoidal structures where f is a Functor instead. =/ >> >> Many points in the design space, and little demonstrated will for >> adoption seems to steers me to think that the community isn't ready to >> pick one and enshrine it some place central yet. >> >> Overall, -1. 
>> >> -Edward >> >> On Fri, Sep 30, 2016 at 7:25 PM, David Feuer > > wrote: >> >> I've been playing around with the idea of writing Haskell 2010 >> type classes for finite sequences and non-empty sequences, >> somewhat similar to Michael Snoyman's Sequence class in >> mono-traversable. These are naturally based on Monoid1 and >> Semigroup1, which I think belong in base. >> >> class Semigroup1 f where >> (<<>>) :: f a -> f a -> f a >> class Semigroup1 f => Monoid1 f where >> mempty1 :: f a >> >> Then I can write >> >> class (Monoid1 t, Traversable t) => Sequence t where >> singleton :: a -> t a >> -- and other less-critical methods >> >> class (Semigroup1 t, Traversable1 t) => NESequence where >> singleton1 :: a -> t a >> -- etc. >> >> I can, of course, just write my own, but I don't think I'm the >> only one using such. >> >> >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> >> >> >> >> >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tonymorris at gmail.com Sat Oct 1 22:27:21 2016 From: tonymorris at gmail.com (Tony Morris) Date: Sun, 2 Oct 2016 08:27:21 +1000 Subject: [Haskell-cafe] Proposal: add Monoid1 and Semigroup1 classes In-Reply-To: References: Message-ID: <945b5671-bc33-b8a4-222c-0f2dc98045d2@gmail.com> >> If folks had actually been using, say, the Plus and Alt classes from >> semigroupoids or the like more or less at all pretty much anywhere, I >> could maybe argue towards bringing them up towards base, but I've seen >> almost zero adoption of the ideas over multiple years -- and these >> represent yet _another_ point in the design space where we talk about >> semigroupal and monoidal structures where f is a Functor instead. =/ FWIW, very rarely do I write a package without semigroups and/or semigroupoids; sometimes for "not very important" or superficial reasons, but more typically otherwise. Even something as disparate as a CASR61.345 compliant pilot logbook uses both packages heavily, and for good reason (it says/implies so in the law!). Why others chooses to forgo the advantages is beyond me. Just a data point, cheerio! -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From ruben.astud at gmail.com Sat Oct 1 23:15:15 2016 From: ruben.astud at gmail.com (Ruben Astudillo) Date: Sat, 1 Oct 2016 20:15:15 -0300 Subject: [Haskell-cafe] Proposal: add Monoid1 and Semigroup1 classes In-Reply-To: References: Message-ID: On 01/10/16 15:10, Mario Blažević wrote: > CC-ing the Café on class naming... > > On 2016-10-01 04:07 AM, Edward Kmett wrote: >> I'm somewhat weakly against these, simply because they haven't seen >> broad adoption in the wild in any of the attempts to introduce them >> elsewhere, and they don't quite fit the naming convention of the other >> Foo1 classes in Data.Functor.Classes Basically this. 
What is popular on hackage should be the first metric to consider when putting something in base (even if it fits well on a module in there), we ought to not bypass this. BTW, I think that Alt newtype in Data.Monoid helps with this use case but that is usually for Alternative instead of Functor (to me just really esoteric things are one and not the other), am I missing something else on this proposal? From david.feuer at gmail.com Sat Oct 1 23:17:42 2016 From: david.feuer at gmail.com (David Feuer) Date: Sat, 1 Oct 2016 19:17:42 -0400 Subject: [Haskell-cafe] Proposal: add Monoid1 and Semigroup1 classes In-Reply-To: References: Message-ID: The difficulty and inconvenience of using Forall and the fact that it is very far from standard Haskell make it unsuitable for some purposes. I believe it can probably lead to some efficiency issues as well, since the constraint has to be instantiated manually at each type; perhaps GHC can optimize that away. It would be fantastic if the language could expand to allow such constraints natively, but for now it seems that manually writing multiple classes is often the best approach. On Oct 1, 2016 5:24 PM, "Edward Kmett" wrote: > Let's just pause and consider what is already available on hackage today > for these situations: > > > > In my constraints package I have a class named `Lifting`, which provides. > > class Lifting p f where > lifting :: p a :- p (f a) > > Lifting Eq, Lifting Monad, Lifting Semigroup, Lifting (MonadReader e), > etc. are then able to be handled all uniformly. > > It is, alas, somewhat annoying to use, as you need to use `\\ lifting` > with a scoped type variable signature to get the instance in scope > > The currrent Eq1 is a somewhat more powerful claim though, since you can > supply the equality for its argument without needing functoriality in f. > This is both good and bad. 
It means you can't just write `instance Eq1 f` > and let default methods take over, but it does mean Eq1 f works in more > situations if you put in the work or use generics to generate it > automatically. > > http://hackage.haskell.org/package/constraints-0.8/docs/ > Data-Constraint-Lifting.html > > > > For the rank-2 situation, I also have `Forall` and `ForallF` which > provides the ability to talk about the quantified form. > > ForallF Eq f is defined by a fancy skolem type family trick and comes with > > instF :: forall p f a. ForallF p f :- p (f a) > > This covers the rank-2 situation today pretty well, even if you have to > use `\\ instF` or what have you to get the instance in scope. > > > > I don't however, have something in a "mainstream" package for that third > form mentioned above, the 'Functor'-like form, but I do have classes in > semgroupoids for Alt, Plus, etc. covering the particular > semigroup/monoid-like cases. > > > > Finally, going very far off the beaten and well-supported path, in `hask`, > I have code for talking about entailment in the category of constraints, > but like the above two tricks, it requires the user to explicitly bring the > instance into scope from an `Eq a |- Eq (f a)` constraint or the like, and > the more general form of `|-` lifts into not just Constraint, but k -> > Constraint, and combines with Lim functor to provide quantified entailment. > This doesn't compromise the thinness of the category of constraints. I'd > love to see compiler support for this, eliminating the need for the \\ > nonsense above, but it'd be a fair bit of work! > > -Edward > > On Sat, Oct 1, 2016 at 2:10 PM, Mario Blažević > wrote: > >> CC-ing the Café on class naming... 
>> >> On 2016-10-01 04:07 AM, Edward Kmett wrote: >> >>> I'm somewhat weakly against these, simply because they haven't seen >>> broad adoption in the wild in any of the attempts to introduce them >>> elsewhere, and they don't quite fit the naming convention of the other >>> Foo1 classes in Data.Functor.Classes >>> >>> Eq1 f says more or less that Eq a => Eq (f a). >>> >>> Semigroup1 in your proposal makes a stronger claim. Semgiroup1 f is >>> saying forall a. (f a) is a semigroup parametrically. Both of these >>> constructions could be useful, but they ARE different constructions. >>> >> >> The standard fully parametric classes like Functor and Monad have no >> suffix at all. It makes sense to reserve the suffix "1" for non-parametric >> lifting classes. Can you suggest a different naming scheme for parametric >> classes of a higher order? >> >> I'm also guilty of abusing the suffix "1", at least provisionally, >> but these are different beasts yet again: >> >> -- | Equivalent of 'Functor' for rank 2 data types >> class Functor1 g where >> fmap1 :: (forall a. p a -> q a) -> g p -> g q >> >> https://github.com/blamario/grampa/blob/master/Text/Grampa/Classes.hs >> >> What would be a proper suffix here? I guess Functor2 would make >> sense, for a rank-2 type? >> >> >> >>> If folks had actually been using, say, the Plus and Alt classes from >>> semigroupoids or the like more or less at all pretty much anywhere, I >>> could maybe argue towards bringing them up towards base, but I've seen >>> almost zero adoption of the ideas over multiple years -- and these >>> represent yet _another_ point in the design space where we talk about >>> semigroupal and monoidal structures where f is a Functor instead. =/ >>> >>> Many points in the design space, and little demonstrated will for >>> adoption seems to steers me to think that the community isn't ready to >>> pick one and enshrine it some place central yet. >>> >>> Overall, -1. 
>>> >>> -Edward >>> >>> On Fri, Sep 30, 2016 at 7:25 PM, David Feuer >> > wrote: >>> >>> I've been playing around with the idea of writing Haskell 2010 >>> type classes for finite sequences and non-empty sequences, >>> somewhat similar to Michael Snoyman's Sequence class in >>> mono-traversable. These are naturally based on Monoid1 and >>> Semigroup1, which I think belong in base. >>> >>> class Semigroup1 f where >>> (<<>>) :: f a -> f a -> f a >>> class Semigroup1 f => Monoid1 f where >>> mempty1 :: f a >>> >>> Then I can write >>> >>> class (Monoid1 t, Traversable t) => Sequence t where >>> singleton :: a -> t a >>> -- and other less-critical methods >>> >>> class (Semigroup1 t, Traversable1 t) => NESequence where >>> singleton1 :: a -> t a >>> -- etc. >>> >>> I can, of course, just write my own, but I don't think I'm the >>> only one using such. >>> >>> >>> _______________________________________________ >>> Libraries mailing list >>> Libraries at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>> >>> >>> >>> >>> >>> _______________________________________________ >>> Libraries mailing list >>> Libraries at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>> >> >> >> _______________________________________________ >> Haskell-Cafe mailing list >> To (un)subscribe, modify options or view archives go to: >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> Only members subscribed via the mailman list are allowed to post. > > > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From branimir.maksimovic at gmail.com Sun Oct 2 00:54:36 2016 From: branimir.maksimovic at gmail.com (Branimir Maksimovic) Date: Sun, 2 Oct 2016 02:54:36 +0200 Subject: [Haskell-cafe] nbody benchmark please Message-ID: <10011a3c-1d2b-539f-260c-eb9add261a8a@gmail.com> can someone put bang pattern with parameters in nbody benchmark program of mine as it is on my machine significantly faster just because of that: run :: Int -> Ptr Planet -> IO () run 0 _ = return () run !i !p = do advance p run (i-1) p PS I lost my account and if someone has it please do this for me ;) http://benchmarksgame.alioth.debian.org/u64q/program.php?test=nbody&lang=ghc&id=2 Also author of site removed my knucleotide benchmark which was among fastest because I didn't use data.hashtable (which is slow), rather used my own (which is few lines of code) From wren at community.haskell.org Sun Oct 2 02:39:12 2016 From: wren at community.haskell.org (wren romano) Date: Sat, 1 Oct 2016 19:39:12 -0700 Subject: [Haskell-cafe] Announcing binary-parsers In-Reply-To: References: Message-ID: On Thu, Sep 22, 2016 at 7:47 PM, 韩冬(基础平台部) wrote: > Hi all, > > I am happy to announce binary-parsers. A ByteString parsing library built on > binary. I borrowed lots of design/tests/document from attoparsec so that i > can build its shape very quickly, thank you bos! And thanks to binary's > excellent design, the codebase is very small(<500 loc). > > From my benchmark, it’s insanely fast, it outperforms attoparsec by 10%~30% > in aeson benchmark. it’s also slightly faster than scanner(a > non-backtracking parser designed for speed) in http request benchmark. I’d > like to ask you to give it a shot if you need super fast ByteString parsing. Yay! more users of my bytestring-lexing package :) Since attoparsec's numeric parsers are dreadfully slow, can you tell how much of your speedup is due to bytestring-lexing vs how much is due to other differences vs aeson? 
-- Live well, ~wren From takenobu.hs at gmail.com Sun Oct 2 04:33:25 2016 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Sun, 2 Oct 2016 13:33:25 +0900 Subject: [Haskell-cafe] simple search for new contributors Message-ID: Hi cafe, In relation to these [1][2], I pushed simple search in multiple wiki sites for new contributors. Haskell wiki search for multiple wiki sites https://takenobu-hs.github.io/haskell-wiki-search I'm glad if it would be useful. [1]: https://mail.haskell.org/pipermail/ghc-devs/2016-September/012830.html [2]: https://github.com/ghc-proposals/ghc-proposals/pull/10 Regards, Takenobu -------------- next part -------------- An HTML attachment was scrubbed... URL: From handongwinter at didichuxing.com Sun Oct 2 10:17:47 2016 From: handongwinter at didichuxing.com (=?gb2312?B?uqu2rCi7+bShxr3MqLK/KQ==?=) Date: Sun, 2 Oct 2016 10:17:47 +0000 Subject: [Haskell-cafe] Announcing binary-parsers In-Reply-To: References: , Message-ID: <1475403477332.93922@didichuxing.com> Hi wren! Yes, i noticed that attoparsec's numeric parsers are slow. I have a benchmark set to compare attoparsec and binary-parsers on different sample JSON files, it's on github: https://github.com/winterland1989/binary-parsers. I'm pretty sure bytestring-lexing helped a lot, for example, the average decoding speed improvement is around 20%, but numeric only benchmarks(integers and numbers) improved by 30% ! Parsing is just a part of JSON decoding, lots of time is spent on unescaping, .etc. So the parser's improvement is quite large IMHO. BTW, can you provide a version of lexer which doesn't check whether a Word is a digit? In binary-parsers i use something like `takeWhile isDigit` to extract the input ByteString, so there's no need to verify this in lexer again. Maybe we can have another performance improvement. Cheers! 
Winterland ________________________________________ From: winterkoninkje at gmail.com on behalf of wren romano Sent: Sunday, October 2, 2016 10:39 AM To: 韩冬(基础平台部) Cc: haskell-cafe at haskell.org Subject: Re: [Haskell-cafe] Announcing binary-parsers On Thu, Sep 22, 2016 at 7:47 PM, 韩冬(基础平台部) wrote: > Hi all, > > I am happy to announce binary-parsers. A ByteString parsing library built on > binary. I borrowed lots of design/tests/document from attoparsec so that i > can build its shape very quickly, thank you bos! And thanks to binary's > excellent design, the codebase is very small(<500 loc). > > From my benchmark, it’s insanely fast, it outperforms attoparsec by 10%~30% > in aeson benchmark. it’s also slightly faster than scanner(a > non-backtracking parser designed for speed) in http request benchmark. I’d > like to ask you to give it a shot if you need super fast ByteString parsing. Yay! more users of my bytestring-lexing package :) Since attoparsec's numeric parsers are dreadfully slow, can you tell how much of your speedup is due to bytestring-lexing vs how much is due to other differences vs aeson? -- Live well, ~wren From apfelmus at quantentunnel.de Sun Oct 2 15:00:25 2016 From: apfelmus at quantentunnel.de (Heinrich Apfelmus) Date: Sun, 02 Oct 2016 17:00:25 +0200 Subject: [Haskell-cafe] Batteries included (Was: GHC is a monopoly compiler) In-Reply-To: References: <3AB97729-78B8-41A2-AAEC-2B8E6EC02793@aatal-apotheke.de> <2ED3FB49-A1E2-43F5-A0D6-53DE695EBF85@gmail.com> Message-ID: Dear hyped, I have uploaded the code to https://github.com/HeinrichApfelmus/hyper-haskell It's not a release yet, so no one-click installer yet, but it should be very easy to get it running after cloning the repository (and downloading the Electron application). Let me know what you think! Best regards, Heinrich Apfelmus -- http://apfelmus.nfshost.com David McBride wrote: > Consider me hyped. I could never get jupyter to work. 
> > On Fri, Sep 30, 2016 at 4:22 AM, Heinrich Apfelmus < > apfelmus at quantentunnel.de> wrote: > >> HyperHaskell >>> Nifty! >>> >>> - How does this compare to jupyter (ipython) with the haskell kernel? >>> >> The overall goal is obviously very similar. To me, the main differences are >> >> * HyperHaskell should be easy to install >> (e.g. only cabal and a binary download) >> >> * HyperHaskell behaves more like a desktop application, e.g. worksheets >> are loaded from and saved to the local file system. >> >> The latter point is actually the main reason why I couldn't get into >> Jupyter at all: It insisted that I manage worksheets in some kind of >> database in the browser. Ugh! (There may be other front-ends nowadays, but >> last I checked, I didn't find anything official or popular, that's why I >> decided to write my own thing.) >> >> On the flip side, HyperHaskell is specialized to Haskell -- you can't use >> it with other languages. >> >> - Is it on GitHub or somewhere? >> Not yet, it's still in the "hype" phase. ;-) Expect the following location >> >> https://github.com/HeinrichApfelmus/hyper-haskell >> >> to fill with code in a week or two. >> >> >> Best regards, >> Heinrich Apfelmus >> >> -- >> http://apfelmus.nfshost.com >> >> >> Moritz Angermann wrote: >> >>> http://apfelmus.nfshost.com/temp/hyper-haskell-sneak-peek.png >>>> >>>> It's a project that I'm currently working on, called >>>> >>>> HyperHaskell >>>> - the strongly hyped Haskell interpreter - >>>> >>>> Well, it's supposed to be strongly hyped, but currently, only few people >>>> know about it. Could you give me a hand with, uh, hyping this? I'm not good >>>> at this. >>>> >>>> >>> Nifty! >>> >>> - How does this compare to jupyter (ipython) with the haskell kernel? >>> - Is it on GitHub or somewhere? 
>>> >>> Cheers, >>> Moritz >>> >>> >>> _______________________________________________ >>> Haskell-Cafe mailing list >>> To (un)subscribe, modify options or view archives go to: >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> Only members subscribed via the mailman list are allowed to post. >>> >> _______________________________________________ >> Haskell-Cafe mailing list >> To (un)subscribe, modify options or view archives go to: >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> Only members subscribed via the mailman list are allowed to post. >> > > > ------------------------------------------------------------------------ > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. From ok at cs.otago.ac.nz Sun Oct 2 23:20:30 2016 From: ok at cs.otago.ac.nz (Richard A. O'Keefe) Date: Mon, 3 Oct 2016 12:20:30 +1300 Subject: [Haskell-cafe] Batteries included (Was: GHC is a monopoly compiler) In-Reply-To: <930deee3-c1e3-98e0-49ba-b750abfdc449@durchholz.org> References: <5b804206-8afc-24cd-88aa-d26cab5a2617@durchholz.org> <930deee3-c1e3-98e0-49ba-b750abfdc449@durchholz.org> Message-ID: On 30/09/16 7:17 PM, Joachim Durchholz wrote: > There is a single standard representation. [for strings in Java] > I'm not even aware of a second one, and I've been programming Java for > quite a while now > Unless you mean StringBuilder/StringBuffer (that would be three String > types then). StringBuffer is just a synchronized version of StringBuilder. However, these classes are by no means "preferred" in > practice: the vast majority of APIs demands and returns String objects. The Java *compiler* prefers StringBuilder: when you write a string concatenation expression in Java the compiler creates a StringBuilder behind the scenes. 
I'm counting a class as "preferred" if the compiler *has* to know about it and generates code involving it without the programmer explicitly mentioning it. > > Even then, Java has its preferred string representation nailed down > pretty strongly: a hidden array of 16-bit Unicode code points, > referenced by a descriptor object (the actual String), immutable. As already noted, that representation changed internally. And that change is actually relevant to this thread. The representation that _used_ to be used was (char[] array, offset, length, hash) Amongst other things, this meant that taking a substring cost O(1) time and O(1) space, because you just had to allocate and initialise a new "descriptor object" sharing the underlying array. Since Java 1.7 the representation is (char[] array, hash) Amongst other things, this means that taking a substring n characters long now costs O(n) time and O(n) space. If you are working in a loop like while (there is more input) { read a chunk of input split it into substrings process some of the substrings } the pre-Java-1.7 representation is perfect. If you *retain* some of the substrings, however, you retain the whole chunk. That was easy to fix by doing retain(new String(someSubstring)) instead of retain(someSubstring) but you had to *know* to do it. (Another solution would be to have a smarter garbage collector that knew about string sharing and could compact strings. I wrote such a collector for XPL many years ago. It's quite easy to do a stop-and- copy garbage collector that does that. But that's not the state of the art in Java garbage collection, and I'm not sure how well string compaction would fit into a more advanced collector.) The Java 1.7-and-later representation is *safer*. Depending on your usage, it may either save a lot of memory or bloat your memory use. 
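The substring-sharing trade-off described above has a direct analogue on the Haskell side: a slice of a strict ByteString is O(1) and shares the parent buffer, so retaining a small slice keeps the whole chunk alive until you explicitly copy it. A minimal sketch (using the bytestring package that ships with GHC; the field names are illustrative):

```haskell
import qualified Data.ByteString.Char8 as B

main :: IO ()
main = do
    let chunk = B.pack "field1,field2,field3"  -- stand-in for a large input chunk
        slice = B.take 6 chunk                 -- O(1): shares chunk's buffer
        owned = B.copy slice                   -- O(n): fresh buffer; chunk can now be GC'd
    -- Same contents either way; only 'owned' lets the parent chunk be collected.
    print (slice == owned)
```

Retaining `owned` instead of `slice` plays the same role as the pre-Java-1.7 `retain(new String(someSubstring))` idiom: slicing stays cheap by default, and breaking the sharing is an explicit, documented step.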
The point is that there is no one-size-fits-all string representation; being given only one forces you to either write your own additional representation(s) or to use a representation which is not really suited to your particular purpose. From jo at durchholz.org Mon Oct 3 06:39:22 2016 From: jo at durchholz.org (Joachim Durchholz) Date: Mon, 3 Oct 2016 08:39:22 +0200 Subject: [Haskell-cafe] Batteries included (Was: GHC is a monopoly compiler) In-Reply-To: References: <5b804206-8afc-24cd-88aa-d26cab5a2617@durchholz.org> <930deee3-c1e3-98e0-49ba-b750abfdc449@durchholz.org> Message-ID: <94567629-eaa8-0caf-596b-8c97fe188f96@durchholz.org> Am 03.10.2016 um 01:20 schrieb Richard A. O'Keefe: > > The Java *compiler* prefers StringBuilder: when you write a string > concatenation expression in Java the compiler creates a StringBuilder > behind the scenes. I'm counting a class as "preferred" if the > compiler *has* to know about it and generates code involving it > without the programmer explicitly mentioning it. Then Haskell's preferred representation of additive types would be the updatable record. Or machine integers are preferably stored in registers because that's where every new integer is created, RAM is second class... I think that's stretching things too far. There are more indicators against your theory: 1) During the lifetime of a program, the vast majority of textual data is stored in String objects. StringBuilders are just temporary and are discarded once the String object is built. (That's quantitative, not qualitative.) 2) The compiler does NOT have to know. Straight from the Java spec: > 15.18.1. [...] To increase the performance of repeated string > concatenation, a Java compiler may use the StringBuffer class or a > similar technique to reduce the number of intermediate String objects > that are created by evaluation of an expression. Moreover, the entire paragraph is a non-authoritative remark. 
>> Even then, Java has its preferred string representation nailed down >> pretty strongly: a hidden array of 16-bit Unicode code points, >> referenced by a descriptor object (the actual String), immutable. > > As already noted, that representation changed internally. Yes, Java 7 changed that to prevent memory leaks from happening. > And that change is actually relevant to this thread. I have been thinking about that argument and do not think it is valid in a Java context. Java programmers are used to unexpected performance changes, mostly due to changes in the garbage collector. It's also just a single function that changed behaviour, and definitely not the most common one even if it's pretty important. > The representation that _used_ to be used was > (char[] array, offset, length, hash) > Amongst other things, Not really... > this meant that taking a substring cost > O(1) time and O(1) space, because you just had to allocate and > initialise a new "descriptor object" sharing the underlying > array. "You" never had. This all happened behind the scenes, an implementation detail. > If you are working in a loop like > while (there is more input) { > read a chunk of input > split it into substrings > process some of the substrings > } > the pre-Java-1.7 representation is perfect. > If you *retain* some of the substrings, however, you > retain the whole chunk. That was easy to fix by > doing > retain(new String(someSubstring)) > instead of > retain(someSubstring) > but you had to *know* to do it. Okay, now i get the point. It's a pretty specialized kind of code though. Usually you don't care much about how much of some input you retain, because more than 50% of the input strings are retained anyway (if you even do retain strings). It did have the potential for a memory leak, but now we're getting into a pretty special corner case here. Plus it still does not change a bit about that String is the standard representation in Java, not StringBuffer nor byte[]. 
The programmer(!) isn't confused about selecting which one, and that was the point originally made. Diving into implementation details just to prove that wrong isn't going to change that the impression that Java's string representations are confusing was just the result of first impressions without actual practice. > (Another solution would be to have a smarter > garbage collector that knew about string sharing and > could compact strings. I wrote such a collector for > XPL many years ago. It's quite easy to do a stop-and- > copy garbage collector that does that. But that's not > the state of the art in Java garbage collection, Agreed. > and > I'm not sure how well string compaction would fit into > a more advanced collector.) Since Java's standard use case is long-running server programs, most if not all Java GCs are copying collectors nowadays. So, this would be a good fit in principle. It might have unfavorable trade-offs with other use cases though. It's quite possible that they implemented this, benchmarked it, and found they couldn't get it up to competitive speed. > The point is that there is no one-size-fits-all string > representation; being given only one forces you to either > write your own additional representation(s) or to use a > representation which is not really suited to your > particular purpose. I haven't read anybody complain about Java's string representation yet. That does not mean that nobody does (I'm pretty sure that there are complaints), it just doesn't concern people much in practice. Most Java programmers don't deal with this, they use a library like JAXML or Jackson for parsing (XML resp. JSON), get good-enough performance, and move on. Some people used to complain that 16-bit characters are a waste of memory, but even that isn't considered a big problem - essentially, the alternatives are out of sight and out of mind. (It would be interesting to see what happened in a language where the standard string representation is UTF-8. 
Given that Unicode requires a minimum of three bytes for a codepoint nowadays, the UTF-16 advantage of "character count = storage cell count" has vanished anyway.) From hvriedel at gmail.com Mon Oct 3 07:45:28 2016 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Mon, 03 Oct 2016 09:45:28 +0200 Subject: [Haskell-cafe] Batteries included In-Reply-To: (Tillmann Rendel's message of "Sat, 1 Oct 2016 03:05:23 +0200") References: <3AB97729-78B8-41A2-AAEC-2B8E6EC02793@aatal-apotheke.de> <2a0307cf-245e-8eaa-532b-702d1c540886@informatik.uni-tuebingen.de> Message-ID: <87h98tkh9j.fsf@gmail.com> On 2016-10-01 at 03:05:23 +0200, Tillmann Rendel wrote: [...] > I'm aware why `cabal install gtk3` can neither install > gtk2hs-buildtools nor install the C library If you're referring to `cabal` not being able to solve for build-tools and installing them: that's being addressed in 'cabal new-build', right now latest cabal new-build can already install the well-known build-tools (alex, happy, etc...). If you want cabal to install C libraries, you have to package them as Cabal packages first. I did that e.g. with http://hackage.haskell.org/package/lzma-clib for Windows' sake, but it's still an unsatisfying workaround for lack of a proper system package management for C libraries in Windows. I hope that something like Chocolatey will become the de-facto standard on Windows in the foreseeable future. From jo at durchholz.org Mon Oct 3 08:37:21 2016 From: jo at durchholz.org (Joachim Durchholz) Date: Mon, 3 Oct 2016 10:37:21 +0200 Subject: [Haskell-cafe] Batteries included In-Reply-To: <87h98tkh9j.fsf@gmail.com> References: <3AB97729-78B8-41A2-AAEC-2B8E6EC02793@aatal-apotheke.de> <2a0307cf-245e-8eaa-532b-702d1c540886@informatik.uni-tuebingen.de> <87h98tkh9j.fsf@gmail.com> Message-ID: <703a66ee-8f95-e965-a41f-8f60624be4e5@durchholz.org> Am 03.10.2016 um 09:45 schrieb Herbert Valerio Riedel: > On 2016-10-01 at 03:05:23 +0200, Tillmann Rendel wrote: > > [...] 
>
> lack of
> a proper system package management for C libraries in Windows. I hope
> that something like Chocolatey will become the de-facto standard on
> Windows in the foreseeable future.

I wouldn't hold my breath (yet). Packaging systems are barely mature enough to offer satisfactory support for simplified use cases like a single ABI (Linux package managers) or a single programming language (Cabal, Maven, SBT). Throwing in mixed-language support can be made to work easily (even Maven does this, in a half-assed way), but making it work well is still a "need more experience" topic.

Regards, Jo

From damian.nadales at gmail.com Tue Oct 4 09:34:03 2016 From: damian.nadales at gmail.com (Damian Nadales) Date: Tue, 4 Oct 2016 11:34:03 +0200 Subject: [Haskell-cafe] Speakers for a kickoff meetup on functional programming. Message-ID: Hi,

I'm planning on organizing a Meetup group about Functional Programming in Eindhoven, for next year. Currently there are very few people doing that in this area (according to the few jobs available here), and I would like to promote it. Next to that I would like to advertise Haskell, but I wanted to start with a broader scope. My current employer is willing to sponsor this, and hence I was looking into which speakers it might be possible to bring for the big opening event.

My first question is which speakers you know in the FP world that can clearly convey the ideas on "why functional programming matters for your company". The idea is to try to spark the interest of the different stakeholders in this city (and region).

Second, I've seen some material here and there on FP propaganda, but I would like to know which ones are your favorites.

Thanks a lot in advance, Damian.

From trupill at gmail.com Tue Oct 4 09:54:12 2016 From: trupill at gmail.com (Alejandro Serrano Mena) Date: Tue, 4 Oct 2016 11:54:12 +0200 Subject: [Haskell-cafe] Speakers for a kickoff meetup on functional programming.
In-Reply-To: References: Message-ID: Hi, I am currently working in Utrecht, assisting in teaching FP at the university. We have a big research group on FP here, indeed, so I guess somebody could speak in such an event. I don't know if that helps in some way... Alejandro 2016-10-04 11:34 GMT+02:00 Damian Nadales : > Hi, > > I'm planning on organizing a Meetup group about Functional Programming > in Eindhoven, for next year. Currently there is very few people doing > that in this area (according to the few jobs available here), and I > would like to promote it. Next to that I would like to advertise > Haskell, but I wanted to start with a more broader scope. My current > employer is willing to sponsor this, and hence I was looking which > speakers could be possible to bring for the big opening event. > > My first question is which speakers you know in the FP world that can > clearly convey the ideas on "why functional programming matter for > your company". The idea is to try to spark the interest of the > different stakeholders in this city (and region). > > Second, I've seen some material here and there on FP propaganda, but I > would like to know which ones are you favorites. > > Thanks a lot in advance, > Damian. > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From v.dijk.bas at gmail.com Tue Oct 4 11:26:00 2016 From: v.dijk.bas at gmail.com (Bas van Dijk) Date: Tue, 4 Oct 2016 13:26:00 +0200 Subject: [Haskell-cafe] Speakers for a kickoff meetup on functional programming. In-Reply-To: References: Message-ID: Hi Damian, I work for LumiGuide and we're based in Nijmegen (60km from Eindhoven). We've been using Haskell in production for over two years now. 
See my recent Google TechTalk on our use of functional programming to power our bicycle detection and guidance systems that we deploy all over the Netherlands:

https://www.youtube.com/watch?v=IKznN_TYjZk

I would be willing to give a talk on LumiGuide or any of the functional programming techniques that we're using (Haskell, Nix, GHCJS, haskell-opencv).

Cheers,

Bas

On 4 October 2016 at 11:34, Damian Nadales wrote:
> Hi,
>
> I'm planning on organizing a Meetup group about Functional Programming
> in Eindhoven, for next year. Currently there is very few people doing
> that in this area (according to the few jobs available here), and I
> would like to promote it. Next to that I would like to advertise
> Haskell, but I wanted to start with a more broader scope. My current
> employer is willing to sponsor this, and hence I was looking which
> speakers could be possible to bring for the big opening event.
>
> My first question is which speakers you know in the FP world that can
> clearly convey the ideas on "why functional programming matter for
> your company". The idea is to try to spark the interest of the
> different stakeholders in this city (and region).
>
> Second, I've seen some material here and there on FP propaganda, but I
> would like to know which ones are you favorites.
>
> Thanks a lot in advance,
> Damian.

From zoran.bosnjak at via.si Tue Oct 4 11:51:04 2016 From: zoran.bosnjak at via.si (Zoran Bosnjak) Date: Tue, 04 Oct 2016 13:51:04 +0200 Subject: [Haskell-cafe] memory leak when using "forever" Message-ID: Hello all, can you please explain why this simple program leaks memory? But if I replace loop2 with loop1 (that is: without using "forever"), then it does not leak.
Is this a problem in the "forever" implementation or am I misusing this function? In the sources of base: http://hackage.haskell.org/package/base-4.9.0.0/docs/src/Control.Monad.html#forever ... there is some mention of memory leak prevention, but it looks like something is not right in this case.

import Control.Concurrent
import Control.Monad
import Control.Monad.Trans
import Control.Monad.Trans.Reader
import Control.Monad.Trans.State

main :: IO ()
main = do
    --let loop1 = (liftIO $ threadDelay 1) >> loop1
    let loop2 = forever (liftIO $ threadDelay 1)

    _ <- runStateT (runReaderT loop2 'a') 'b'
    return ()

regards,
Zoran

From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Tue Oct 4 17:19:52 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Tue, 4 Oct 2016 18:19:52 +0100 Subject: [Haskell-cafe] memory leak when using "forever" In-Reply-To: References: Message-ID: <20161004171952.GB15239@weber> On Tue, Oct 04, 2016 at 01:51:04PM +0200, Zoran Bosnjak wrote:
> can you please explain why does this simple program leak memory.
>
> But, if I replace loop2 with loop1 (that is: without using "forever"),
> then it does not leak.
>
> import Control.Concurrent
> import Control.Monad
> import Control.Monad.Trans
> import Control.Monad.Trans.Reader
> import Control.Monad.Trans.State
>
> main :: IO ()
> main = do
> --let loop1 = (liftIO $ threadDelay 1) >> loop1
> let loop2 = forever (liftIO $ threadDelay 1)
>
> _ <- runStateT (runReaderT loop2 'a') 'b'
> return ()

My results below. Looks like there's something wrong with *> for ReaderT and StateT.
I get a stack overflow even with just

_ <- runStateT loop ()

and just

_ <- runReaderT loop ()

8<---
import Control.Concurrent
import Control.Monad
import Control.Monad.Trans
import Control.Monad.Trans.Reader
import Control.Monad.Trans.State
import Control.Applicative

-- Fine
forever0 a = let a' = a >> a' in a'

-- Stack overflow
forever1 a = let a' = a *> a' in a'

-- Fine
forever2 a = a >> forever2 a

-- Stack overflow
forever3 a = a *> forever3 a

main :: IO ()
main = do
    let loop = forever3 (liftIO $ return ())
    _ <- runStateT (runReaderT loop ()) ()
    return ()

From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Tue Oct 4 17:56:48 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Tue, 4 Oct 2016 18:56:48 +0100 Subject: [Haskell-cafe] memory leak when using "forever" In-Reply-To: <20161004171952.GB15239@weber> References: Message-ID: <20161004175648.GD15239@weber> On Tue, Oct 04, 2016 at 06:19:52PM +0100, Tom Ellis wrote:
> On Tue, Oct 04, 2016 at 01:51:04PM +0200, Zoran Bosnjak wrote:
> > can you please explain why does this simple program leak memory.
> >
> > But, if I replace loop2 with loop1 (that is: without using "forever"),
> > then it does not leak.
This seems to be how it executes

let loop = return () *> loop in loop in runReaderT loop ()
let loop = return () *> loop in loop in loop ()
let loop = (id <$ return ()) <*> loop in loop in loop ()
-- <*> for ReaderT in terms of <*> for m
let loop = \r -> (id <$ return ()) r <*> loop r in loop in loop ()
let loop = \r -> (id <$ return ()) r <*> loop r in loop in (id <$ return ()) () <*> loop ()
let loop = \r -> (id <$ return ()) r <*> loop r in loop in ((fmap . const) id (return ())) () (loop ())
let loop = \r -> (id <$ return ()) r <*> loop r in loop in (fmap (const id) (return ())) () <*> loop ()
-- fmap for ReaderT m in terms of fmap for m
let loop = \r -> (id <$ return ()) r <*> loop r in loop in (fmap (const id) . (return ())) () <*> loop ()
let loop = \r -> (id <$ return ()) r <*> loop r in loop in (\x -> fmap (const id) (return () x)) () <*> loop ()
let loop = \r -> (id <$ return ()) r <*> loop r in loop in fmap (const id) (return () ()) <*> loop ()
-- return for ReaderT in terms of return for m
let loop = \r -> (id <$ return ()) r <*> loop r in loop in fmap (const id) (return ()) <*> loop ()

which then in IO I think becomes

fmap (const id ()) (loop ())

so each time round the loop we add a redundant fmap (const id ()) on the front. Oh dear. Something needs fixing. I'm not sure what.

We don't see the space leak in the Identity Applicative because

fmap (const id) (return ()) <*> loop ()

is

const id (return ()) (loop ())

which evaluates as

(\x y -> x) id (return ()) (loop ())
id (loop ())
loop ()

Anyone who is at Haskell eXchange on Thursday and who is interested in working out how the above code executes can come to my talk!

Tom

From rahulmutt at gmail.com Wed Oct 5 04:31:05 2016 From: rahulmutt at gmail.com (Rahul Muttineni) Date: Wed, 5 Oct 2016 10:01:05 +0530 Subject: [Haskell-cafe] Representing Hierarchies with Typeclasses Message-ID: Hi cafe,

I want to embed Java class hierarchies in Haskell, but I am unable to find a clean way to do it.
Assume the class hierarchy C derives from B derives from A.

Partial Solution:
---
type family Super (a :: *) :: *

class Extends a b where
  supercast :: a -> b

instance {-# INCOHERENT #-} (Class a, Class c, Super a ~ b, Super b ~ c)
  => Extends a c where

data A
data B
data C

type instance Super C = B
type instance Super B = A
instance Extends C B
instance Extends B A
--

This is fine and is successfully able to infer Extends C A, but it's redundant and cannot infer Extends D A if we let Super D = C. Is there a way to only specify the parent-child relationship once and get GHC to infer the entire hierarchy without the use of UndecidableInstances? Other solutions I've tried which avoid redundancy cause an infinite loop in the context reduction step of GHC.

Thanks, Rahul Muttineni

-------------- next part -------------- An HTML attachment was scrubbed... URL: From rahulmutt at gmail.com Wed Oct 5 05:22:51 2016 From: rahulmutt at gmail.com (Rahul Muttineni) Date: Wed, 5 Oct 2016 10:52:51 +0530 Subject: [Haskell-cafe] Representing Hierarchies with Typeclasses In-Reply-To: References: Message-ID: Taking inspiration from http://stackoverflow.com/questions/24775080/how-to-establish-an-ordering-between-types-in-haskell, this comes out to be the solution:

---
type family Super (a :: *) :: *

class Extends a b where
  supercast :: a -> b

instance Extends a a
instance {-# INCOHERENT #-} (Super a ~ b, Extends b c) => Extends a c
---

This solution does rely on UndecidableInstances, but since it works, that seems acceptable.

On Wed, Oct 5, 2016 at 10:01 AM, Rahul Muttineni wrote:
> Hi cafe,
>
> I want to embed Java class hierarchies in Haskell, but I am unable to find
> a clean way to do it.
>
> Assume the class hierarchy C derives from B derives from A.
>
> Partial Solution:
> ---
> type family Super (a :: *) :: *
>
> class Extends a b where
>   supercast :: a -> b
>
> instance {-# INCOHERENT #-} (Class a, Class c, Super a ~ b, Super b ~ c)
> => Extends a c where
>
> data A
> data B
> data C
>
> type instance Super C = B
> type instance Super B = A
> instance Extends C B
> instance Extends B A
> --
>
> This is fine and is successfully able to infer Extends C A, but it's
> redundant and cannot infer that Extends D A if we let Super D = C. Is there
> a way to only specify the parent-child relationship once and get GHC to
> infer the entire hierarchy without the use of UndecidableInstances? Other
> solutions I've tried which avoid redundancy cause an infinite loop in the
> context reduction step of GHC.
>
> Thanks,
> Rahul Muttineni

-- Rahul Muttineni

-------------- next part -------------- An HTML attachment was scrubbed... URL: From damian.nadales at gmail.com Wed Oct 5 06:14:54 2016 From: damian.nadales at gmail.com (Damian Nadales) Date: Wed, 5 Oct 2016 08:14:54 +0200 Subject: [Haskell-cafe] Speakers for a kickoff meetup on functional programming. In-Reply-To: References: Message-ID: Hi guys,

Thanks a lot for your responses. Right now I'm trying to gather a list of speakers that could convey the importance of FP in industry to an audience not familiar with FP at all. I think Bas surely has a convincing story about it ;) What about the rest? (Jurriën, Alejandro)

After the first couple of meetings I guess we can move on to more specific topics, but given the low adoption of FP in this part of the Netherlands, it'd be nice to give the first talks an introductory flavor. It is also very important that in these talks newcomers to FP can get a feeling for whether FP could help with their "software crisis": reducing costs, building more maintainable, more efficient software, etc. (also managing the expectations, mentioning aspects where FP does not fare that well, of course).
As for the when, my idea is to start next year (currently I'm programming in Scala for food, and I'm devoting a lot of my free time to study Haskell and build a public portfolio that could allow me to make a living out of programming in Haskell). I'll keep you posted. Thanks once more! Damian. On Tue, Oct 4, 2016 at 1:26 PM, Bas van Dijk wrote: > Hi Damian, > > I work for LumiGuide and we're based in Nijmegen (60km from > Eindhoven). We've been using Haskell in production for over two years > now. See my recent Google TechTalk on our use of functional > programming to power our bicycle detection and guidance systems that > we deploy all over the Netherlands: > > https://www.youtube.com/watch?v=IKznN_TYjZk > > I would be willing to give a talk on LumiGuide or any of the > functional programming techniques that we're using (Haskell, Nix, > GHCJS, haskell-opencv). > > Cheers, > > Bas > > On 4 October 2016 at 11:34, Damian Nadales wrote: >> Hi, >> >> I'm planning on organizing a Meetup group about Functional Programming >> in Eindhoven, for next year. Currently there is very few people doing >> that in this area (according to the few jobs available here), and I >> would like to promote it. Next to that I would like to advertise >> Haskell, but I wanted to start with a more broader scope. My current >> employer is willing to sponsor this, and hence I was looking which >> speakers could be possible to bring for the big opening event. >> >> My first question is which speakers you know in the FP world that can >> clearly convey the ideas on "why functional programming matter for >> your company". The idea is to try to spark the interest of the >> different stakeholders in this city (and region). >> >> Second, I've seen some material here and there on FP propaganda, but I >> would like to know which ones are you favorites. >> >> Thanks a lot in advance, >> Damian. 
>> _______________________________________________
>> Haskell-Cafe mailing list
>> To (un)subscribe, modify options or view archives go to:
>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
>> Only members subscribed via the mailman list are allowed to post.

From jonathangfischoff at gmail.com Wed Oct 5 10:59:22 2016 From: jonathangfischoff at gmail.com (Jonathan Fischoff) Date: Wed, 5 Oct 2016 06:59:22 -0400 Subject: [Haskell-cafe] [ANN] wrecker - An HTTP Benchmarker Message-ID: I am happy to announce the release of 'wrecker-0.1.3.2', a library for HTTP benchmarks. 'wrecker' makes it easy to benchmark complex API interactions by providing a 'wreq'-like interface for creating suites of HTTP benchmarks.

For more detailed information, tutorials, and examples, check out the README.md: https://github.com/skedgeme/wrecker/blob/master/README.md

Additionally there is documentation on Hackage: https://hackage.haskell.org/package/wrecker.

Cheers, Jonathan Fischoff

-------------- next part -------------- An HTML attachment was scrubbed... URL: From hon.lianhung at gmail.com Wed Oct 5 12:32:50 2016 From: hon.lianhung at gmail.com (Lian Hung Hon) Date: Wed, 5 Oct 2016 20:32:50 +0800 Subject: [Haskell-cafe] Recursive attoparsec Message-ID: Dear cafe,

Given

data Expression = ExpToken String | ExpAnd Expression Expression

how does one write an attoparsec parser that parses this example: "a" and "b" and "c" into ExpAnd (ExpToken "a") (ExpAnd (ExpToken "b") (ExpToken "c"))?

Regards, Hon

-------------- next part -------------- An HTML attachment was scrubbed... URL: From defigueiredo at ucdavis.edu Wed Oct 5 13:55:55 2016 From: defigueiredo at ucdavis.edu (Dimitri DeFigueiredo) Date: Wed, 5 Oct 2016 07:55:55 -0600 Subject: [Haskell-cafe] Representing Hierarchies with Typeclasses Message-ID: <819cb095-ae49-73f6-2acf-3d776e98e186@ucdavis.edu> I have looked at how to embed object-oriented systems into Haskell for practical reasons.
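Returning to the recursive-attoparsec question above: the usual answer is a right-recursive grammar, expr ::= token ("and" expr)?, whose recursive arm produces exactly the right-nested ExpAnd chain asked for. The sketch below uses ReadP from base instead of attoparsec, so it runs with no extra packages (the combinator shape carries over to attoparsec directly); the quoting and whitespace details are assumptions about the input format.

```haskell
import Text.ParserCombinators.ReadP
import Data.Char (isAlphaNum)

data Expression = ExpToken String | ExpAnd Expression Expression
  deriving (Eq, Show)

-- A double-quoted token such as "a" (contents assumed alphanumeric)
token' :: ReadP Expression
token' = ExpToken <$> (char '"' *> munch1 isAlphaNum <* char '"')

-- expr ::= token ("and" expr)?  -- right recursion gives right-nested ExpAnd
expr :: ReadP Expression
expr = do
    t <- token'
    option t (ExpAnd t <$> (skipSpaces *> string "and" *> skipSpaces *> expr))

-- Keep only parses that consume the entire input
parseExp :: String -> Maybe Expression
parseExp s = case [e | (e, "") <- readP_to_S expr s] of
    (e:_) -> Just e
    _     -> Nothing

main :: IO ()
main = print (parseExp "\"a\" and \"b\" and \"c\"")
-- prints: Just (ExpAnd (ExpToken "a") (ExpAnd (ExpToken "b") (ExpToken "c")))
```

Because the recursion sits on the right of the production, no left-recursion trickery is needed; the same `token' >>= \t -> option t ...` shape works unchanged in attoparsec with its `Data.Attoparsec.ByteString.Char8` combinators.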
By far the best resource I found was Oleg Kiselyov and Ralph Lämmel's work: https://arxiv.org/pdf/cs/0509027.pdf It shows the multiple possibilities.

I have used the idea of having a super class be a polymorphic type with a tail:

data Point s = Pt { x :: Int, y :: Int, tail :: s }

and then specializing the parameter s into your derived class:

type Radius = Int
type Circle = Point Radius

many times. But *be warned*, I try to avoid object hierarchies like the plague! They lead to code that is not reusable. You may want to consider other, simpler possibilities. Here is one of my earlier experiments that may be useful to you (I no longer hold the views expressed there):

https://github.com/dimitri-xyz/inheritance-in-haskell

Cheers,

Dimitri

-- 2E45 D376 A744 C671 5100 A261 210B 8461 0FB0 CA1F

From spam at scientician.net Wed Oct 5 14:12:08 2016 From: spam at scientician.net (Bardur Arantsson) Date: Wed, 5 Oct 2016 16:12:08 +0200 Subject: [Haskell-cafe] Representing Hierarchies with Typeclasses In-Reply-To: References: Message-ID: On 2016-10-05 06:31, Rahul Muttineni wrote:
> Hi cafe,
>
> I want to embed Java class hierarchies in Haskell, but I am unable to
> find a clean way to do it.
>

Is this purely an academic exercise, or are you trying to solve some higher-level ("real") problem?

If it's the latter then it might be a better idea to describe the overall problem you're trying to solve. (I.e. this may be an instance of the "XY Problem".)

Regards,

From rahulmutt at gmail.com Wed Oct 5 16:05:20 2016 From: rahulmutt at gmail.com (Rahul Muttineni) Date: Wed, 5 Oct 2016 21:35:20 +0530 Subject: [Haskell-cafe] Representing Hierarchies with Typeclasses In-Reply-To: References: Message-ID: Hi Bardur,

The hierarchy will be used for FFI in GHCVM - my effort at bringing Haskell to the JVM.
My goal has been to make FFI Haskell code look pretty close to Java code, thereby making it less intimidating for newcomers coming from Java, while demonstrating that monads are powerful enough to embed a Java-like language inside of Haskell without using types that would be confusing to a newcomer - like the ST monad and its use of rank-2 types. Currently, if you want to store raw Java objects inside of Haskell in GHCVM, you do:

```haskell
data {-# CLASS "java.lang.String" #-} JString = JString (Object# JString)
```

Note that Object# (c :: *) :: # is a new primitive type constructor I introduced to allow raw Java objects to be stored in data types. It's only type-safe to do this if the underlying object is immutable or locally immutable (doesn't change much during the time of use). The above type definition, while confusing, is succinct and kills two birds with one stone: 1) It allows JString to be used as a 'tag type' that stores metadata on the class of the underlying object it stores. 2) It allows JString to be used as a boxed representation of a raw Java object, just as Int is a boxed version of Int#. There's also the Java monad:

```haskell
newtype Java c a = Java (Object# c -> (# Object# c, a #))
```

The c is the tag type - essentially it determines the underlying representation of the threaded 'this' pointer. This is a special monad recognized by GHCVM. The final goal is to be able to import methods from Java without doing manual casting at the Haskell level - it uses the Extends typeclass to handle that for you. For example, assume you need to import the java.lang.Object.toString() method from Java. Obviously, you wouldn't want to re-import this method for every possible Java class you ever interact with.
Instead, you would import it like so:

```haskell
data {-# CLASS "java.lang.Object" #-} Object = Object (Object# Object)

type instance Super JString = Object

foreign import java unsafe "toString"
  toJString :: Extends a Object => Java a JString

getStringFromString :: Java JString JString
getStringFromString = toJString

getStringFromObject :: Java Object JString
getStringFromObject = toJString
```

So this allows for reuse of a single foreign import in multiple particular cases as shown above. To see a more "real-world" example of this in action, check out the example JavaFX project that compiles with GHCVM [1]. I'm open to suggestions on better ways to accomplish the same goal. Thanks, Rahul [1] https://github.com/rahulmutt/ghcvm-javafx On Wed, Oct 5, 2016 at 7:42 PM, Bardur Arantsson wrote: > On 2016-10-05 06:31, Rahul Muttineni wrote: > > Hi cafe, > > > > I want to embed Java class hierarchies in Haskell, but I am unable to > > find a clean way to do it. > > > > Is this purely an academic exercise, or are you trying to solve some > higher-level ("real") problem? > > If it's the latter then it might be a better idea to describe the > overall problem you're trying to solve. (I.e. this may be an instance of > the "XY Problem".) > > Regards, > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -- Rahul Muttineni -------------- next part -------------- An HTML attachment was scrubbed...
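[Editor's illustration] Outside of GHCVM, the shape of the Extends trick Rahul describes can be imitated in standard GHC with an ordinary multi-parameter type class. This is only a hedged sketch: Object, JString, and upcast below are made-up stand-ins, not GHCVM's real primitives or its compiler-supported Super/Extends machinery.

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleContexts #-}

-- Hypothetical stand-ins for a Java-like class hierarchy.
newtype Object  = Object String
newtype JString = JString String

-- 'Extends a b' means a value of type a can be upcast to type b.
class Extends a b where
  upcast :: a -> b

instance Extends Object Object where
  upcast = id

instance Extends JString Object where
  upcast (JString s) = Object s

-- One function written against Object, reusable at any declared subtype,
-- mirroring how a single toJString import serves many classes.
toStr :: Extends a Object => a -> String
toStr a = case upcast a of Object s -> s

main :: IO ()
main = putStrLn (toStr (JString "hello"))  -- prints: hello
```

The point mirrors Rahul's toJString: the constraint lets a single signature serve every type declared to extend Object, at the cost of writing one Extends instance per subtype relation.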
URL: From rahulmutt at gmail.com Wed Oct 5 16:07:25 2016 From: rahulmutt at gmail.com (Rahul Muttineni) Date: Wed, 5 Oct 2016 21:37:25 +0530 Subject: [Haskell-cafe] Representing Hierarchies with Typeclasses In-Reply-To: <819cb095-ae49-73f6-2acf-3d776e98e186@ucdavis.edu> References: <819cb095-ae49-73f6-2acf-3d776e98e186@ucdavis.edu> Message-ID: Hi Dimitri, Thanks for the link! I'll take a look. The goal of doing this was to make foreign Java imports reusable for subclasses in GHCVM. Thanks, Rahul On Wed, Oct 5, 2016 at 7:25 PM, Dimitri DeFigueiredo < defigueiredo at ucdavis.edu> wrote: > I have looked at how to embedded object oriented systems into haskell > for practical reasons. > By far the best resource I found was Oleg Kiselyov and Ralph Lämmel's work: > > https://arxiv.org/pdf/cs/0509027.pdf > > It shows the multiple possibilities. > I have used the idea of having a super class be a polymorphic type with > a tail. > > data Point s = Pt { x :: Int, y:: Int, tail :: s} > > And then specializing the parameter s into your derived class > > type Radius = Int > type Circle = Point Radius > > many times. > > But *be warned*, I try to avoid object hierarchies like the plague! They > lead to code that is not reusable. > You may want to consider other simpler possibilities. Here is one of my > earlier experiments that may be useful to you (I no longer have the > views expressed there). > > https://github.com/dimitri-xyz/inheritance-in-haskell > > Cheers, > > Dimitri > > -- > 2E45 D376 A744 C671 5100 A261 210B 8461 0FB0 CA1F > > > -- Rahul Muttineni -------------- next part -------------- An HTML attachment was scrubbed... URL: From trebla at vex.net Wed Oct 5 16:15:07 2016 From: trebla at vex.net (Albert Y. C. 
Lai) Date: Wed, 5 Oct 2016 12:15:07 -0400 Subject: [Haskell-cafe] Recursive attoparsec In-Reply-To: References: Message-ID: On 2016-10-05 08:32 AM, Lian Hung Hon wrote: > Given > > data Expression = ExpToken String | ExpAnd Expression Expression > > How to write an attoparsec parser that parses this example: > > "a" and "b" and "c" > > into > > ExpAnd (ExpToken "a") (ExpAnd (ExpToken "b") (ExpToken "c"))? Consider using sepBy1 to obtain ["a", "b", "c"] first. Then you're just a foldr away. From mblazevic at stilo.com Wed Oct 5 16:57:18 2016 From: mblazevic at stilo.com (Mario Blažević) Date: Wed, 5 Oct 2016 12:57:18 -0400 Subject: [Haskell-cafe] Recursive attoparsec In-Reply-To: References: Message-ID: On 2016-10-05 08:32 AM, Lian Hung Hon wrote: > Dear cafe, > > Given > > data Expression = ExpToken String | ExpAnd Expression Expression > > How to write an attoparsec parser that parses this example: > > "a" and "b" and "c" > > into > > ExpAnd (ExpToken "a") (ExpAnd (ExpToken "b") (ExpToken "c"))? > You can use recursion at the Haskell level:

expParser = do
    left <- ExpToken <$> stringLiteral
    (string "|" *> (ExpAnd left <$> expParser) <|> pure left)

From leo at woerteler.de Wed Oct 5 17:05:47 2016 From: leo at woerteler.de (Leonard Wörteler) Date: Wed, 5 Oct 2016 19:05:47 +0200 Subject: [Haskell-cafe] Recursive attoparsec In-Reply-To: References: Message-ID: <7f634737-be05-4403-89b4-105109be39f9@woerteler.de> Am 05.10.2016 um 14:32 schrieb Lian Hung Hon: > Given > > data Expression = ExpToken String | ExpAnd Expression Expression > > How to write an attoparsec parser that parses this example: > > "a" and "b" and "c" > > into > > ExpAnd (ExpToken "a") (ExpAnd (ExpToken "b") (ExpToken "c"))?
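[Editor's illustration] Albert's sepBy1-then-foldr recipe above can be checked in isolation. Here is a base-only sketch in which the token list is assumed to have already been produced by something like `sepBy1 token (string "and")`; only the fold step is shown.

```haskell
-- The Expression type from the original question.
data Expression = ExpToken String | ExpAnd Expression Expression
  deriving (Eq, Show)

-- Right-fold the tokens so "and" nests to the right, matching the
-- desired ExpAnd (ExpToken "a") (ExpAnd ...) shape.
combine :: [String] -> Expression
combine = foldr1 ExpAnd . map ExpToken

main :: IO ()
main = print (combine ["a", "b", "c"])
-- prints: ExpAnd (ExpToken "a") (ExpAnd (ExpToken "b") (ExpToken "c"))
```

`foldr1` is partial on the empty list, but that is safe here because sepBy1 guarantees at least one token.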
You can re-implement `chainr1` from Parsec as follows:

chainr1 :: Parser a -> Parser (a -> a -> a) -> Parser a
chainr1 p op = scan
  where
    scan = p >>= rest
    rest x = op <*> pure x <*> scan <|> return x

Then you just plug in your parsers for variables and the `and` operator. Working example attached. -- Leo -------------- next part --------------

{-# LANGUAGE OverloadedStrings #-}

import Data.Attoparsec.ByteString.Char8
  ( Parser, parseOnly, char8, string, anyChar, skipSpace, manyTill, endOfInput )
import Control.Applicative ((<|>))

data Expression = ExpToken String | ExpAnd Expression Expression
  deriving (Eq, Show)

main :: IO ()
main = print $ parseOnly expression "\"a\" and \"b\" and \"c\""

expression :: Parser Expression
expression = chainr1 varP andP <* skipSpace <* endOfInput

chainr1 :: Parser a -> Parser (a -> a -> a) -> Parser a
chainr1 p op = scan
  where
    scan = p >>= rest
    rest x = op <*> pure x <*> scan <|> return x

varP :: Parser Expression
varP = ExpToken <$> (skipSpace *> char8 '"' *> manyTill anyChar (char8 '"'))

andP :: Parser (Expression -> Expression -> Expression)
andP = skipSpace *> string "and" *> pure ExpAnd

From defigueiredo at ucdavis.edu Wed Oct 5 18:26:51 2016 From: defigueiredo at ucdavis.edu (Dimitri DeFigueiredo) Date: Wed, 5 Oct 2016 12:26:51 -0600 Subject: [Haskell-cafe] Why did the number of LOC in GHC nearly triple in 2014!?
In-Reply-To: References: Message-ID: If I had to guess, I'd say that was likely around when Austin was playing with the way we handle submodules. There was a bunch of churn on that front in 2013-2014 or so. -Edward On Wed, Oct 5, 2016 at 2:26 PM, Dimitri DeFigueiredo < defigueiredo at ucdavis.edu> wrote: > Hello folks, > > I was looking at this: > > https://www.openhub.net/p/ghc/analyses/latest/languages_summary > > And was really surprised by the graph. Is this correct!? > Did the complexity of GHC's codebase significantly increase in 2014? > Can anybody shed a light on why that is? > > Cheers, > > Dimitri > > -- > 2E45 D376 A744 C671 5100 A261 210B 8461 0FB0 CA1F > > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lukewm at riseup.net Wed Oct 5 22:39:19 2016 From: lukewm at riseup.net (Luke Murphy) Date: Thu, 6 Oct 2016 00:39:19 +0200 Subject: [Haskell-cafe] Contribute to Stack in Llubjana 14th October In-Reply-To: <9f826552-37d0-8b43-268f-3bd74fd108d6@riseup.net> References: <9f826552-37d0-8b43-268f-3bd74fd108d6@riseup.net> Message-ID: Hi all, I am co-organizing a Haskell event in Llubjana and trying to get the word out. We'd like to come together and work on contributing to Stack together. We'll specifically tackle the issues labeled `newcomer`. It's on the 14th of October 2016, in Llubjana, Slovenia. You can find details over at Github or Meetup: - https://github.com/haskellpeople/stack - https://www.meetup.com/Ljubljana-Lambdas/events/234646552/ This may be last minute (it is), but if anyone is close, please feel free to join us. 
Best, Luke From lukewm at riseup.net Wed Oct 5 22:48:36 2016 From: lukewm at riseup.net (Luke Murphy) Date: Thu, 6 Oct 2016 00:48:36 +0200 Subject: [Haskell-cafe] Contribute to Stack in Llubjana 14th October In-Reply-To: References: <9f826552-37d0-8b43-268f-3bd74fd108d6@riseup.net> Message-ID: <4b395cf1-a603-18a3-bd69-c9e2ed905ab2@riseup.net> Oh darn, I spelt the city wrong. That would be Ljubljana. Apologies ;) On 06.10.2016 00:39, Luke Murphy wrote: > Hi all, > > I am co-organizing a Haskell event in Llubjana and trying to get the > word out. We'd like to come together and work on contributing to Stack > together. > We'll specifically tackle the issues labeled `newcomer`. > > It's on the 14th of October 2016, in Llubjana, Slovenia. > > You can find details over at Github or Meetup: > > - https://github.com/haskellpeople/stack > - https://www.meetup.com/Ljubljana-Lambdas/events/234646552/ > > This may be last minute (it is), but if anyone is close, please feel > free to join us. > > Best, > > Luke > > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. From kiwamu at debian.or.jp Thu Oct 6 02:46:41 2016 From: kiwamu at debian.or.jp (Kiwamu Okabe) Date: Thu, 6 Oct 2016 11:46:41 +0900 Subject: [Haskell-cafe] History of Ajhc Haskell compiler Message-ID: Hi all, The Ajhc Haskell compiler [0] project has been shut down. However, we would like to share the knowledge we gained. Please read the Metasepi Foundation report in Japanese [1], which includes detailed technical information on Ajhc's runtime and GC. However, it may be hard to read, because it is written in Japanese. If you would like to translate it automatically with Google Translate, the source code of the document [2] may be useful. Contributions to translate it into English are welcome.
Note that this document is a technical report written in the style of a fiction novel. It contains many "tentacle" words, because the novel part is a parody of Squid Girl [3]. Please relax and read it as you would a fiction novel. Best regards, [0] http://ajhc.metasepi.org/ [1] http://metasepi.org/doc/c84-metasepi-foundation-ja.pdf [2] https://gist.github.com/master-q/733dfa2cf008261a3849564c38a7933b [3] https://en.wikipedia.org/wiki/Squid_Girl -- Kiwamu Okabe at METASEPI DESIGN From mgsloan at gmail.com Thu Oct 6 03:21:29 2016 From: mgsloan at gmail.com (Michael Sloan) Date: Wed, 5 Oct 2016 20:21:29 -0700 Subject: [Haskell-cafe] Contribute to Stack in Llubjana 14th October In-Reply-To: References: <9f826552-37d0-8b43-268f-3bd74fd108d6@riseup.net> Message-ID: Awesome! Looking forward to collaborating with y'all. I think I should be able to be available to help with your stack hacking efforts. I suppose something like 9am to 5pm on the 14th? So for me in Seattle, that is midnight into the AM. As a sometimes insomniac, sounds like a fun excuse to stay up late! Also, I've wanted to go to Slovenia ever since I spent some time in Croatia - met some excellent Slovenians at a music festival there! Thanks for organizing an event like this - exciting stuff! -Michael On Wed, Oct 5, 2016 at 3:39 PM, Luke Murphy wrote: > Hi all, > > I am co-organizing a Haskell event in Llubjana and trying to get the > word out. We'd like to come together and work on contributing to Stack > together. > We'll specifically tackle the issues labeled `newcomer`. > > It's on the 14th of October 2016, in Llubjana, Slovenia. > > You can find details over at Github or Meetup: > > - https://github.com/haskellpeople/stack > - https://www.meetup.com/Ljubljana-Lambdas/events/234646552/ > > This may be last minute (it is), but if anyone is close, please feel > free to join us.
> > Best, > > Luke > > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. From frantisek at farka.eu Thu Oct 6 10:17:57 2016 From: frantisek at farka.eu (=?utf-8?Q?Franti=C5=A1ek?= Farka) Date: Thu, 6 Oct 2016 11:17:57 +0100 Subject: [Haskell-cafe] Reminder: Workshop on Coalgebra, Horn Clause Logic, and Types Message-ID: <20161006101757.GA1938@farka.eu> Reminder: Call for Papers, Presentations and Participation Workshop on Coalgebra, Horn Clause Logic Programming and Types 28-29 November 2016, Edinburgh, UK https://ff32.host.cs.st-andrews.ac.uk/coalpty16/ Abstract submission: 15 October, 2016 Registration deadline: 1 November, 2016 ==================================================== Objectives and scope ------------------- The workshop marks the end of the EPSRC Grant Coalgebraic Logic Programming for Type Inference, by K. Komendantskaya and J. Power and will consist of two parts: Part 1 - Semantics: Lawvere theories and Coalgebra in Logic and Functional Programming Part 2 - Programming languages: Horn Clause Logic for Type Inference in Functional Languages and Beyond We invite all colleagues working in related areas to present and share their results. We envisage a friendly meeting with many stimulating discussions, and therefore welcome presentations of already published research as well as novel results. Authors of original contributions will be invited to submit their papers to EPTCS post-proceedings. We especially encourage early career researchers to present and participate. Venue ----- The workshop will be held at the International Center for Mathematical Sciences, in Edinburgh city center, just 2 minutes walk from the Informatics Forum. 
Invited speakers and tutorials ------------------------------ Theory: * Logic programming: laxness and saturation John Power, University of Bath, UK * Comodels and interaction Tarmo Uustalu, Tallinn University of Technology, Estonia. * Refinement Types and Higher-Order Constrained Horn Clauses Steven Ramsay and Luke Ong, University of Oxford, UK Applications and Implementations: * Abstract compilation for type analysis of OO languages Davide Ancona, University of Genoa, Italy * Classes for the masses Claudio Russo, Microsoft Research Cambridge, UK * Relational specification of type systems using Logic Programming Ki Yung Ahn, Nanyang Technological University, Singapore Proceedings publication ----------------------- Presentations: We invite submission of 2-page extended abstracts via Easychair, by the 15th October 2016. These will be subject to light review process. Preliminary proceedings will be made available at the conference in electronic form. Post-proceedings: Authors presenting original work will be invited to submit full papers to the post-proceedings of the workshop. The post-proceedings volume will be published in Electronic Proceedings in Theoretical Computer Science and peer-reviewed according to EPTCS standards by the PC members. 
Important dates --------------- Extended Abstract Submission: 15 October, 2016 Author notification: 25 October, 2016 Workshop registration: 1 November, 2016 Workshop: 28–29 November, 2016 EPTCS proceedings invitations: 15 December, 2016 EPTCS final version submission: 30 January, 2017 Programme committee ------------------- Ki Yung Ahn, Nanyang Technological University, Singapore Davide Ancona, University of Genoa, Italy Filippo Bonchi, CNRS, ENS de Lyon, France Iavor Diatchki, Galois, Inc, USA Peng Fu, Heriot-Watt University, Edinburgh, UK Neil Ghani, University of Strathclyde, UK Patricia Johann, Appalachian State University, USA Ekaterina Komendantskaya, Heriot-Watt University, Edinburgh, UK Clemens Kupke, University of Strathclyde, UK J. Garrett Morris, University of Edinburgh, UK Fredrik Nordvall Forsberg, University of Strathclyde, UK John Power, University of Bath, UK Claudio Russo, Microsoft Research Cambridge, UK Martin Schmidt, DHBW Stuttgart and Osnabrück University , Germany Stephan Schulz, DHBW Stuttgart, Germany Aaron Stump, The University of Iowa, USA Niki Vazou, University of California, San Diego, USA Joe Wells, Heriot-Watt University, Edinburgh, UK Fabio Zanasi, Radboud University of Nijmegen, The Netherlands Workshop chairs -------- Ekaterina Komendantskaya, Heriot-Watt University, UK John Power, University of Bath, UK Publicity chair --------------- František Farka, University of Dundee, UK and University of St Andrews, UK -- František Farka From hon.lianhung at gmail.com Thu Oct 6 13:39:35 2016 From: hon.lianhung at gmail.com (Lian Hung Hon) Date: Thu, 6 Oct 2016 21:39:35 +0800 Subject: [Haskell-cafe] Recursive attoparsec In-Reply-To: <7f634737-be05-4403-89b4-105109be39f9@woerteler.de> References: <7f634737-be05-4403-89b4-105109be39f9@woerteler.de> Message-ID: Dear guys, Thanks! chainr and chainl are exactly what I was looking for. 
I did something along the lines of andParser = ExpAnd <$> ((stringParser <|> andParser) <* "and") <*> (stringParser <|> andParser) I can see now, why that wouldn't work! Regards, Hon On 6 October 2016 at 01:05, Leonard Wörteler wrote: > > > Am 05.10.2016 um 14:32 schrieb Lian Hung Hon: > >> Given >> >> data Expression = ExpToken String | ExpAnd Expression Expression >> >> How to write an attoparsec parser that parses this example: >> >> "a" and "b" and "c" >> >> into >> >> ExpAnd (ExpToken "a") (ExpAnd (ExpToken "b") (ExpToken "c"))? >> > > You can re-implement `chainr1` from Parsec as follows: > > chainr1 :: Parser a -> Parser (a -> a -> a) -> Parser a > chainr1 p op = scan > where > scan = p >>= rest > rest x = op <*> pure x <*> scan <|> return x > > Then you just plug in your parsers for variables and the `and` operator. > Working example attached. > > -- Leo > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ariep at xs4all.nl Thu Oct 6 16:27:51 2016 From: ariep at xs4all.nl (Arie Peterson) Date: Thu, 06 Oct 2016 18:27:51 +0200 Subject: [Haskell-cafe] Cross-compiling network package Message-ID: <39889377.Ov5vZAKjDx@pook> Can the network package be cross-compiled? I'm running into problems because the preprocessor cannot handle some of the directives that network uses when it's working in cross-compilation mode. Specifically, this part gives trouble: > #if __GLASGOW_HASKELL__ < 800 > #let alignment t = "%lu", (unsigned long)offsetof(struct {char x__; t (y__); }, y__) > #endif > > instance Storable In6Addr where > > sizeOf _ = #const sizeof(struct in6_addr) > > alignment _ = #alignment struct in6_addr In my first attempt, with ghc-7.10, this failed because the "let" directive is not supported when cross-compiling. Looking at the above code I thought I'd try ghc-8.0, but as it turns out the "alignment" directive is also not supported when cross-compiling... Any ideas? 
I'm compiling from x86_64 to arm-linux-gnueabihf BTW, to run my Haskell program on the Raspberry Pi 3. From damian.nadales at gmail.com Fri Oct 7 10:56:17 2016 From: damian.nadales at gmail.com (Damian Nadales) Date: Fri, 7 Oct 2016 12:56:17 +0200 Subject: [Haskell-cafe] [ANN] wrecker - An HTTP Benchmarker In-Reply-To: References: Message-ID: We're using Gatling at work http://gatling.io/docs/2.2.2/ (Scala based), and I'm looking for similar tools in Haskell (if they don't exist is a nice project to work on I find). Do you have any idea why `wreck` produces inflated results? On Wed, Oct 5, 2016 at 12:59 PM, Jonathan Fischoff wrote: > I am happy to announce the release of 'wrecker-0.1.3.2', a library for HTTP > benchmarks. > > 'wrecker' makes it easy to benchmark complex API interactions, by providing > a 'wreq' like interface for creating suites of HTTP benchmarks. > > For more detailed information, tutorials, and examples checkout the > README.md > > https://github.com/skedgeme/wrecker/blob/master/README.md > > Additionally there is documentation on Hackage: > https://hackage.haskell.org/package/wrecker. > > Cheers, > Jonathan Fischoff > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post.
`wrecker` and `wreck` are fine in the 10-100 range for typical requests (30 - 300 ms) `wrecker` can and will get better (possibly with uhttpc under the hood), but if you have to sequence API together with processing between to profile your API, `wrecker` is a great option because there is nothing as accurate (that I know of) that can do that well. On Fri, Oct 7, 2016 at 6:56 AM, Damian Nadales wrote: > We're using Gatling at work http://gatling.io/docs/2.2.2/ (Scala > based), and I'm looking for similar tools in Haskell (if they don't > exist is a nice project to work on I find). > > Do you have any idea why `wreck` produces inflated results? > > On Wed, Oct 5, 2016 at 12:59 PM, Jonathan Fischoff > wrote: > > I am happy to announce the release of 'wrecker-0.1.3.2', a library for > HTTP > > benchmarks. > > > > 'wrecker' makes it easy to benchmark complex API interactions, by > providing > > a 'wreq' like interface for creating suites of HTTP benchmarks. > > > > For more detailed information, tutorials, and examples checkout the > > README.md > > > > https://github.com/skedgeme/wrecker/blob/master/README.md > > > > Additionally there is documentation on Hackage: > > https://hackage.haskell.org/package/wrecker. > > > > Cheers, > > Jonathan Fischoff > > > > _______________________________________________ > > Haskell-Cafe mailing list > > To (un)subscribe, modify options or view archives go to: > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > Only members subscribed via the mailman list are allowed to post. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From damian.nadales at gmail.com Fri Oct 7 16:12:44 2016 From: damian.nadales at gmail.com (Damian Nadales) Date: Fri, 7 Oct 2016 18:12:44 +0200 Subject: [Haskell-cafe] [ANN] wrecker - An HTTP Benchmarker In-Reply-To: References: Message-ID: Good that the bottle neck is clearly identified. 
On a related topic, we're starting to benchmark our web services, and we're using http://gatling.io as benchmarking tool. It is Scala based and I'd like to write the benchmarking program in Haskell instead. However I need a tool that can record the duration of each requests, and maybe aggregate them and export this to some format (I don't care about the plots right now). I don't know of wrecker has this capabilities or maybe there are other tools that the Haskell community use for this purpose. Any hints? Op 7 okt. 2016 16:09 schreef "Jonathan Fischoff" < jonathangfischoff at gmail.com>: > hvr's comment on reddit sums up the issues well: https://www.reddit.com/ > r/haskell/comments/55ywoy/ann_wrecker_an_http_benchmarking_library/d8f0svb > > `http-client` is not designed for this use case. It's header parsing is > too slow specifically. > > `wrecker` and `wreck` are fine in the 10-100 range for typical requests > (30 - 300 ms) > > `wrecker` can and will get better (possibly with uhttpc under the hood), > but if you have to sequence API together with processing between to profile > your API, `wrecker` is a great option because there is nothing as accurate > (that I know of) that can do that well. > > > > On Fri, Oct 7, 2016 at 6:56 AM, Damian Nadales > wrote: > >> We're using Gatling at work http://gatling.io/docs/2.2.2/ (Scala >> based), and I'm looking for similar tools in Haskell (if they don't >> exist is a nice project to work on I find). >> >> Do you have any idea why `wreck` produces inflated results? >> >> On Wed, Oct 5, 2016 at 12:59 PM, Jonathan Fischoff >> wrote: >> > I am happy to announce the release of 'wrecker-0.1.3.2', a library for >> HTTP >> > benchmarks. >> > >> > 'wrecker' makes it easy to benchmark complex API interactions, by >> providing >> > a 'wreq' like interface for creating suites of HTTP benchmarks. 
>> > >> > For more detailed information, tutorials, and examples checkout the >> > README.md >> > >> > https://github.com/skedgeme/wrecker/blob/master/README.md >> > >> > Additionally there is documentation on Hackage: >> > https://hackage.haskell.org/package/wrecker. >> > >> > Cheers, >> > Jonathan Fischoff >> > >> > _______________________________________________ >> > Haskell-Cafe mailing list >> > To (un)subscribe, modify options or view archives go to: >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > Only members subscribed via the mailman list are allowed to post. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From winterkoninkje at gmail.com Sun Oct 9 05:56:55 2016 From: winterkoninkje at gmail.com (wren romano) Date: Sat, 8 Oct 2016 22:56:55 -0700 Subject: [Haskell-cafe] Announcing binary-parsers In-Reply-To: <1475403477332.93922@didichuxing.com> References: <1475403477332.93922@didichuxing.com> Message-ID: On Sun, Oct 2, 2016 at 3:17 AM, 韩冬(基础平台部) wrote: > Hi wren! > > Yes, i noticed that attoparsec's numeric parsers are slow. I have a benchmark set to compare attoparsec and binary-parsers on different sample JSON files, it's on github: https://github.com/winterland1989/binary-parsers. > > I'm pretty sure bytestring-lexing helped a lot, for example, the average decoding speed improvement is around 20%, but numeric only benchmarks(integers and numbers) improved by 30% ! So still some substantial gains for non-numeric stuff, nice! > Parsing is just a part of JSON decoding, lots of time is spent on unescaping, .etc. So the parser's improvement is quite large IMHO. > > BTW, can you provide a version of lexer which doesn't check whether a Word is a digit? In binary-parsers i use something like `takeWhile isDigit` to extract the input ByteString, so there's no need to verify this in lexer again. Maybe we can have another performance improvement. 
I suppose I could, but then it wouldn't be guaranteed to return correct answers. The way things are set up now, the intended workflow is that wherever you're expecting a number, you should just hand the ByteString over to bytestring-lexing (i.e., not bother scanning/pre-lexing via `takeWhile isDigit`) and it'll give back the answer together with the remainder of the input. This ensures that you don't need to do two passes over the characters. So, for Attoparsec itself you'd wrap it up with something like:

decimal :: Integral a => Parser a
decimal = get >>= \bs ->
    case readDecimal bs of
        Nothing       -> fail "error message"
        Just (a, bs') -> put bs' >> return a

Alas `get` isn't exported[1], but you get the idea. Of course, for absolute performance you may want to inline all the combinators to see if there's stuff you can get rid of. The only reason for scanning ahead is in case you're dealing with lazy bytestrings and so need to glue them together in order to use bytestring-lexing. Older versions of the library did have support for lazy bytestrings, but I removed it because it was bitrotten and unused. But if you really need it, I can add new variants of the lexers for dealing with the possibility of requesting new data when the input runs out.
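[Editor's illustration] The one-pass workflow wren describes - hand over the bytes, get back the value plus the unconsumed remainder - can be sketched with a toy, base-only analogue of bytestring-lexing's readDecimal. The real function works on strict ByteStrings and is heavily optimized; this String version only illustrates the (value, rest) shape, and readDecimalS is a made-up name.

```haskell
import Data.Char (digitToInt, isDigit)
import Data.List (foldl')

-- Toy analogue of readDecimal: consume leading digits in a single
-- pass and return the value together with whatever input is left.
readDecimalS :: String -> Maybe (Integer, String)
readDecimalS s = case span isDigit s of
  ("", _)    -> Nothing
  (ds, rest) -> Just (foldl' step 0 ds, rest)
  where
    step n d = n * 10 + toInteger (digitToInt d)

main :: IO ()
main = do
  print (readDecimalS "12345 and more")  -- Just (12345," and more")
  print (readDecimalS "xyz")             -- Nothing
```

Because the lexer itself reports where it stopped, there is no need for a separate `takeWhile isDigit` pass followed by a second numeric conversion pass.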
[1] -- Live well, ~wren From mihai.maruseac at gmail.com Sun Oct 9 21:13:10 2016 From: mihai.maruseac at gmail.com (Mihai Maruseac) Date: Sun, 9 Oct 2016 14:13:10 -0700 Subject: [Haskell-cafe] Call for Contributions - Haskell Communities and Activities Report, November 2016 edition (31st edition) Message-ID: Dear all, We would like to collect contributions for the 31st edition of the ============================================================ Haskell Communities & Activities Report http://www.haskell.org/haskellwiki/Haskell_Communities_and_Activities_Report Submission deadline: 30 October 2016 (please send your contributions to hcar at haskell.org, in plain text or LaTeX format, both are equally accepted) ============================================================ This is the short story: * If you are working on any project that is in some way related to Haskell, please write a short entry and submit it. Even if the project is very small or unfinished or you think it is not important enough --- please reconsider and submit an entry anyway! * If you are interested in an existing project related to Haskell that has not previously been mentioned in the HCAR, please tell us, so that we can contact the project leaders and ask them to submit an entry. * If you are working on a project that is looking for contributors, please write a short entry and submit it, mentioning that you are looking for contributors. * Feel free to pass on this call for contributions to others that might be interested. More detailed information: The Haskell Communities & Activities Report is a bi-annual overview of the state of Haskell as well as Haskell-related projects over the last, and possibly the upcoming six months. If you have only recently been exposed to Haskell, it might be a good idea to browse the previous edition --- you will find interesting projects described as well as several starting points and links that may provide answers to many questions.
Contributions will be collected until the submission deadline. They will then be compiled into a coherent report that is published online as soon as it is ready. As always, this is a great opportunity to update your webpages, make new releases, announce or even start new projects, or to talk about developments you want every Haskeller to know about! Looking forward to your contributions, Mihai Maruseac

FAQ:

Q: What format should I write in?
A: The usual format is a LaTeX source file, adhering to the template that is available at: http://haskell.org/communities/11-2016/template.tex There is also a LaTeX style file at http://haskell.org/communities/11-2016/hcar.sty that you can use to preview your entry. If you do not know LaTeX, don't want to use it, or don't have time to translate your entry into it, then please use plain text; it is better to have a plain-text entry that we will translate than no entry at all. If you modify an old entry that you have written for an earlier edition of the report, you should soon receive your old entry as a template (provided we have your valid email address). Please modify that template, rather than using your own version of the old entry as a template.

Q: Can I include Haskell code?
A: Yes. Please use lhs2tex syntax (http://www.andres-loeh.de/lhs2tex/). The report is compiled in polycode.fmt mode.

Q: Can I include images?
A: Yes, you are even encouraged to do so. Please use .jpg or .png format; PNG is preferred for simplicity.

Q: Should I send files in .zip archives or similar?
A: No, plain file attachments are preferred.

Q: How much should I write?
A: Authors are asked to limit entries to about one column of text. A general introduction is helpful. Apart from that, you should focus on recent or upcoming developments. Pointers to online content can be given for more comprehensive or "historic" overviews of a project.
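For contributors who have not used lhs2tex before, a code block in an entry looks roughly like this (an illustrative fragment of my own, not part of the official template):

```latex
% In polycode.fmt mode, lhs2tex typesets code environments directly:
\begin{code}
greet :: String -> String
greet name = "Hello, " ++ name
\end{code}
```

Inline code can be set between vertical bars, e.g. |greet "HCAR"|.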
Images do not count towards the length limit, so you may want to use this opportunity to pep up entries. There is no minimum length of an entry! The report aims to be as complete as possible, so please consider writing an entry, even if it is only a few lines long.

Q: Which topics are relevant?
A: All topics which are related to Haskell in some way are relevant. We have usually had reports from users of Haskell (private, academic, or commercial), from authors of or contributors to projects related to Haskell, and from people working on the Haskell language, libraries, language extensions, or variants. We also like reports about distributions of Haskell software, Haskell infrastructure, and books and tutorials on Haskell. Reports on past and upcoming events related to Haskell are also relevant. Finally, there might be new topics we do not even think about. As a rule of thumb: if in doubt, then it probably is relevant and has a place in the HCAR. You can also simply ask us.

Q: Is unfinished work relevant? Are ideas for projects relevant?
A: Yes! You can use the HCAR to talk about projects you are currently working on. You can use it to look for other developers that might help you. You can use the HCAR to ask for more contributors to your project; it is a good way to gain visibility and traction.

Q: If I do not update my entry, but want to keep it in the report, what should I do?
A: Tell us that there are no changes. The old entry will typically be reused in this case, but it might be dropped if it is older than a year, to give more room and more attention to projects that change a lot. Do not resend complete entries if you have not changed them.

Q: Will I get confirmation if I send an entry? How do I know whether my email has even reached its destination, and not ended up in a spam folder?
A: Prior to publication of the final report, we will send a draft to all contributors, for possible corrections.
So if you do not hear from us within two weeks after the deadline, it is safer to send another mail and check whether your first one was received. -- Mihai Maruseac (MM) "If you can't solve a problem, then there's an easier problem you can solve: find it." -- George Polya From drkoster at qq.com Mon Oct 10 03:39:33 2016 From: drkoster at qq.com (winter) Date: Mon, 10 Oct 2016 11:39:33 +0800 Subject: [Haskell-cafe] Announcing binary-parsers In-Reply-To: References: <1475403477332.93922@didichuxing.com> Message-ID: > The only reason for scanning ahead is in case you're dealing with lazy > bytestrings and so need to glue them together in order to use > bytestring-lexing. Older versions of the library did have support for > lazy bytestrings, but I removed it because it was bitrotten and > unused. But if you really need it, I can add new variants of the > lexers for dealing with the possibility of requesting new data when > the input runs out. Yes, please! The only reason I have to use `takeWhile isDigit` myself is that `takeWhile` takes care of partial input for me; but if you can provide a version that makes it easy to deal with incremental input, then I can rely on bytestring-lexing completely. You may be interested in the `scanChunks` combinator in binary-parsers. Let's work something out; if you need any help, please tell me. Thanks! cheers!~ winter > On Oct 9, 2016, at 13:56, wren romano wrote: > > On Sun, Oct 2, 2016 at 3:17 AM, 韩冬(基础平台部) wrote: >> Hi wren! >> >> Yes, i noticed that attoparsec's numeric parsers are slow. I have a benchmark set to compare attoparsec and binary-parsers on different sample JSON files, it's on github: https://github.com/winterland1989/binary-parsers. >> >> I'm pretty sure bytestring-lexing helped a lot, for example, the average decoding speed improvement is around 20%, but numeric only benchmarks(integers and numbers) improved by 30% ! > > So still some substantial gains for non-numeric stuff, nice!
> >> Parsing is just a part of JSON decoding, lots of time is spent on unescaping, .etc. So the parser's improvement is quite large IMHO. >> >> BTW, can you provide a version of lexer which doesn't check whether a Word is a digit? In binary-parsers i use something like `takeWhile isDigit` to extract the input ByteString, so there's no need to verify this in lexer again. Maybe we can have another performance improvement. > > I suppose I could, but then it wouldn't be guaranteed to return > correct answers. The way things are set up now, the intended workflow > is that wherever you're expecting a number, you should just hand the > ByteString over to bytestring-lexing (i.e., not bother > scanning/pre-lexing via `takeWhile isDigit`) and it'll give back the > answer together with the remainder of the input. This ensures that you > don't need to do two passes over the characters. So, for Attoparsec > itself you'd wrap it up with something like:
>
> decimal :: Integral a => Parser a
> decimal =
>     get >>= \bs ->
>     case readDecimal bs of
>         Nothing       -> fail "error message"
>         Just (a, bs') -> put bs' >> return a
>
> Alas `get` isn't exported[1], but you get the idea. Of course, for > absolute performance you may want to inline all the combinators to see > if there's stuff you can get rid of. > > The only reason for scanning ahead is in case you're dealing with lazy > bytestrings and so need to glue them together in order to use > bytestring-lexing. Older versions of the library did have support for > lazy bytestrings, but I removed it because it was bitrotten and > unused. But if you really need it, I can add new variants of the > lexers for dealing with the possibility of requesting new data when > the input runs out.
> > > [1] > > -- > Live well, > ~wren > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. From Graham.Hutton at nottingham.ac.uk Mon Oct 10 14:14:43 2016 From: Graham.Hutton at nottingham.ac.uk (Graham Hutton) Date: Mon, 10 Oct 2016 14:14:43 +0000 Subject: [Haskell-cafe] Journal of Functional Programming - Call for PhD Abstracts Message-ID: <2DCC64CF-8188-4595-8745-5F1DD73DDE4B@nottingham.ac.uk> If you or one of your students recently completed a PhD in the area of functional programming, please submit the dissertation abstract for publication in JFP: simple process, no refereeing, deadline 31st October 2016. Many thanks, Graham ============================================================ CALL FOR PHD ABSTRACTS Journal of Functional Programming Deadline: 31st October 2016 http://tinyurl.com/jfp-phd-abstracts ============================================================ PREAMBLE: Many students complete PhDs in functional programming each year. As a service to the community, the Journal of Functional Programming publishes the abstracts from PhD dissertations completed during the previous year. The abstracts are made freely available on the JFP website, i.e. not behind any paywall. They do not require any transfer of copyright, merely a license from the author. A dissertation is eligible for inclusion if parts of it have or could have appeared in JFP, that is, if it is in the general area of functional programming. The abstracts are not reviewed. Please submit dissertation abstracts according to the instructions below. We welcome submissions from both the PhD student and PhD advisor/supervisor although we encourage them to coordinate. 
============================================================

SUBMISSION: Please submit the following information to Graham Hutton by 31st October 2016.

o Dissertation title: (including any subtitle)
o Student: (full name)
o Awarding institution: (full name and country)
o Date of PhD award: (month and year; depending on the institution, this may be the date of the viva, corrections being approved, graduation ceremony, or otherwise)
o Advisor/supervisor: (full names)
o Dissertation URL: (please provide a permanently accessible link to the dissertation if you have one, such as to an institutional repository or other public archive; links to personal web pages should be considered a last resort)
o Dissertation abstract: (plain text, maximum 1000 words; you may use \emph{...} for emphasis, but we prefer no other markup or formatting in the abstract; do get in touch if this causes significant problems)

Please do not submit a copy of the dissertation itself, as this is not required. JFP reserves the right to decline to publish abstracts that are not deemed appropriate.

============================================================
PHD ABSTRACT EDITOR:
Graham Hutton
School of Computer Science
University of Nottingham
Nottingham NG8 1BB
United Kingdom
============================================================

This message and any attachment are intended solely for the addressee and may contain confidential information. If you have received this message in error, please send it back to me, and immediately delete it. Please do not use, copy or disclose the information contained in this message or in any attachment. Any views or opinions expressed by the author of this email do not necessarily reflect the views of the University of Nottingham. This message has been checked for viruses but the contents of an attachment may still contain software viruses which could damage your computer system; you are advised to perform your own checks.
Email communications with the University of Nottingham may be monitored as permitted by UK legislation. From drkoster at qq.com Tue Oct 11 10:14:32 2016 From: drkoster at qq.com (winter) Date: Tue, 11 Oct 2016 18:14:32 +0800 Subject: [Haskell-cafe] Announcing binary-parsers In-Reply-To: References: <1475403477332.93922@didichuxing.com> Message-ID: Hi, wren BTW, I think it's a good idea to host your code on GitHub, which makes it easier to send patches, etc. Can you mirror your bytestring-lexing repo to GitHub? happy hacking! winter From ben at smart-cactus.org Tue Oct 11 21:38:09 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 11 Oct 2016 17:38:09 -0400 Subject: [Haskell-cafe] Announcing binary-parsers In-Reply-To: References: Message-ID: <87y41u7eim.fsf@ben-laptop.smart-cactus.org> "韩冬(基础平台部)" writes: > Hi all, > > I am happy to announce binary-parsers.
A ByteString parsing library > built on binary. I borrowed lots of design/tests/document from > attoparsec so that i can build its shape very quickly, thank you bos! > And thanks to binary's excellent design, the codebase is very > small(<500 loc). > What in particular changed to produce these performance improvements? I have been maintaining attoparsec recently and would be happy to merge any semantics-preserving changes or new combinators that would help existing users benefit from your work. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From iavor.diatchki at gmail.com Wed Oct 12 00:22:41 2016 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Tue, 11 Oct 2016 17:22:41 -0700 Subject: [Haskell-cafe] Galois is hiring! Message-ID: Hello, Galois is hiring again! We're looking for researchers, principal investigators, software engineers, and project leads, including those with expertise in functional programming, formal methods, machine learning, embedded systems, computer security, or networking. For the exact available positions, please have a look at our web-site: http://galois.com/careers We have two offices: one in Portland, OR, and one in Arlington, VA. There are positions available at both locations. Generally, we are looking for people to work on site, not remotely. Mostly, we are looking for candidates who are already allowed to work in the US, but in exceptional situations we can work with the candidate to obtain the necessary documentation. If you are interested, please send us your resume through the web site. Cheers, -Iavor -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From drkoster at qq.com Wed Oct 12 01:53:45 2016 From: drkoster at qq.com (winter) Date: Wed, 12 Oct 2016 09:53:45 +0800 Subject: [Haskell-cafe] Announcing binary-parsers In-Reply-To: <87y41u7eim.fsf@ben-laptop.smart-cactus.org> References: <87y41u7eim.fsf@ben-laptop.smart-cactus.org> Message-ID: <60424E12-EBE1-4685-BE6D-8163A68F3435@qq.com> Hi Ben! I'm not familiar enough with attoparsec's internals to give you a concrete answer, since the core parser types of binary and attoparsec are so different. But I guess it may have something to do with GHC's specializer, because attoparsec tries to parameterize the input type to support both Text and ByteString (which is a bad decision IMHO). Actually, I think binary's current Decoder type can be improved further following attoparsec: the Pos state would be encoded directly into the CPS parser type; I'll try to see whether this is an improvement or not. In an ideal world, I think we should have a fast parser for ByteString that supports both binary's getWordXX and ASCII textual content, and a fast parser specialized for Text. Let me know your thoughts! Cheers! Winter Sent from my iPhone > On Oct 12, 2016, at 5:38 AM, Ben Gamari wrote: > > "韩冬(基础平台部)" writes: > >> Hi all, >> >> I am happy to announce binary-parsers. A ByteString parsing library >> built on binary. I borrowed lots of design/tests/document from >> attoparsec so that i can build its shape very quickly, thank you bos! >> And thanks to binary's excellent design, the codebase is very >> small(<500 loc). > What in particular changed to produce these performance improvements? > I have been maintaining attoparsec recently and would be happy to merge > any semantics-preserving changes or new combinators that would help > existing users benefit from your work.
> > Cheers, > > - Ben > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. From winterkoninkje at gmail.com Wed Oct 12 03:14:15 2016 From: winterkoninkje at gmail.com (wren romano) Date: Tue, 11 Oct 2016 20:14:15 -0700 Subject: [Haskell-cafe] Announcing binary-parsers In-Reply-To: References: <1475403477332.93922@didichuxing.com> Message-ID: On Tue, Oct 11, 2016 at 3:14 AM, winter wrote: > Hi, wren > > BTW, I think it’s a good idea to host your code on github which is easier to > send patch .etc, can you mirror your bytestring-lexing repo to github? Like most of my darcs repos, it's already mirrored to github: https://github.com/wrengr/bytestring-lexing -- Live well, ~wren From winterkoninkje at gmail.com Wed Oct 12 03:19:33 2016 From: winterkoninkje at gmail.com (wren romano) Date: Tue, 11 Oct 2016 20:19:33 -0700 Subject: [Haskell-cafe] Announcing binary-parsers In-Reply-To: <87y41u7eim.fsf@ben-laptop.smart-cactus.org> References: <87y41u7eim.fsf@ben-laptop.smart-cactus.org> Message-ID: On Tue, Oct 11, 2016 at 2:38 PM, Ben Gamari wrote: > What in particular changed to produce these performance improvements? > I have been maintaining attoparsec recently and would be happy to merge > any semantics-preserving changes or new combinators that would help > existing users benefit from your work. I can't speak to Winter's work, but for my part: One of the big things is bytestring-lexing. I'd mentioned to Bryan about using it before, but that never went anywhere. I also have a handful of other minor patches which help to optimize a few combinators here and there. I'd sent a bunch of these to Bryan back when I did them; some got merged but I think some also got lost in the shuffle. I can resend them if you'd like. 
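Winter's suggestion earlier in this thread, encoding the Pos state directly into a CPS parser type, can be sketched roughly as follows (an illustration of the general shape only, not attoparsec's, binary's, or binary-parsers' actual definition):

```haskell
{-# LANGUAGE RankNTypes #-}

import qualified Data.ByteString as B

-- The remaining input and the current position are threaded through
-- the success continuation instead of being kept in a separate state
-- type, which gives GHC a better chance of keeping them in registers.
newtype Parser a = Parser
  { runParser
      :: forall r
       . B.ByteString                     -- remaining input
      -> Int                              -- current position ("Pos")
      -> (B.ByteString -> Int -> a -> r)  -- success continuation
      -> r
  }
```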
-- Live well, ~wren From sgraf1337 at gmail.com Wed Oct 12 07:58:11 2016 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Wed, 12 Oct 2016 00:58:11 -0700 (PDT) Subject: [Haskell-cafe] Help wanted for the architecture backing perf-service.haskell.org Message-ID: Hi all, I'm about to wrap up my work for the Haskell committee (think half a summer of code): feed-gipeda. It's basically a daemon which will spawn benchmarking jobs for every commit of registered repositories. Much like Travis CI, but for benchmarks. You can see a simple web server, hosting the gipeda-generated sites, at http://perf-service.haskell.org/ghc/#all. While the Haskell part is working smoothly enough for now, I'd really like some help setting up proper sandboxing environments for the benchmark slaves, in such a way that security isn't as much a concern as it currently is. We can go over the details on a less publicly shared medium, but I doubt the current solution (invoking shell scripts from a non-root user) is safe. So, some concrete points I need help with:

1. Administrative expertise: which parts of the architecture run with which rights; setting up proper sandboxing environments for benchmark slaves
2. Ops stuff: creating master and slave containers for a low barrier to entry and reproducible environments
3. Distributed protocols: someone with experience in stuff like SSH tunneling, Cloud Haskell, and other useful things I should make the communication protocol of feed-gipeda aware of
4. Some Haskellers who want to take a look at my code and contribute criticism or even code to it :)

Thanks in advance! So long, Sebastian Graf -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mail at joachim-breitner.de Wed Oct 12 16:05:31 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 12 Oct 2016 12:05:31 -0400 Subject: [Haskell-cafe] Help wanted for the architecture backing perf-service.haskell.org In-Reply-To: References: Message-ID: <1476288331.1899.2.camel@joachim-breitner.de> Hi, > I'd really like some help setting up proper sandboxing environments > for the benchmark slaves, in such a way that security isn't as much a > concern as it currently is. let me add that I very much think this is going to be a great service to our community and ecosystem. So if this is something that is of interest to you, and you believe you can contribute here, please do stand up and join forces with Sebastian (who is a nice guy). This is also an opportunity to get more involved in the Haskell community even if you do not consider yourself a Haskell guru! Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: This is a digitally signed message part URL: From haskell at ibotty.net Wed Oct 12 19:22:12 2016 From: haskell at ibotty.net (Tobias Florek) Date: Wed, 12 Oct 2016 21:22:12 +0200 Subject: [Haskell-cafe] Help wanted for the architecture backing perf-service.haskell.org In-Reply-To: References: Message-ID: <48a7fe9a-55b3-48b0-2568-34a203be0cca@ibotty.net> Hi, I am willing to help. Ping me on #haskell-infrastructure, I am ibotty. I would very much like to help with 1, 2, maybe also 3. Cheers, Tobi(as Florek) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 839 bytes Desc: OpenPGP digital signature URL: From zocca.marco at gmail.com Fri Oct 14 07:46:33 2016 From: zocca.marco at gmail.com (Marco Zocca) Date: Fri, 14 Oct 2016 09:46:33 +0200 Subject: [Haskell-cafe] Req. transfer of maintainership of network-multicast Message-ID: To the Hackage trustees, following a communication with Audrey Tang, I would like to become the new maintainer of `network-multicast`. My Hackage account name is `ocramz`. Thank you, Marco From m at jaspervdj.be Fri Oct 14 09:33:44 2016 From: m at jaspervdj.be (Jasper Van der Jeugt) Date: Fri, 14 Oct 2016 11:33:44 +0200 Subject: [Haskell-cafe] [ANN] patat - Terminal based presentation tool built with Pandoc Message-ID: I'm happy to announce `patat` (Presentations Atop The ANSI Terminal), a small tool that allows you to show presentations using only an ANSI terminal. The main features are:

- Leverages the great Pandoc library to support many input formats including Literate Haskell.
- Supports smart slide splitting.
- There is a live reload mode.
- Theming support.
- Optionally re-wrapping text to terminal width with proper indentation.
- Written in Haskell.

You can find more information here: https://github.com/jaspervdj/patat/blob/master/README.md Peace, Jasper From lukewm at riseup.net Fri Oct 14 09:37:41 2016 From: lukewm at riseup.net (Luke Murphy) Date: Fri, 14 Oct 2016 11:37:41 +0200 Subject: [Haskell-cafe] [ANN] patat - Terminal based presentation tool built with Pandoc In-Reply-To: References: Message-ID: Jasper, it's great. I am just writing a presentation with it now and saw this drop into my inbox. Nice job! Luke On 14.10.2016 11:33, Jasper Van der Jeugt wrote: > I'm happy to announce `patat` (Presentations Atop The ANSI Terminal), > a small tool that allows you to show presentations using only an ANSI > terminal.
> > The main features are: > > - Leverages the great Pandoc library to support many input formats > including Literate Haskell. > - Supports smart slide splitting. > - There is a live reload mode. > - Theming support. > - Optionally re-wrapping text to terminal width with proper indentation. > - Written in Haskell. > > You can find more information here: > > https://github.com/jaspervdj/patat/blob/master/README.md > > Peace, > Jasper > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. From migmit at gmail.com Fri Oct 14 09:41:24 2016 From: migmit at gmail.com (Miguel) Date: Fri, 14 Oct 2016 11:41:24 +0200 Subject: [Haskell-cafe] [ANN] patat - Terminal based presentation tool built with Pandoc In-Reply-To: References: Message-ID: Wow. Just wow. On Fri, Oct 14, 2016 at 11:37 AM, Luke Murphy wrote: > Jasper, it's great. > > I am just writing a presentation with it now and saw this drop into my > inbox. > > Nice job! > > Luke > > > > On 14.10.2016 11:33, Jasper Van der Jeugt wrote: > >> I'm happy to announce `patat` (Presentations Atop The ANSI Terminal), >> a small tool that allows you to show presentations using only an ANSI >> terminal. >> >> The main features are: >> >> - Leverages the great Pandoc library to support many input formats >> including Literate Haskell. >> - Supports smart slide splitting. >> - There is a live reload mode. >> - Theming support. >> - Optionally re-wrapping text to terminal width with proper indentation. >> - Written in Haskell. 
>> >> You can find more information here: >> >> https://github.com/jaspervdj/patat/blob/master/README.md >> >> Peace, >> Jasper From guillaum.bouchard+haskell at gmail.com Fri Oct 14 12:00:15 2016 From: guillaum.bouchard+haskell at gmail.com (Guillaume Bouchard) Date: Fri, 14 Oct 2016 14:00:15 +0200 Subject: [Haskell-cafe] [ANN] patat - Terminal based presentation tool built with Pandoc In-Reply-To: References: Message-ID: Nice work! The documentation mentions "image" and "math" in the "style" section. Is there a way to display math and images in this tool? On Fri, Oct 14, 2016 at 11:41 AM, Miguel wrote: > Wow. Just wow. > > On Fri, Oct 14, 2016 at 11:37 AM, Luke Murphy wrote: >> >> Jasper, it's great. >> >> I am just writing a presentation with it now and saw this drop into my >> inbox. >> >> Nice job! >> >> Luke >> >> On 14.10.2016 11:33, Jasper Van der Jeugt wrote: >>> >>> I'm happy to announce `patat` (Presentations Atop The ANSI Terminal), >>> a small tool that allows you to show presentations using only an ANSI >>> terminal. >>> >>> The main features are: >>> >>> - Leverages the great Pandoc library to support many input formats >>> including Literate Haskell. >>> - Supports smart slide splitting. >>> - There is a live reload mode. >>> - Theming support. >>> - Optionally re-wrapping text to terminal width with proper indentation.
>>> - Written in Haskell. >>> >>> You can find more information here: >>> >>> https://github.com/jaspervdj/patat/blob/master/README.md >>> >>> Peace, >>> Jasper >>> _______________________________________________ >>> Haskell-Cafe mailing list >>> To (un)subscribe, modify options or view archives go to: >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> Only members subscribed via the mailman list are allowed to post. >> >> >> _______________________________________________ >> Haskell-Cafe mailing list >> To (un)subscribe, modify options or view archives go to: >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> Only members subscribed via the mailman list are allowed to post. > > > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. From m at jaspervdj.be Fri Oct 14 12:23:56 2016 From: m at jaspervdj.be (Jasper Van der Jeugt) Date: Fri, 14 Oct 2016 13:23:56 +0100 Subject: [Haskell-cafe] [ANN] patat - Terminal based presentation tool built with Pandoc In-Reply-To: References: Message-ID: Not really. Math is displayed as the math's source, and for images it displays the URL. The settings impact only the style in which these strings are printed. There is an idea floating around to support images in iTerm2 [1], or we could support low-res ones in 256-color terminals, but it's still debatable if this is a good idea. [1]: https://github.com/jaspervdj/patat/issues/6 Peace, Jasper On Fri, Oct 14, 2016 at 1:00 PM, Guillaume Bouchard wrote: > Nice work, > > The documentation state about "image" and "math" in the "style" > section. Is there a way to display math and images in this tool ? > > On Fri, Oct 14, 2016 at 11:41 AM, Miguel wrote: >> Wow. Just wow. 
From capn.freako at gmail.com Fri Oct 14 12:50:18 2016 From: capn.freako at gmail.com (David Banas) Date: Fri, 14 Oct 2016 05:50:18 -0700 Subject: [Haskell-cafe] Arrows and Computation exercises? Message-ID: Hi all, I’m wondering if anyone else happens to be working their way through the exercises in Ross Paterson’s *Arrows and Computation*, and would like to compare notes and/or discuss solutions to his exercises. Here are mine, so far: https://htmlpreview.github.io/?https://github.com/capn-freako/Haskell_Misc/blob/master/Arrows_and_Computation/Arrows_and_Computation.html Thanks, -db -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Fri Oct 14 15:08:14 2016 From: allbery.b at gmail.com (Brandon Allbery) Date: Fri, 14 Oct 2016 11:08:14 -0400 Subject: [Haskell-cafe] [ANN] patat - Terminal based presentation tool built with Pandoc In-Reply-To: References: Message-ID: On Fri, Oct 14, 2016 at 8:23 AM, Jasper Van der Jeugt wrote: > > There is an idea floating around to support images in iTerm2 [1], or > we could support low-res ones in 256-color terminals, but it's still > debatable if this is a good idea. You can render images directly into any X11 terminal which supports $WINDOWID, which is most of them. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed...
URL: From damian.nadales at gmail.com Fri Oct 14 15:35:49 2016 From: damian.nadales at gmail.com (Damian Nadales) Date: Fri, 14 Oct 2016 17:35:49 +0200 Subject: [Haskell-cafe] MTL vs Free-monads, what are your experiences Message-ID: Hi, I was looking into free monads for designing a DSL for describing scenarios of the form: scenario = do aId <- createA b0Id <- createB id b1Id <- createB id link b0 b1 In our company we use a graph database, and currently we're setting up the test data using raw queries O.O So I wanted to come up with a better abstraction, and also enable us to do property based testing (by testing on random scenarios like the one above). Anyway, I ran into this presentation: http://www.slideshare.net/jdegoes/mtl-versus-free http://degoes.net/articles/modern-fp-part-2 In which monad transformers and free monads are compared. Do you have any experience using any of these approaches. If so would you mind sharing? ;) Thanks! Damian From parsonsmatt at gmail.com Fri Oct 14 16:00:02 2016 From: parsonsmatt at gmail.com (Matt) Date: Fri, 14 Oct 2016 12:00:02 -0400 Subject: [Haskell-cafe] MTL vs Free-monads, what are your experiences In-Reply-To: References: Message-ID: The mtl technique subsumes the free monad technique. if you have a term: getCurrentTime :: MonadClock m => m UTCTime Then you can *use* that function as anything that satisfies the constraint. Given an `IO` instance, that can just get the current time: `getCurrentTime :: IO UTCTime`. Given a mock instance, that can be `getCurrentTime :: MockClock UTCTime`. Given an instance on Free, you'd have `getCurrentTime :: Free CurrentTimeF UTCTime` I generally find it more pleasant to write functions in mtl style. If you're after more concrete guarantees on the DSL you're building and see yourself doing a lot of introspection and optimization, then a Free monad approach fits the bill. 
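[For readers following along, here is a minimal sketch of the mtl-style class Matt describes. This is illustrative code, not a real library: the class method is renamed to `currentTime` to avoid clashing with `Data.Time.Clock.getCurrentTime`, and it assumes the `time` and `mtl` packages.]

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import Control.Monad.Reader (Reader, ask, runReader)
import Data.Time.Clock (UTCTime)
import qualified Data.Time.Clock as Clock

-- The effect is a constraint, not a concrete monad.
class Monad m => MonadClock m where
  currentTime :: m UTCTime

-- The "real" instance asks the system clock.
instance MonadClock IO where
  currentTime = Clock.getCurrentTime

-- A mock instance: the time is whatever the test supplies.
newtype MockClock a = MockClock (Reader UTCTime a)
  deriving (Functor, Applicative, Monad)

instance MonadClock MockClock where
  currentTime = MockClock ask

runMock :: UTCTime -> MockClock a -> a
runMock t (MockClock r) = runReader r t

-- Code written against the constraint runs unchanged in either monad.
timestamp :: MonadClock m => m String
timestamp = fmap show currentTime
```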
Matt Parsons On Fri, Oct 14, 2016 at 11:35 AM, Damian Nadales wrote: > Hi, > > I was looking into free monads for designing a DSL for describing > scenarios of the form: > scenario = do > aId <- createA > b0Id <- createB id > b1Id <- createB id > link b0 b1 > > In our company we use a graph database, and currently we're setting up > the test data using raw queries O.O So I wanted to come up with a > better abstraction, and also enable us to do property based testing > (by testing on random scenarios like the one above). > > Anyway, I ran into this presentation: > http://www.slideshare.net/jdegoes/mtl-versus-free > http://degoes.net/articles/modern-fp-part-2 > > In which monad transformers and free monads are compared. Do you have > any experience using any of these approaches. If so would you mind > sharing? ;) > > Thanks! > Damian > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sylvain at haskus.fr Fri Oct 14 17:25:40 2016 From: sylvain at haskus.fr (Sylvain Henry) Date: Fri, 14 Oct 2016 19:25:40 +0200 Subject: [Haskell-cafe] Type alias in constraints Message-ID: Hi, I have been using constraints of the form: xxx :: forall x xs ys zs m. ( Monad m , zs ~ Union (Filter x xs) ys , Catchable x xs , Liftable (Filter x xs) zs , Liftable ys zs ) => Variant xs -> (x -> Flow m ys) -> Flow m zs Where "zs" is used as a type alias. Now with GHC8 and -Wredundant-constraints, GHC complaints that "zs" is redundant (indeed it is). Is there a way to do this properly? If not, could we introduce a new syntax to make this kind of local declaration? We could borrow the syntax of local declarations into list-comprehensions: "let zs ~ ..." 
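[A minimal, self-contained version of the pattern Sylvain describes, for readers without his `Union`/`Filter` families at hand. The `Append` family is a contrived stand-in; the point is only that the equality constraint `zs ~ ...` names a long type-level expression once, which GHC 8's -Wredundant-constraints then flags.]

```haskell
{-# LANGUAGE DataKinds, KindSignatures, TypeFamilies, TypeOperators #-}
import Data.Proxy (Proxy (..))

-- A stand-in for a long type-level expression such as Union (Filter x xs) ys.
type family Append (xs :: [*]) (ys :: [*]) :: [*] where
  Append '[]       ys = ys
  Append (x ': xs) ys = x ': Append xs ys

-- Without the alias, the expression is repeated at every use site:
verbose :: Proxy xs -> Proxy ys -> Proxy (Append xs ys)
verbose _ _ = Proxy

-- With an equality constraint, "zs" names it once; this is the constraint
-- that -Wredundant-constraints complains about.
concise :: (zs ~ Append xs ys) => Proxy xs -> Proxy ys -> Proxy zs
concise _ _ = Proxy
```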
Thanks, Sylvain From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Fri Oct 14 17:31:36 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Fri, 14 Oct 2016 18:31:36 +0100 Subject: [Haskell-cafe] Type alias in constraints In-Reply-To: References: Message-ID: <20161014173136.GX5763@weber> On Fri, Oct 14, 2016 at 07:25:40PM +0200, Sylvain Henry wrote: > I have been using constraints of the form: > > xxx :: forall x xs ys zs m. > ( Monad m > , zs ~ Union (Filter x xs) ys > , Catchable x xs > , Liftable (Filter x xs) zs > , Liftable ys zs > ) => Variant xs -> (x -> Flow m ys) -> Flow m zs > > Where "zs" is used as a type alias. Now with GHC8 and > -Wredundant-constraints, GHC complaints that "zs" is redundant > (indeed it is). Why is it redundant? Surely a redundant constraint is one you can freely remove. I don't see any constraint you can remove in that type. From sylvain at haskus.fr Fri Oct 14 18:25:35 2016 From: sylvain at haskus.fr (Sylvain Henry) Date: Fri, 14 Oct 2016 20:25:35 +0200 Subject: [Haskell-cafe] Type alias in constraints In-Reply-To: <20161014173136.GX5763@weber> References: <20161014173136.GX5763@weber> Message-ID: <26cc8fb2-ad0b-6ee7-495e-40842a0e04d9@haskus.fr> On 14/10/2016 19:31, Tom Ellis wrote: > On Fri, Oct 14, 2016 at 07:25:40PM +0200, Sylvain Henry wrote: >> I have been using constraints of the form: >> >> xxx :: forall x xs ys zs m. >> ( Monad m >> , zs ~ Union (Filter x xs) ys >> , Catchable x xs >> , Liftable (Filter x xs) zs >> , Liftable ys zs >> ) => Variant xs -> (x -> Flow m ys) -> Flow m zs >> >> Where "zs" is used as a type alias. Now with GHC8 and >> -Wredundant-constraints, GHC complaints that "zs" is redundant >> (indeed it is). > Why is it redundant? Surely a redundant constraint is one you can freely > remove. I don't see any constraint you can remove in that type. I should have checked on Trac, it has been discussed... 
today: https://ghc.haskell.org/trac/ghc/ticket/12700 Also related: https://ghc.haskell.org/trac/ghc/ticket/12702 https://ghc.haskell.org/trac/ghc/ticket/11474 I will comment there. Sylvain From djohnson.m at gmail.com Fri Oct 14 22:11:56 2016 From: djohnson.m at gmail.com (David Johnson) Date: Fri, 14 Oct 2016 17:11:56 -0500 Subject: [Haskell-cafe] [ANN] HackerNews 1.0 Message-ID: Released new API bindings to the HackerNews API ( https://github.com/HackerNews/API) *Changes/Updates:* - Matching GHCJS implementation, all AJAX calls are tested thanks to phantomjs w/ hspec. - All aeson instances derived generically - Nix-based CI w/ travis ensures successful building on the following platforms: - *nixpkgs-16.09* - ghcjs-7103 - ghc-801 - *Stackage LTS * - LTS 7.3 *Release links*: - http://hackage.haskell.org/package/hackernews - https://github.com/dmjio/hackernews More platforms to be added for testing. - David -------------- next part -------------- An HTML attachment was scrubbed... URL: From oleg at okmij.org Sat Oct 15 10:30:00 2016 From: oleg at okmij.org (Oleg) Date: Sat, 15 Oct 2016 19:30:00 +0900 Subject: [Haskell-cafe] MTL vs Free-monads, what are your experiences Message-ID: <20161015103000.GA1499@Magus.localnet> Matt wrote: > The mtl technique subsumes the free monad technique. if you have a term: > getCurrentTime :: MonadClock m => m UTCTime The relationship between MTL and Free(r) monad approaches is filled with confusion, as you message has just demonstrated. There is nothing specific to MTL in writing the constraints like MonadClock. There is absolutely nothing that prevents you from defining instance Member ClockEffect r => MonadClock (Eff r) or similar with Free monads, instance MonadClock (Free CurrentTimeF) (which are less efficient, compose less well and require the boilerplate of writing functor instances). In short, defining constraints like MonadClock, MonadState etc. is not specific to any approach to effects. 
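[A sketch of Oleg's point that the constraint is independent of the encoding, reusing the hypothetical `MonadClock` class from earlier in the thread (renamed method to avoid a name clash) and assuming the `free` and `time` packages:]

```haskell
{-# LANGUAGE DeriveFunctor #-}
import Control.Monad.Free (Free (..), liftF)
import Data.Time.Clock (UTCTime)
import qualified Data.Time.Clock as Clock

class Monad m => MonadClock m where
  currentTime :: m UTCTime

-- The functor for a free-monad encoding of the clock effect.
data CurrentTimeF k = GetCurrentTime (UTCTime -> k)
  deriving Functor

-- The same constraint is satisfied by the free monad over that functor...
instance MonadClock (Free CurrentTimeF) where
  currentTime = liftF (GetCurrentTime id)

-- ...and an interpreter maps it to IO (a pure mock interpreter works too).
runClock :: Free CurrentTimeF a -> IO a
runClock (Pure a)                   = pure a
runClock (Free (GetCurrentTime k))  = Clock.getCurrentTime >>= runClock . k
```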
The difference between the monad transformer and Free monad is how you build types that satisfy the MonadClock etc. constraint. In MTL, you build by applying monad transformer to a suitable base monad. In Free monad, you define functors like CurrentTimeF and then build the Free monad. In Freer monad approach, your example getCurrentTime will look like getCurrentTime :: Member ClockEffect r => Eff r UTCTime which looks quite like getCurrentTime :: MonadClock m => m UTCTime in your example. Therefore, there is usually no need to define a separate MonadClock class. But nothing stops you from doing that. From mail at joachim-breitner.de Sat Oct 15 13:49:42 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sat, 15 Oct 2016 09:49:42 -0400 Subject: [Haskell-cafe] MTL vs Free-monads, what are your experiences In-Reply-To: References: Message-ID: <1476539382.1073.2.camel@joachim-breitner.de> Hi, Am Freitag, den 14.10.2016, 17:35 +0200 schrieb Damian Nadales: > Do you have > any experience using any of these approaches. If so would you mind > sharing? ;) I don’t have an answer to contribute, but I would be very interested in hearing about experiences in terms of their relative runtime performance. My gut feeling is that an an indirect function call for every (>>=), with many calls to `lift` each time, would make a deep monad transformer stack much more expensive. A free monad approach seems to be more sensible to me. But maybe GHC is doing a better job optimizing this than I would think? So if you have any number-supported evidence about this, possibly from a real-world application where you tried to use one or the other, please share it with us! Thanks, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: This is a digitally signed message part URL: From johnw at newartisans.com Sun Oct 16 00:28:04 2016 From: johnw at newartisans.com (John Wiegley) Date: Sat, 15 Oct 2016 17:28:04 -0700 Subject: [Haskell-cafe] MTL vs Free-monads, what are your experiences In-Reply-To: <20161015103000.GA1499@Magus.localnet> (oleg@okmij.org's message of "Sat, 15 Oct 2016 19:30:00 +0900") References: <20161015103000.GA1499@Magus.localnet> Message-ID: >>>>> "O" == Oleg writes: O> and require the boilerplate of writing functor instances Just note, since DeriveFunctor there is almost never any such boilerplate in these cases. -- John Wiegley GPG fingerprint = 4710 CF98 AF9B 327B B80F http://newartisans.com 60E1 46C4 BD1A 7AC1 4BA2 From apfelmus at quantentunnel.de Sun Oct 16 08:22:37 2016 From: apfelmus at quantentunnel.de (Heinrich Apfelmus) Date: Sun, 16 Oct 2016 10:22:37 +0200 Subject: [Haskell-cafe] MTL vs Free-monads, what are your experiences In-Reply-To: <1476539382.1073.2.camel@joachim-breitner.de> References: <1476539382.1073.2.camel@joachim-breitner.de> Message-ID: > My gut feeling is that an an indirect function call for every (>>=), > with many calls to `lift` each time, would make a deep monad > transformer stack much more expensive. A free monad approach seems to > be more sensible to me. But maybe GHC is doing a better job optimizing > this than I would think? Well, my gut feeling would be that free monads are more expensive than a monad transformer stack with <= 3 layers. After all, for free/operational monads, the compiler has to allocate a closure for every second argument of `>>=` as part of the monad data structure. There are less opportunities for inlining, because the interpretation of the monad is not fixed, and only decided late at runtime. That said, the above arguments are no proof. I would be interested in performance measurements as well. 
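[One way to turn the gut feelings in this subthread into numbers is a micro-benchmark along these lines. A sketch only, assuming the criterion, mtl and free packages; the workload (a bare counter) is deliberately trivial, and absolute results will vary with GHC version and optimisation flags:]

```haskell
{-# LANGUAGE BangPatterns, DeriveFunctor #-}
import Control.Monad.Free (Free (..), liftF)
import Control.Monad.State.Strict (execState, modify')
import Criterion.Main (bench, defaultMain, nf)

-- Counting to n via the strict State monad...
countState :: Int -> Int
countState n = execState (mapM_ (\_ -> modify' (+ 1)) [1 .. n]) 0

-- ...and via a free monad with a single "tick" instruction.
data TickF k = Tick k deriving Functor

tick :: Free TickF ()
tick = liftF (Tick ())

runTicks :: Free TickF a -> Int
runTicks = go 0
  where
    go !acc (Pure _)        = acc
    go !acc (Free (Tick k)) = go (acc + 1) k

countFree :: Int -> Int
countFree n = runTicks (mapM_ (\_ -> tick) [1 .. n])

main :: IO ()
main = defaultMain
  [ bench "state" (nf countState 100000)
  , bench "free"  (nf countFree  100000)
  ]
```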
Maybe there is a way to generalize the "GHC state hack" to a "free monad hack"? The basic ansatz would be the ability to mark some closures as "Only entered once, may duplicate work". Best regards, Heinrich Apfelmus -- http://apfelmus.nfshost.com Joachim Breitner wrote: > Hi, > > Am Freitag, den 14.10.2016, 17:35 +0200 schrieb Damian Nadales: >> Do you have >> any experience using any of these approaches. If so would you mind >> sharing? ;) > > I don’t have an answer to contribute, but I would be very interested in > hearing about experiences in terms of their relative runtime > performance. > > My gut feeling is that an an indirect function call for every (>>=), > with many calls to `lift` each time, would make a deep monad > transformer stack much more expensive. A free monad approach seems to > be more sensible to me. But maybe GHC is doing a better job optimizing > this than I would think? > > So if you have any number-supported evidence about this, possibly from > a real-world application where you tried to use one or the other, > please share it with us! > > Thanks, > Joachim > > > > ------------------------------------------------------------------------ > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. From agocorona at gmail.com Sun Oct 16 08:48:11 2016 From: agocorona at gmail.com (Alberto G. Corona ) Date: Sun, 16 Oct 2016 10:48:11 +0200 Subject: [Haskell-cafe] MTL vs Free-monads, what are your experiences In-Reply-To: References: <20161015103000.GA1499@Magus.localnet> Message-ID: There may be another third way to encode effects: Maybe it is possible to demonstrate that every effect may be a combination of state (data) and continuations (processing). I don´t know if this is true, but it is very likely. 
If a monad can handle user defined states (in a pure way, like the state monad) and continuations, then the programmer can implement any new effect by combining them. With continuation effect I mean that each monadic statement can inspect and make use of his own computation in which it is inserted (his closure) and his continuation. The effects are added by creating new primitives, instead of aggregating new monad transformers (mtl) or declaring new effects (the free monad). I implemented reactivity, backtracking, streaming and other high level effects besides readers, writers and other conventional effects using this approach, in the package transient. The advantage is Expressive power (high level effects), composability, simple type signatures, and extensibility by means of a single expression. It may be necessary to have more than one monad when we want to enforce certain effects that are performed when one monad is converted into to another, trough the type system 2016-10-16 2:28 GMT+02:00 John Wiegley : > >>>>> "O" == Oleg writes: > > O> and require the boilerplate of writing functor instances > > Just note, since DeriveFunctor there is almost never any such boilerplate > in > these cases. > > -- > John Wiegley GPG fingerprint = 4710 CF98 AF9B 327B B80F > http://newartisans.com 60E1 46C4 BD1A 7AC1 4BA2 > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. > -- Alberto. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mail at joachim-breitner.de Sun Oct 16 20:17:46 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sun, 16 Oct 2016 16:17:46 -0400 Subject: [Haskell-cafe] Generalized state hack (was: MTL vs Free-monads) In-Reply-To: References: <1476539382.1073.2.camel@joachim-breitner.de> Message-ID: <1476649066.1053.10.camel@joachim-breitner.de> Hi, Am Sonntag, den 16.10.2016, 10:22 +0200 schrieb Heinrich Apfelmus: > Maybe there is a way to generalize the "GHC state hack" to a "free monad  > hack"? The basic ansatz would be the ability to mark some closures as  > "Only entered once, may duplicate work". recent related discussion (in the context of pipes): https://ghc.haskell.org/trac/ghc/ticket/12620#comment:5 Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: This is a digitally signed message part URL: From lemming at henning-thielemann.de Sun Oct 16 21:38:57 2016 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Sun, 16 Oct 2016 23:38:57 +0200 (CEST) Subject: [Haskell-cafe] code.haskell.org down Message-ID: Currently, both HTTP and SSH to code.haskell.org fail with a timeout. :-( From gershomb at gmail.com Mon Oct 17 05:32:21 2016 From: gershomb at gmail.com (Gershom B) Date: Mon, 17 Oct 2016 01:32:21 -0400 Subject: [Haskell-cafe] code.haskell.org down In-Reply-To: References: Message-ID: On October 16, 2016 at 5:39:12 PM, Henning Thielemann (lemming at henning-thielemann.de) wrote: > > Currently, both HTTP and SSH to code.haskell.org fail with a timeout. :-( Thanks for the report! We’ve rebooted the hetzner box and everything should be back up. 
As a reminder, the best way to contact haskell-infra admins is #haskell-infrastructure on freenode, or email to admin at haskell.org. Also note that as per the blog post here: http://blog.haskell.org/post/7/the_future_of_community.haskell.org/ and discussion here: https://www.reddit.com/r/haskell/comments/2wwc42/the_future_of_communityhaskellorg_request_for/ A retirement of the community / code.haskell.org box is long overdue. We’re going to try to restart the retirement process for the box soon, and if you haven’t moved your site and projects off now, this is probably a good time to do so, as we will be gently then more insistently nudging people to migrate over the coming months. Best, Gershom From lemming at henning-thielemann.de Mon Oct 17 06:42:37 2016 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Mon, 17 Oct 2016 08:42:37 +0200 (CEST) Subject: [Haskell-cafe] code.haskell.org down In-Reply-To: References: Message-ID: On Mon, 17 Oct 2016, Gershom B wrote: > On October 16, 2016 at 5:39:12 PM, Henning Thielemann (lemming at henning-thielemann.de) wrote: >> >> Currently, both HTTP and SSH to code.haskell.org fail with a timeout. :-( > > Thanks for the report! We’ve rebooted the hetzner box and everything should be back up. > As a reminder, the best way to contact haskell-infra admins is > #haskell-infrastructure on freenode, or email to admin at haskell.org. Ah, thank you. I was missing this information and I could not look it up at community.haskell.org while the server was down. > We’re going to try to restart the retirement process for the box soon, > and if you haven’t moved your site and projects off now, this is > probably a good time to do so, as we will be gently then more > insistently nudging people to migrate over the coming months. I moved my darcs-2 projects to hub.darcs.net but I have no good solution for the many darcs-1 projects and files that are not under version control. 
I also do not see an easy way to find all my files, because not all of them are in the home directory but in project directories. I also hope that the files remain available read-only on code.haskell.org after termination of the community service. From harendra.kumar at gmail.com Mon Oct 17 09:24:04 2016 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Mon, 17 Oct 2016 14:54:04 +0530 Subject: [Haskell-cafe] [ANN] xls-0.1.0 Parse MS Excel spreadsheets Message-ID: I have uploaded the xls package [1] on Hackage [2]. It works pretty well for the basic use case of parsing all sheets in a single stream of rows composed of cells. The cell values are presented as plain strings i.e. no data type based interpretation. Such stuff can be added if required, it is supported by the underlying C library (libxls). One thing that I would like to have added to the API is a way to list all sheets and select sheets to parse in a workbook. It should be pretty easy to do if anyone wants to do it. 1. https://github.com/harendra-kumar/xls 2. https://hackage.haskell.org/package/xls -harendra -------------- next part -------------- An HTML attachment was scrubbed... URL: From lemming at henning-thielemann.de Mon Oct 17 09:32:46 2016 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Mon, 17 Oct 2016 11:32:46 +0200 (CEST) Subject: [Haskell-cafe] [Haskell] [ANN] xls-0.1.0 Parse MS Excel spreadsheets In-Reply-To: References: Message-ID: On Mon, 17 Oct 2016, Harendra Kumar wrote: > I have uploaded the xls package [1] on Hackage [2]. It works pretty well for the basic use case of parsing all > sheets in a single stream of rows composed of cells. The cell values are presented as plain strings i.e. no data > type based interpretation. Such stuff can be added if required, it is supported by the underlying C library > (libxls). One thing that I would like to have added to the API is a way to list all sheets and select sheets to > parse in a workbook. 
It should be pretty easy to do if anyone wants to do it. > 1. https://github.com/harendra-kumar/xls > 2. https://hackage.haskell.org/package/xls Btw. I recently found out that a pretty nice and simple format for interchange between Haskell, Excel and LibreOffice is the Excel 2003 XML format. It supports Unicode, hyperlinks, merged cells, font styles, formulas, reliable number formats (no hassle with decimal point vs. decimal comma) - neither CSV nor HTML supports all of these features. From harendra.kumar at gmail.com Mon Oct 17 09:56:56 2016 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Mon, 17 Oct 2016 15:26:56 +0530 Subject: [Haskell-cafe] [Haskell] [ANN] xls-0.1.0 Parse MS Excel spreadsheets In-Reply-To: References: Message-ID: On 17 October 2016 at 15:02, Henning Thielemann < lemming at henning-thielemann.de> wrote: > > > Btw. I recently found out that a pretty nice and simple format for > interchange between Haskell, Excel and LibreOffice is the Excel 2003 XML > format. It supports Unicode, hyperlinks, merged cells, font styles, > formulas, reliable number formats (no hassle with decimal point vs. decimal > comma) - neither CSV nor HTML supports all of these features. Do you directly interpret the XML or there is a higher level package to do so? Why not use Office Open XML, the 2007 format? One way to interpret the older formats like BIFF/Excel-97 could be to convert them to the newer ones and then parse it as the newer format. That way we will have to deal with only one format. Are there any such lightweight, command line based converters available? -harendra -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lemming at henning-thielemann.de Mon Oct 17 10:04:08 2016 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Mon, 17 Oct 2016 12:04:08 +0200 (CEST) Subject: [Haskell-cafe] [Haskell] [ANN] xls-0.1.0 Parse MS Excel spreadsheets In-Reply-To: References: Message-ID: On Mon, 17 Oct 2016, Harendra Kumar wrote: > On 17 October 2016 at 15:02, Henning Thielemann wrote: > > Btw. I recently found out that a pretty nice and simple format for interchange between Haskell, > Excel and LibreOffice is the Excel 2003 XML format. It supports Unicode, hyperlinks, merged cells, > font styles, formulas, reliable number formats (no hassle with decimal point vs. decimal comma) - > neither CSV nor HTML supports all of these features. > > > Do you directly interpret the XML or there is a higher level package to do so? So far I have only used it with custom code in a project and have only written it, not parsed. From mdorman at jaunder.io Mon Oct 17 13:23:06 2016 From: mdorman at jaunder.io (Michael Alan Dorman) Date: Mon, 17 Oct 2016 09:23:06 -0400 Subject: [Haskell-cafe] 1TB vsize for all haskell processes? Message-ID: <87wph76ret.fsf@jaunder.io> Hey, haskell-cafe, I realize that this should not be a problem, but it is very strange to me that *every* haskell process on my system has a 1TB VSIZE. If I sort my `ps auwx` output on vsize, the top of the list looks like: mdorman 1208 0.0 0.1 1074011536 12500 ? S Oct14 0:04 /home/mdorman/.xmonad/xmonad-x86_64-linux mdorman 21900 0.0 0.2 1074111408 19712 pts/1 Sl+ 09:06 0:00 cabal run melpa2nix -- --output melpa-generated.nix --melpa /home/mdorman/src/melpa --work /home/mdorman/src/emacs2nix/.workdir mdorman 1363 0.0 0.1 1074383784 13052 ? 
Sl Oct14 1:14 /home/mdorman/.cache/taffybar/taffybar-linux-x86_64 +RTS -I0 -V0 --RTS --dyre-master-binary=/nix/store/6skvglp84w5xzqx8dxxydazk1zj8h2ih-taffybar-0.4.6/bin/taffybar mdorman 21981 8.6 0.5 1076223012 42308 pts/1 Sl+ 09:06 0:19 /home/mdorman/src/emacs2nix/dist/build/melpa2nix/melpa2nix --output melpa-generated.nix --melpa /home/mdorman/src/melpa --work /home/mdorman/src/emacs2nix/.workdir Each of those haskell processes has a vsize 5 orders of magnitude more than their resident set size, whether it's the two I always have running (xmonad and taffybar), or in this case, cabal itself, as well as the compiled executable it's running. I've spent a reasonable amount of time attempting to google this, to no avail. Is this perhaps a peculiarity of how GHC was built? I'm running on nixos-unstable, using ghc-8.0.1---is there some compilation that we should or shouldn't be setting? Thanks for any guidance, Mike. From mike at barrucadu.co.uk Mon Oct 17 13:32:08 2016 From: mike at barrucadu.co.uk (Michael Walker) Date: Mon, 17 Oct 2016 14:32:08 +0100 Subject: [Haskell-cafe] 1TB vsize for all haskell processes? In-Reply-To: <87wph76ret.fsf@jaunder.io> References: <87wph76ret.fsf@jaunder.io> Message-ID: This was introduced here: https://ghc.haskell.org/trac/ghc/ticket/9706 Basically, if you can have a huge virtual address space (which you can on a 64bit architecture) then you can simplify the memory management logic. On 17 October 2016 at 14:23, Michael Alan Dorman wrote: > Hey, haskell-cafe, > > I realize that this should not be a problem, but it is very strange to > me that *every* haskell process on my system has a 1TB VSIZE. If I sort > my `ps auwx` output on vsize, the top of the list looks like: > > mdorman 1208 0.0 0.1 1074011536 12500 ? 
S Oct14 0:04 /home/mdorman/.xmonad/xmonad-x86_64-linux > mdorman 21900 0.0 0.2 1074111408 19712 pts/1 Sl+ 09:06 0:00 cabal run melpa2nix -- --output melpa-generated.nix --melpa /home/mdorman/src/melpa --work /home/mdorman/src/emacs2nix/.workdir > mdorman 1363 0.0 0.1 1074383784 13052 ? Sl Oct14 1:14 /home/mdorman/.cache/taffybar/taffybar-linux-x86_64 +RTS -I0 -V0 --RTS --dyre-master-binary=/nix/store/6skvglp84w5xzqx8dxxydazk1zj8h2ih-taffybar-0.4.6/bin/taffybar > mdorman 21981 8.6 0.5 1076223012 42308 pts/1 Sl+ 09:06 0:19 /home/mdorman/src/emacs2nix/dist/build/melpa2nix/melpa2nix --output melpa-generated.nix --melpa /home/mdorman/src/melpa --work /home/mdorman/src/emacs2nix/.workdir > > Each of those haskell processes has a vsize 5 orders of magnitude more > than their resident set size, whether it's the two I always have running > (xmonad and taffybar), or in this case, cabal itself, as well as the > compiled executable it's running. > > I've spent a reasonable amount of time attempting to google this, to no > avail. Is this perhaps a peculiarity of how GHC was built? I'm running > on nixos-unstable, using ghc-8.0.1---is there some compilation that we > should or shouldn't be setting? > > Thanks for any guidance, > > Mike. > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -- Michael Walker (http://www.barrucadu.co.uk) From damian.nadales at gmail.com Mon Oct 17 15:32:39 2016 From: damian.nadales at gmail.com (Damian Nadales) Date: Mon, 17 Oct 2016 17:32:39 +0200 Subject: [Haskell-cafe] Arrows and Computation exercises? In-Reply-To: References: Message-ID: Wow, that looks like an interesting chapter! I have a lot in my tasks list now, but I added that chapter with a high priority. 
Right now I'm doing some study of free monads so this chapter seems to be relevant as well (in the topic of functional-programming architectures, or life beyond map and folds). I'll let you know if I start the exercises. On Fri, Oct 14, 2016 at 2:50 PM, David Banas wrote: > Hi all, > > I’m wondering if anyone else happens to be working their way through the > exercises in Ross Paterson’s *Arrows and Computation*, and would like to > compare notes and/or discuss solutions to his exercises. Here are mine, so > far: > > https://htmlpreview.github.io/?https://github.com/capn-freako/Haskell_Misc/blob/master/Arrows_and_Computation/Arrows_and_Computation.html > > Thanks, > -db > > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. From damian.nadales at gmail.com Mon Oct 17 15:39:01 2016 From: damian.nadales at gmail.com (Damian Nadales) Date: Mon, 17 Oct 2016 17:39:01 +0200 Subject: [Haskell-cafe] MTL vs Free-monads, what are your experiences In-Reply-To: References: Message-ID: On Fri, Oct 14, 2016 at 6:00 PM, Matt wrote: > The mtl technique subsumes the free monad technique. if you have a term: > > getCurrentTime :: MonadClock m => m UTCTime > > Then you can *use* that function as anything that satisfies the constraint. > Given an `IO` instance, that can just get the current time: `getCurrentTime > :: IO UTCTime`. Given a mock instance, that can be `getCurrentTime :: > MockClock UTCTime`. Given an instance on Free, you'd have `getCurrentTime :: > Free CurrentTimeF UTCTime` > Thanks Matt. I think that was a nice explanation. Right now I'm focusing on the composition and natural transformations of free-monads, but I still haven't checked the MTL approach. > I generally find it more pleasant to write functions in mtl style. 
If you're > after more concrete guarantees on the DSL you're building and see yourself > doing a lot of introspection and optimization, then a Free monad approach > fits the bill. > I definitely like monad transformers. But I guess I'd have to explain the specific case in another thread. > Matt Parsons > > On Fri, Oct 14, 2016 at 11:35 AM, Damian Nadales > wrote: >> >> Hi, >> >> I was looking into free monads for designing a DSL for describing >> scenarios of the form: >> scenario = do >> aId <- createA >> b0Id <- createB id >> b1Id <- createB id >> link b0 b1 >> >> In our company we use a graph database, and currently we're setting up >> the test data using raw queries O.O So I wanted to come up with a >> better abstraction, and also enable us to do property based testing >> (by testing on random scenarios like the one above). >> >> Anyway, I ran into this presentation: >> http://www.slideshare.net/jdegoes/mtl-versus-free >> http://degoes.net/articles/modern-fp-part-2 >> >> In which monad transformers and free monads are compared. Do you have >> any experience using any of these approaches. If so would you mind >> sharing? ;) >> >> Thanks! >> Damian >> _______________________________________________ >> Haskell-Cafe mailing list >> To (un)subscribe, modify options or view archives go to: >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> Only members subscribed via the mailman list are allowed to post. > > From agocorona at gmail.com Mon Oct 17 15:46:12 2016 From: agocorona at gmail.com (Alberto G. Corona ) Date: Mon, 17 Oct 2016 17:46:12 +0200 Subject: [Haskell-cafe] MTL vs Free-monads, what are your experiences In-Reply-To: <1476539382.1073.2.camel@joachim-breitner.de> References: <1476539382.1073.2.camel@joachim-breitner.de> Message-ID: There is a free monad benchmark: https://rawgit.com/feuerbach/freemonad-benchmark/master/results.html Not very good for the free monads. But it is done only with a single transformer and for a single state. 
I don't know how the MTL performance degrades when the transformer stack grows. 2016-10-15 15:49 GMT+02:00 Joachim Breitner :
> Hi,
>
> Am Freitag, den 14.10.2016, 17:35 +0200 schrieb Damian Nadales:
> > Do you have
> > any experience using any of these approaches. If so would you mind
> > sharing? ;)
>
> I don’t have an answer to contribute, but I would be very interested in
> hearing about experiences in terms of their relative runtime
> performance.
>
> My gut feeling is that an indirect function call for every (>>=),
> with many calls to `lift` each time, would make a deep monad
> transformer stack much more expensive. A free monad approach seems to
> be more sensible to me. But maybe GHC is doing a better job optimizing
> this than I would think?
>
> So if you have any number-supported evidence about this, possibly from
> a real-world application where you tried to use one or the other,
> please share it with us!
>
> Thanks,
> Joachim
>
> --
> Joachim “nomeata” Breitner
> mail at joachim-breitner.de • https://www.joachim-breitner.de/
> XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F
> Debian Developer: nomeata at debian.org
> _______________________________________________
> Haskell-Cafe mailing list
> To (un)subscribe, modify options or view archives go to:
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
> Only members subscribed via the mailman list are allowed to post.
> -- Alberto. -------------- next part -------------- An HTML attachment was scrubbed... 
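Matt's `MonadClock` example earlier in the thread can be made concrete with a small sketch. This is an illustration, not code from any library: the class method name `currentTime`, the `MockClock` type, and the use of `Integer` ticks (instead of `UTCTime`, to keep the sketch dependent only on `base`) are all assumptions made here.

```haskell
-- A minimal mtl-style effect class, in the spirit of Matt's MonadClock example.
-- NOTE: Integer ticks stand in for UTCTime so the sketch needs only base.
import System.CPUTime (getCPUTime)

class Monad m => MonadClock m where
  currentTime :: m Integer

-- The "real" instance: any IO-based program satisfies the constraint.
instance MonadClock IO where
  currentTime = getCPUTime  -- CPU time in picoseconds, standing in for a wall clock

-- A pure mock instance whose clock is frozen, for testing.
newtype MockClock a = MockClock { runMockClock :: a }

instance Functor MockClock where
  fmap f (MockClock a) = MockClock (f a)

instance Applicative MockClock where
  pure = MockClock
  MockClock f <*> MockClock a = MockClock (f a)

instance Monad MockClock where
  return = pure
  MockClock a >>= k = k a

instance MonadClock MockClock where
  currentTime = MockClock 42  -- the mock clock always reads t = 42

-- Written once against the constraint, usable with either instance.
sinceEpoch :: MonadClock m => m String
sinceEpoch = do
  t <- currentTime
  return ("t = " ++ show t)
```

Against `IO`, `sinceEpoch` reads a real clock; against `MockClock` it is a pure value, `runMockClock sinceEpoch == "t = 42"`, which is what makes the mtl style convenient for testing.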
URL: From hasufell at hasufell.de Mon Oct 17 18:06:31 2016 From: hasufell at hasufell.de (Julian) Date: Mon, 17 Oct 2016 20:06:31 +0200 Subject: [Haskell-cafe] MTL vs Free-monads, what are your experiences In-Reply-To: <1476539382.1073.2.camel@joachim-breitner.de> References: <1476539382.1073.2.camel@joachim-breitner.de> Message-ID: <414986f8-e6e5-616c-3ff8-4892741064f2@hasufell.de> On 15/10/16 15:49, Joachim Breitner wrote: > Hi, > > Am Freitag, den 14.10.2016, 17:35 +0200 schrieb Damian Nadales: >> Do you have >> any experience using any of these approaches. If so would you mind >> sharing? ;) > > I don’t have an answer to contribute, but I would be very interested in > hearing about experiences in terms of their relative runtime > performance. > > My gut feeling is that an an indirect function call for every (>>=), > with many calls to `lift` each time, would make a deep monad > transformer stack much more expensive. A free monad approach seems to > be more sensible to me. But maybe GHC is doing a better job optimizing > this than I would think? > > So if you have any number-supported evidence about this, possibly from > a real-world application where you tried to use one or the other, > please share it with us! > There's a paper from Oleg discussing "Freer Monads, More Extensible Effects": http://okmij.org/ftp/Haskell/extensible/more.pdf The conclusion there seems to be that the EE approach is more "efficient". But you'll have to look at the concrete performance cases and data yourself to make a judgement. From simon at joyful.com Mon Oct 17 20:13:46 2016 From: simon at joyful.com (Simon Michael) Date: Mon, 17 Oct 2016 13:13:46 -0700 Subject: [Haskell-cafe] code.haskell.org down In-Reply-To: References: Message-ID: On 10/16/16 11:42 PM, Henning Thielemann wrote: > I moved my darcs-2 projects to hub.darcs.net but I have no good solution > for the many darcs-1 projects and files that are not under version > control. 
Hi Henning, just curious, why is converting those darcs-1 projects to darcs-2 format not a good solution ? I may have forgotten some reasons. -Simon From kazu at iij.ad.jp Tue Oct 18 06:47:52 2016 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Tue, 18 Oct 2016 15:47:52 +0900 (JST) Subject: [Haskell-cafe] conference rating for Haskell Symposium Message-ID: <20161018.154752.2174194936286988085.kazu@iij.ad.jp> Hello, I would like to know a reasonable conference rating for Haskell Symposium. Many rating pages have only Haskell Workshop. For instance, it is categorized as Rank C: http://lipn.univ-paris13.fr/~bennani/CSRank.html Should we ask such pages to register Haskell Symposium? --Kazu From harendra.kumar at gmail.com Tue Oct 18 09:20:43 2016 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Tue, 18 Oct 2016 14:50:43 +0530 Subject: [Haskell-cafe] Easy cross platform CI testing of Haskell packages Message-ID: Hi, If as a package maintainer, you want to: * test your package with cabal & stack both * test your package on Linux, Mac, Windows * test your package on travis, appveyor or local machine in the same way * make sure that the _source dist_ that you are going to upload is tested * upload coverage report to coveralls.io * customize the build your way But you do not want the drudgery and the pain of writing elaborate shell scripts inside travis or appveyor yaml config and then debug them, then this script is for you. You just have to declare some environment variables in your build matrix and finally call this script and you are done. It works consistently the same way for all build types so you don't worry about whether all platforms are testing the same way or not. 
* Script: https://github.com/harendra-kumar/package-test * Travis Example: https://github.com/harendra-kumar/xls/blob/master/.travis.yml * Appveyor Example: https://github.com/harendra-kumar/xls/blob/master/appveyor.yml Feedback and suggestions are welcome! -harendra -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at nh2.me Tue Oct 18 09:39:38 2016 From: mail at nh2.me (=?UTF-8?Q?Niklas_Hamb=c3=bcchen?=) Date: Tue, 18 Oct 2016 11:39:38 +0200 Subject: [Haskell-cafe] 1TB vsize for all haskell processes? In-Reply-To: References: <87wph76ret.fsf@jaunder.io> Message-ID: <35025f22-b475-5204-1ffe-e628c75b44be@nh2.me> I, too, found this change a bit problematic btw: It means I can no longer run Haskell on systems where memory overcommit is disabled. For example, I used to run my shell with an appropriate `ulimit -v` to guarantee that a single program can't force me into swapping; I can no longer do that. From mithrandi at mithrandi.net Tue Oct 18 10:31:41 2016 From: mithrandi at mithrandi.net (Tristan Seligmann) Date: Tue, 18 Oct 2016 10:31:41 +0000 Subject: [Haskell-cafe] 1TB vsize for all haskell processes? In-Reply-To: <35025f22-b475-5204-1ffe-e628c75b44be@nh2.me> References: <87wph76ret.fsf@jaunder.io> <35025f22-b475-5204-1ffe-e628c75b44be@nh2.me> Message-ID: As far as I know, this behaviour should not be affected by overcommit as the unused pages are all mapped with PROT_NONE and thus do not count towards the commit limit as they cannot be used without the mapping being changed. `ulimit -v` (aka RLIMIT_AS) however limits the actual address space size, and so this does count towards that (as do mmap()ed files and other such virtual mappings that do not count towards the commit limit). On Tue, 18 Oct 2016 at 11:39 Niklas Hambüchen wrote: > I, too, found this change a bit problematic btw: It means I can no > longer run Haskell on systems where memory overcommit is disabled. 
> > For example, I used to run my shell with an appropriate `ulimit -v` to
> > guarantee that a single program can't force me into swapping; I can no
> > longer do that.
> _______________________________________________
> Haskell-Cafe mailing list
> To (un)subscribe, modify options or view archives go to:
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
> Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From simons at nospf.cryp.to Tue Oct 18 11:55:26 2016 From: simons at nospf.cryp.to (Peter Simons) Date: Tue, 18 Oct 2016 13:55:26 +0200 Subject: [Haskell-cafe] Easy cross platform CI testing of Haskell packages References: Message-ID: <87funtlvm9.fsf@write-only.cryp.to> Hi Harendra, I've been a happy user of for a while, and that script generates fairly sophisticated build scripts. Now I wonder whether your solution has any advantages or disadvantages compared to that generator? Can anyone shed some light on the respective traits of these solutions? Best regards, Peter From damian.nadales at gmail.com Tue Oct 18 12:08:52 2016 From: damian.nadales at gmail.com (Damian Nadales) Date: Tue, 18 Oct 2016 14:08:52 +0200 Subject: [Haskell-cafe] A use case of the Free-Monad: logging Message-ID: Hi, I'm trying to figure out how you would incorporate logging when using the free monad. I've asked the following question on StackOverflow: http://stackoverflow.com/questions/40105759/logging-using-the-free-monad Maybe somebody has some ideas. Thanks, Damian. 
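One common direction for the logging question is to make logging an instruction of the DSL and let the interpreter decide what to do with it. Below is a minimal, self-contained sketch: the `Free` type mirrors the one in the `free` package (`Control.Monad.Free`), while `LogF`, `logMsg` and `runLog` are illustrative names invented here, not library functions.

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- A self-contained Free monad (the `free` package provides an equivalent type).
data Free f a = Pure a | Free (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a)  = Pure (g a)
  fmap g (Free fa) = Free (fmap (fmap g) fa)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g  <*> x = fmap g x
  Free fg <*> x = Free (fmap (<*> x) fg)

instance Functor f => Monad (Free f) where
  return = pure
  Pure a  >>= k = k a
  Free fa >>= k = Free (fmap (>>= k) fa)

-- The DSL: a single logging instruction.
data LogF next = Log String next deriving Functor

logMsg :: String -> Free LogF ()
logMsg s = Free (Log s (Pure ()))

-- A pure interpreter that collects the log lines alongside the result.
-- An IO interpreter could instead `putStrLn` at each Log instruction.
runLog :: Free LogF a -> (a, [String])
runLog (Pure a)           = (a, [])
runLog (Free (Log s nxt)) = let (a, ss) = runLog nxt in (a, s : ss)

program :: Free LogF Int
program = do
  logMsg "start"
  logMsg "done"
  return (1 + 1)
```

Here `runLog program` yields `(2, ["start","done"])`; swapping interpreters changes how (or whether) the messages are emitted without touching `program`.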
From ruben.astud at gmail.com Tue Oct 18 12:12:22 2016 From: ruben.astud at gmail.com (Ruben Astudillo) Date: Tue, 18 Oct 2016 09:12:22 -0300 Subject: [Haskell-cafe] tokenize parser combinators and free applicatives Message-ID: <45d2ea2d-c5aa-9988-4042-fcc2776befb4@gmail.com> Hi all, in [1] it is said the following: "Dealing with whitespace and comments is awkward in the parser; you need to wrap everything in a token combinator. (If you decide to do that, at least use a free applicative functor to ensure that you don’t forget to consume that whitespace)." Reading the definition of Ap in `free`, frankly I don't see how the free applicative can be used for this. Could anybody drop a hint? I would appreciate it. [1]: https://ro-che.info/articles/2015-01-02-lexical-analysis -- Ruben From nickolay.kudasov at gmail.com Tue Oct 18 12:29:46 2016 From: nickolay.kudasov at gmail.com (Nickolay Kudasov) Date: Tue, 18 Oct 2016 12:29:46 +0000 Subject: [Haskell-cafe] tokenize parser combinators and free applicatives In-Reply-To: <45d2ea2d-c5aa-9988-4042-fcc2776befb4@gmail.com> References: <45d2ea2d-c5aa-9988-4042-fcc2776befb4@gmail.com> Message-ID: Hi Ruben, I imagine the free applicative would allow you to easily insert whitespace/comment eaters afterwards. For instance, say you have a Parser applicative for parsing. Then Ap Parser would represent the same parser, but with parsing combinators separated with Ap constructors. You would use Ap Parser when defining your grammar. Then you could "intersperse" whitespace eaters in between the combinators and "retract" the resulting Ap Parser into just Parser. That would probably be a cleaner approach compared to having every combinator wrapped in a trimWhiteSpacesAndComments combinator. Kind regards, Nick On Tue, 18 Oct 2016 at 15:12 Ruben Astudillo wrote:
> Hi all,
>
> in [1] it is said the following:
>
> "Dealing with whitespace and comments is awkward in the parser; you
> need to wrap everything in a token combinator. 
(If you decide to do > that, at least use a free applicative functor to ensure that you > don’t forget to consume that whitespace)." > > Reading on `free` the def of Ap, frankly I don't see how can the free > applicative be used for this. Anybody could drop a hint? I would > appreciate it. > > [1]: https://ro-che.info/articles/2015-01-02-lexical-analysis > -- Ruben > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From harendra.kumar at gmail.com Tue Oct 18 12:40:48 2016 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Tue, 18 Oct 2016 18:10:48 +0530 Subject: [Haskell-cafe] Easy cross platform CI testing of Haskell packages In-Reply-To: <87funtlvm9.fsf@write-only.cryp.to> References: <87funtlvm9.fsf@write-only.cryp.to> Message-ID: Hi Peter, I was not aware of that script, thanks for the pointer. multi-ghc-travis is more high level as it generates the travis config using the cabal file's tested-with field, something that was on my todo list. My script is a shell script to be invoked from a travis/appveyor.yml config. It can potentially be used in place of the shell snippets generated by multi-ghc-travis. So I can compare the shell part of the two, some differences that I can see on a quick look: * package-test is more general, it works for windows as well * It can be run easily on your local machine as well which is convenient for debugging any failures or if you just want to test on local machine instead of travis. 
* It supports cabal as well as stack for testing * It tests from source distribution to make sure the generated source dist is not broken * It has a simple knob to send coverage info to coveralls.io -harendra On 18 October 2016 at 17:25, Peter Simons wrote: > Hi Harendra, > > I've been a happy user of for a > while, and that script generates fairly sophisticated build scripts. Now I > wonder whether your solution has any advantages or disadvantages compared > to that generator? Can anyone share some light on the respective traits of > these solutions? > > Best regards, > Peter > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From trupill at gmail.com Tue Oct 18 12:53:28 2016 From: trupill at gmail.com (Alejandro Serrano Mena) Date: Tue, 18 Oct 2016 14:53:28 +0200 Subject: [Haskell-cafe] Obtaining location of subexpressions from Template Haskell Message-ID: Dear Haskell-café, I am trying to write a small TH module which manipulates Haskell code. Basically, I have a function `transform :: Exp -> Q Exp` which I call this way: > g = $(transform [| map (+1) [1,2,3] |]) Since this is a source-to-source transformation, I would like to generate also {-# LINE ... #-} pragmas to point any error back to their original location. My question is: is there any way to obtain the location of sub-expressions inside a `Exp` or `Q Exp`? Thanks in advance, Alejandro -------------- next part -------------- An HTML attachment was scrubbed... URL: From branimir.maksimovic at gmail.com Tue Oct 18 13:53:58 2016 From: branimir.maksimovic at gmail.com (Branimir Maksimovic) Date: Tue, 18 Oct 2016 15:53:58 +0200 Subject: [Haskell-cafe] 1TB vsize for all haskell processes? 
In-Reply-To: References: <87wph76ret.fsf@jaunder.io> <35025f22-b475-5204-1ffe-e628c75b44be@nh2.me> Message-ID: Indeed, I have overcommit 48GB max. No problem running haskell programs. On 10/18/2016 12:31 PM, Tristan Seligmann wrote: > As far as I know, this behaviour should not be affected by overcommit > as the unused pages are all mapped with PROT_NONE and thus do not > count towards the commit limit as they cannot be used without the > mapping being changed. `ulimit -v` (aka RLIMIT_AS) however limits the > actual address space size, and so this does count towards that (as do > mmap()ed files and other such virtual mappings that do not count > towards the commit limit). > > On Tue, 18 Oct 2016 at 11:39 Niklas Hambüchen > wrote: > > I, too, found this change a bit problematic btw: It means I can no > longer run Haskell on systems where memory overcommit is disabled. > > For example, I used to run my shell with an appropriate `ulimit -v` to > guarantee that a single program can't force me into swapping; I can no > longer do that. > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. > > > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at nh2.me Tue Oct 18 14:26:18 2016 From: mail at nh2.me (=?UTF-8?Q?Niklas_Hamb=c3=bcchen?=) Date: Tue, 18 Oct 2016 16:26:18 +0200 Subject: [Haskell-cafe] 1TB vsize for all haskell processes? 
In-Reply-To: References: <87wph76ret.fsf@jaunder.io> <35025f22-b475-5204-1ffe-e628c75b44be@nh2.me> Message-ID: <334fd184-d954-0108-af23-3927739afcf8@nh2.me> Hi Tristan and Branimir, this is interesting, I didn't know that ulimit makes this distinction and that the way GHC maps the pages doesn't count towards the accounting limit (the limit as described e.g. on https://www.etalabs.net/overcommit.html in the section starting with "The approach taken in reality"). Is it possible to set the the overcommit settings per shell / per program like `ulimit -v` allows? On 18/10/16 12:31, Tristan Seligmann wrote: > As far as I know, this behaviour should not be affected by overcommit as > the unused pages are all mapped with PROT_NONE and thus do not count > towards the commit limit as they cannot be used without the mapping > being changed. `ulimit -v` (aka RLIMIT_AS) however limits the actual > address space size, and so this does count towards that (as do mmap()ed > files and other such virtual mappings that do not count towards the > commit limit). From ruben.astud at gmail.com Tue Oct 18 21:06:01 2016 From: ruben.astud at gmail.com (Ruben Astudillo) Date: Tue, 18 Oct 2016 18:06:01 -0300 Subject: [Haskell-cafe] tokenize parser combinators and free applicatives In-Reply-To: References: <45d2ea2d-c5aa-9988-4042-fcc2776befb4@gmail.com> Message-ID: On 18/10/16 09:29, Nickolay Kudasov wrote: > Hi Ruben, > > I imagine, free applicative would allow you to easily insert > whitespace/comment eaters afterwards. For instance, say you have > Parser applicative for parsing. Then Ap Parser would represent the > same parser, but with parsing combinators separated with Ap > constructors. You would use Ap Parser when defining your grammar. Then > you could "intersperse" whitespace eaters in between the combinators > and "retract" the resulting Ap Parser into just Parser. 
That would > probably be a cleaner approach compared to having every combinator > wrapped in trimWhiteSpacesAndComments combinator. I got the idea of what you said. Even if I want to intersperse `space` in the downgrade I end up with stuff on the wrong order. Probably has to do with the fact that Parsec already has a Applicative instance which I am using, instead of just Functor. import Text.Parsec import Control.Applicative hiding (many) import Control.Applicative.Free {- prints: Left (line 1, column 1): unexpected "h" expecting space -} main :: IO () main = print $ example2 -- Works example :: Either ParseError String example = parse query "" "hi number 5" where query = many letter *> space *> many letter *> space *> many digit -- Works in wrong order example2 :: Either ParseError String example2 = parse (down query) "" "hi number 5" where query = liftAp (many letter) *> liftAp (many letter) *> liftAp (many digit) down :: Ap (Parsec String u0) a -> Parsec String u0 a down (Pure a) = pure a down (Ap fa ap) = down ap <* space <*> fa {- to help understanding instance Applicative (Ap f) where pure = Pure Pure f <*> y = fmap f y Ap x y <*> z = Ap x (flip <$> y <*> z) Ap x y :: (Ap f (a -> b)) z :: (Ap f a) x :: (f c) y :: (Ap f (c -> a -> b)) flip <$> y <*> z :: Ap f (c -> b) flip <$> y :: Ap f (a -> c -> b) liftAp space :: Ap Parser Char liftAp space = Ap space (Pure id) liftAp letter :: Ap Parser Char liftAp letter = Ap letter (Pure id) liftAp space *> liftAp letter :: Ap Parser Char = (id <$ (Ap space (Pure id))) <*> Ap letter (Pure id) = Ap space (Pure (const id)) <*> Ap letter (Pure id) = Ap space (flip <$> (Pure (const id)) <*> Ap letter (Pure id)) = Ap space ( Pure (\a _ -> a)) <*> Ap letter (Pure id) ) = Ap space ( Ap letter (Pure (const id)) ) -} -- -- Ruben From ruben.astud at gmail.com Tue Oct 18 21:23:51 2016 From: ruben.astud at gmail.com (Ruben Astudillo) Date: Tue, 18 Oct 2016 18:23:51 -0300 Subject: [Haskell-cafe] tokenize parser combinators and free 
applicatives In-Reply-To: References: <45d2ea2d-c5aa-9988-4042-fcc2776befb4@gmail.com> Message-ID: <39e14d7e-46b6-ce88-7f07-06b37715ef26@gmail.com> On 18/10/16 18:06, Ruben Astudillo wrote: > -- Works in wrong order > example2 :: Either ParseError String > example2 = parse (down query) "" "hi number 5" > where > query = liftAp (many letter) > *> liftAp (many letter) > *> liftAp (many digit) > I got it! I had to use the `runAp` combinator that the `free` package offered. Changing to this version, of the function above in the previous code, makes it work as intend. Sorry for bothering you! example2 :: Either ParseError String example2 = parse (runAp (\f -> f <* skipMany space) query) "" "hi number 5" where query = liftAp (many letter) *> liftAp (many letter) *> liftAp (many digit) -- -- Ruben From bence.kodaj at gmail.com Wed Oct 19 12:13:04 2016 From: bence.kodaj at gmail.com (Bence Kodaj) Date: Wed, 19 Oct 2016 14:13:04 +0200 Subject: [Haskell-cafe] Why does [1.0, 3 ..4] contain 5? Message-ID: Hi all, Does anybody happen to know why [1.0, 3 ..4 ] is [1.0, 3.0, 5.0] ? I do realize I'm not supposed to use enumerated lists with doubles, so this is just a question out of pure curiosity. I ran into this example accidentally, and I find it counter-intuitive - I would naively expect that [x, y .. z] does not contain elements greater than z (assuming x < y < z). The root cause of why [1.0, 3 .. 4] contains 5.0 is that in the Enum instances for Double and Float, enumFromThenTo is defined like this: numericEnumFromThenTo e1 e2 e3 = takeWhile predicate (numericEnumFromThen e1 e2 ) where mid = (e2 - e1 ) / 2 predicate | e2 >= e1 = (<= e3 + mid ) | otherwise = (>= e3 + mid ) and with the concrete values in the example, the predicate becomes (<=5.0). My question is this: why can't we simply use (<= e3) as the predicate? Why is the upper limit (e3) increased by half of the length of the e1 .. e2 interval (mid)? 
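The definitions quoted above can be reproduced outside GHC with a simplified stand-in. Note that `numericEnumFromThen` below is an assumption: the real definition in GHC's libraries is written as a recursive worker, but it generates the same arithmetic sequence, so the observable behaviour matches.

```haskell
-- Simplified stand-ins for the Enum definitions quoted above.
numericEnumFromThen :: Double -> Double -> [Double]
numericEnumFromThen e1 e2 = iterate (+ (e2 - e1)) e1

-- The actual strategy: stop half a step past the upper limit.
numericEnumFromThenTo :: Double -> Double -> Double -> [Double]
numericEnumFromThenTo e1 e2 e3 = takeWhile predicate (numericEnumFromThen e1 e2)
  where
    mid = (e2 - e1) / 2
    predicate | e2 >= e1  = (<= e3 + mid)
              | otherwise = (>= e3 + mid)

-- The "obvious" alternative with (<= e3), for comparison.
strictEnumFromThenTo :: Double -> Double -> Double -> [Double]
strictEnumFromThenTo e1 e2 e3 = takeWhile (<= e3) (numericEnumFromThen e1 e2)
```

With the values from the example, `mid` is 1.0 and the predicate becomes `(<= 5.0)`, so `numericEnumFromThenTo 1.0 3 4` is `[1.0, 3.0, 5.0]`. The strict variant shows the rounding hazard the half-step guards against: `strictEnumFromThenTo 0.1 0.2 0.3` is `[0.1, 0.2]`, because the third element is computed as 0.30000000000000004, just above 0.3, and gets cut off.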
Can someone give an example where using (<=e3) as predicate would give a bad result? I'm guessing that the answer has something to do with the quirks of floating-point arithmetic (rounding etc.), of which I'm not an expert at all :) Regards, Bence -------------- next part -------------- An HTML attachment was scrubbed... URL: From damian.nadales at gmail.com Wed Oct 19 12:29:38 2016 From: damian.nadales at gmail.com (Damian Nadales) Date: Wed, 19 Oct 2016 14:29:38 +0200 Subject: [Haskell-cafe] MTL vs Free-monads, what are your experiences In-Reply-To: <414986f8-e6e5-616c-3ff8-4892741064f2@hasufell.de> References: <1476539382.1073.2.camel@joachim-breitner.de> <414986f8-e6e5-616c-3ff8-4892741064f2@hasufell.de> Message-ID: I was thinking, besides the evaluation of performance, the simplicity of the approach is also important ("developer time is more expensive than CPU time" anyone?). Note that I said simple and not easy ;) I guess this aspect is a rather subjective one, but maybe there are elements that can be intuitively quantified. Right now I'm playing with free monads and MTL, to have an idea which one seems simpler to me. On Mon, Oct 17, 2016 at 8:06 PM, Julian wrote: > On 15/10/16 15:49, Joachim Breitner wrote: >> Hi, >> >> Am Freitag, den 14.10.2016, 17:35 +0200 schrieb Damian Nadales: >>> Do you have >>> any experience using any of these approaches. If so would you mind >>> sharing? ;) >> >> I don’t have an answer to contribute, but I would be very interested in >> hearing about experiences in terms of their relative runtime >> performance. >> >> My gut feeling is that an an indirect function call for every (>>=), >> with many calls to `lift` each time, would make a deep monad >> transformer stack much more expensive. A free monad approach seems to >> be more sensible to me. But maybe GHC is doing a better job optimizing >> this than I would think? 
>> >> So if you have any number-supported evidence about this, possibly from >> a real-world application where you tried to use one or the other, >> please share it with us! >> > > There's a paper from Oleg discussing "Freer Monads, More Extensible > Effects": http://okmij.org/ftp/Haskell/extensible/more.pdf > > The conclusion there seems to be that the EE approach is more > "efficient". But you'll have to look at the concrete performance cases > and data yourself to make a judgement. > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. From toad3k at gmail.com Wed Oct 19 12:56:56 2016 From: toad3k at gmail.com (David McBride) Date: Wed, 19 Oct 2016 08:56:56 -0400 Subject: [Haskell-cafe] MTL vs Free-monads, what are your experiences In-Reply-To: References: <1476539382.1073.2.camel@joachim-breitner.de> <414986f8-e6e5-616c-3ff8-4892741064f2@hasufell.de> Message-ID: I'll say that this conversation caused me to rewrite some networking code I had written in a free monad as mtl. It is faster and simpler, right? But transformer stacks with all their typeclasses are really hard to work with and I don't think it was worth it in the end. On Wed, Oct 19, 2016 at 8:29 AM, Damian Nadales wrote: > I was thinking, besides the evaluation of performance, the simplicity > of the approach is also important ("developer time is more expensive > than CPU time" anyone?). Note that I said simple and not easy ;) > > I guess this aspect is a rather subjective one, but maybe there are > elements that can be intuitively quantified. Right now I'm playing > with free monads and MTL, to have an idea which one seems simpler to > me. 
> > > > On Mon, Oct 17, 2016 at 8:06 PM, Julian wrote: > > On 15/10/16 15:49, Joachim Breitner wrote: > >> Hi, > >> > >> Am Freitag, den 14.10.2016, 17:35 +0200 schrieb Damian Nadales: > >>> Do you have > >>> any experience using any of these approaches. If so would you mind > >>> sharing? ;) > >> > >> I don’t have an answer to contribute, but I would be very interested in > >> hearing about experiences in terms of their relative runtime > >> performance. > >> > >> My gut feeling is that an an indirect function call for every (>>=), > >> with many calls to `lift` each time, would make a deep monad > >> transformer stack much more expensive. A free monad approach seems to > >> be more sensible to me. But maybe GHC is doing a better job optimizing > >> this than I would think? > >> > >> So if you have any number-supported evidence about this, possibly from > >> a real-world application where you tried to use one or the other, > >> please share it with us! > >> > > > > There's a paper from Oleg discussing "Freer Monads, More Extensible > > Effects": http://okmij.org/ftp/Haskell/extensible/more.pdf > > > > The conclusion there seems to be that the EE approach is more > > "efficient". But you'll have to look at the concrete performance cases > > and data yourself to make a judgement. > > > > _______________________________________________ > > Haskell-Cafe mailing list > > To (un)subscribe, modify options or view archives go to: > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > Only members subscribed via the mailman list are allowed to post. > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alexey.raga at gmail.com Wed Oct 19 13:13:03 2016 From: alexey.raga at gmail.com (Alexey Raga) Date: Wed, 19 Oct 2016 13:13:03 +0000 Subject: [Haskell-cafe] MTL vs Free-monads, what are your experiences In-Reply-To: References: <1476539382.1073.2.camel@joachim-breitner.de> <414986f8-e6e5-616c-3ff8-4892741064f2@hasufell.de> Message-ID: Working with stacks like "ReaderT AppConfig (StateT AppState IO)" may be hard, but using MonadIO, MonadState, MonadReader etc. is much simpler. This session explains it well: https://www.youtube.com/watch?v=GZPup5Iuaqw I haven't yet tried Oleg's approach with Freer monads and extensible effects though, it looks very interesting. Especially when creating new types of effects because writing an "interpreter" seems easier than writing something like "MyEffectT / MyEffectMonad" and giving it all the required instances. On Wed, Oct 19, 2016 at 11:57 PM David McBride wrote: > I'll say that this conversation caused me to rewrite some networking code > I had written in a free monad as mtl. It is faster and simpler, right? > But transformer stacks with all their typeclasses are really hard to work > with and I don't think it was worth it in the end. > > On Wed, Oct 19, 2016 at 8:29 AM, Damian Nadales > wrote: > > I was thinking, besides the evaluation of performance, the simplicity > of the approach is also important ("developer time is more expensive > than CPU time" anyone?). Note that I said simple and not easy ;) > > I guess this aspect is a rather subjective one, but maybe there are > elements that can be intuitively quantified. Right now I'm playing > with free monads and MTL, to have an idea which one seems simpler to > me. > > > > On Mon, Oct 17, 2016 at 8:06 PM, Julian wrote: > > On 15/10/16 15:49, Joachim Breitner wrote: > >> Hi, > >> > >> Am Freitag, den 14.10.2016, 17:35 +0200 schrieb Damian Nadales: > >>> Do you have > >>> any experience using any of these approaches. If so would you mind > >>> sharing? 
;) > >> > >> I don’t have an answer to contribute, but I would be very interested in > >> hearing about experiences in terms of their relative runtime > >> performance. > >> > >> My gut feeling is that an an indirect function call for every (>>=), > >> with many calls to `lift` each time, would make a deep monad > >> transformer stack much more expensive. A free monad approach seems to > >> be more sensible to me. But maybe GHC is doing a better job optimizing > >> this than I would think? > >> > >> So if you have any number-supported evidence about this, possibly from > >> a real-world application where you tried to use one or the other, > >> please share it with us! > >> > > > > There's a paper from Oleg discussing "Freer Monads, More Extensible > > Effects": http://okmij.org/ftp/Haskell/extensible/more.pdf > > > > The conclusion there seems to be that the EE approach is more > > "efficient". But you'll have to look at the concrete performance cases > > and data yourself to make a judgement. > > > > _______________________________________________ > > Haskell-Cafe mailing list > > To (un)subscribe, modify options or view archives go to: > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > Only members subscribed via the mailman list are allowed to post. > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. > > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... 
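Alexey's contrast above can be sketched concretely: the function is written against mtl constraints only, and just the runner mentions the concrete `ReaderT AppConfig (StateT AppState IO)` stack. `AppConfig`, `AppState`, `tick` and `runApp` are made-up illustrative names, and the sketch assumes the `mtl` package is available.

```haskell
{-# LANGUAGE FlexibleContexts #-}

import Control.Monad.Reader
import Control.Monad.State

newtype AppConfig = AppConfig { step :: Int }
type AppState = Int

-- Written against the constraints only: no mention of the concrete
-- stack, and no manual calls to `lift`.
tick :: (MonadReader AppConfig m, MonadState AppState m) => m Int
tick = do
  n <- asks step
  modify (+ n)
  get

-- Only the runner commits to ReaderT AppConfig (StateT AppState IO).
runApp :: AppConfig -> AppState
       -> ReaderT AppConfig (StateT AppState IO) a -> IO a
runApp cfg s0 act = evalStateT (runReaderT act cfg) s0
```

For example, `runApp (AppConfig 5) 0 tick` returns 5. For tests, the same `tick` can be run in a pure stack such as `ReaderT AppConfig (State AppState)` without changing its definition.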
URL: From damian.nadales at gmail.com Wed Oct 19 13:24:13 2016 From: damian.nadales at gmail.com (Damian Nadales) Date: Wed, 19 Oct 2016 15:24:13 +0200 Subject: [Haskell-cafe] MTL vs Free-monads, what are your experiences In-Reply-To: References: <1476539382.1073.2.camel@joachim-breitner.de> <414986f8-e6e5-616c-3ff8-4892741064f2@hasufell.de> Message-ID: BTW, when making the distinction between simple and easy I was referring to this talk: https://www.infoq.com/presentations/Simple-Made-Easy So when I say I would prefer a simple approach, I say that I won't mind having to spend one month to understand it, as long as I can write "sustainable programs" (reasonably efficient, extensible, maitainable, robust, etc). On Wed, Oct 19, 2016 at 3:13 PM, Alexey Raga wrote: > Working with stacks like "ReaderT AppConfig (StateT AppState IO)" may be > hard, but using MonadIO, MonadState, MonadReader etc. is much simpler. > > This session explains it well: https://www.youtube.com/watch?v=GZPup5Iuaqw > > I haven't yet tried Oleg's approach with Freer monads and extensible effects > though, it looks very interesting. Especially when creating new types of > effects because writing an "interpreter" seems easier than writing something > like "MyEffectT / MyEffectMonad" and giving it all the required instances. > > > On Wed, Oct 19, 2016 at 11:57 PM David McBride wrote: >> >> I'll say that this conversation caused me to rewrite some networking code >> I had written in a free monad as mtl. It is faster and simpler, right? But >> transformer stacks with all their typeclasses are really hard to work with >> and I don't think it was worth it in the end. >> >> On Wed, Oct 19, 2016 at 8:29 AM, Damian Nadales >> wrote: >>> >>> I was thinking, besides the evaluation of performance, the simplicity >>> of the approach is also important ("developer time is more expensive >>> than CPU time" anyone?). 
Note that I said simple and not easy ;) >>> >>> I guess this aspect is a rather subjective one, but maybe there are >>> elements that can be intuitively quantified. Right now I'm playing >>> with free monads and MTL, to have an idea which one seems simpler to >>> me. >>> >>> >>> >>> On Mon, Oct 17, 2016 at 8:06 PM, Julian wrote: >>> > On 15/10/16 15:49, Joachim Breitner wrote: >>> >> Hi, >>> >> >>> >> Am Freitag, den 14.10.2016, 17:35 +0200 schrieb Damian Nadales: >>> >>> Do you have >>> >>> any experience using any of these approaches? If so would you mind >>> >>> sharing? ;) >>> >> >>> >> I don’t have an answer to contribute, but I would be very interested >>> >> in >>> >> hearing about experiences in terms of their relative runtime >>> >> performance. >>> >> >>> >> My gut feeling is that an indirect function call for every (>>=), >>> >> with many calls to `lift` each time, would make a deep monad >>> >> transformer stack much more expensive. A free monad approach seems to >>> >> be more sensible to me. But maybe GHC is doing a better job optimizing >>> >> this than I would think? >>> >> >>> >> So if you have any number-supported evidence about this, possibly from >>> >> a real-world application where you tried to use one or the other, >>> >> please share it with us! >>> >> >>> > >>> > There's a paper from Oleg discussing "Freer Monads, More Extensible >>> > Effects": http://okmij.org/ftp/Haskell/extensible/more.pdf >>> > >>> > The conclusion there seems to be that the EE approach is more >>> > "efficient". But you'll have to look at the concrete performance cases >>> > and data yourself to make a judgement. >>> > >>> > _______________________________________________ >>> > Haskell-Cafe mailing list >>> > To (un)subscribe, modify options or view archives go to: >>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> > Only members subscribed via the mailman list are allowed to post. 
>>> _______________________________________________ >>> Haskell-Cafe mailing list >>> To (un)subscribe, modify options or view archives go to: >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> Only members subscribed via the mailman list are allowed to post. From will.yager at gmail.com Wed Oct 19 17:17:46 2016 From: will.yager at gmail.com (Will Yager) Date: Wed, 19 Oct 2016 12:17:46 -0500 Subject: [Haskell-cafe] MTL vs Free-monads, what are your experiences In-Reply-To: References: <1476539382.1073.2.camel@joachim-breitner.de> <414986f8-e6e5-616c-3ff8-4892741064f2@hasufell.de> Message-ID: Can anyone comment on the use of Purescript-style effect monads as compared to MTL and Free? While I have not used them in practice, they seem to express the "intent" of monad composition a bit more directly than the approaches we use in Haskell. Cheers, Will From cma at bitemyapp.com Wed Oct 19 17:26:04 2016 From: cma at bitemyapp.com (Christopher Allen) Date: Wed, 19 Oct 2016 12:26:04 -0500 Subject: [Haskell-cafe] MTL vs Free-monads, what are your experiences In-Reply-To: References: <1476539382.1073.2.camel@joachim-breitner.de> <414986f8-e6e5-616c-3ff8-4892741064f2@hasufell.de> Message-ID: It's not really more direct. It's an unordered collection of effects you can use. IME it's a less efficient mtl-style, but YMMV. 
Taking an example from a PureScript tutorial: func :: Eff (console :: CONSOLE, random :: RANDOM) Unit Can just as easily be: func :: (MonadConsole m, MonadGimmeRandom m) => m () (mangled name so it doesn't overlap with a real class) There are other differences, but they haven't amounted to much for me yet. Kmett's Quine has a good example of some homespun mtl-style: https://github.com/ekmett/quine On Wed, Oct 19, 2016 at 12:17 PM, Will Yager wrote: > Can anyone comment on the use of Purescript-style effect monads as compared to MTL and Free? While I have not used them in practice, they seem to express the "intent" of monad composition a bit more directly than the approaches we use in Haskell. > > Cheers, > Will > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -- Chris Allen Currently working on http://haskellbook.com From will.yager at gmail.com Wed Oct 19 18:41:52 2016 From: will.yager at gmail.com (Will Yager) Date: Wed, 19 Oct 2016 13:41:52 -0500 Subject: [Haskell-cafe] MTL vs Free-monads, what are your experiences In-Reply-To: References: <1476539382.1073.2.camel@joachim-breitner.de> <414986f8-e6e5-616c-3ff8-4892741064f2@hasufell.de> Message-ID: It seems that there are several advantages to the Purescript approach. For example, this catchException :: forall a e . (Error -> Eff e a) -> Eff (err :: EXCEPTION | e) a -> Eff e a would be unwieldy to express using typeclasses, requiring at least three constraints. I also find this style easier to read than constraints, as it requires no mental substitution. E.g. if I see (Foo m, Bar m, Bar n) => Baz -> -> m a When I get to the end of the type, I have to go back to the beginning to figure out what m is. I can't read left-to-right. This happens a lot with constraint-based monad composition. 
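Chris's mtl-style signature can be turned into a small self-contained sketch. The class names below are the deliberately mangled ones from his message; the pure State-based interpreter is a hypothetical illustration (not a real library) of how such constraints get satisfied in a test setting:

```haskell
{-# LANGUAGE FlexibleInstances #-}
module Main where

import Control.Monad.State (State, execState, modify)

-- Capability classes, one per effect (names as mangled in the email).
class Monad m => MonadConsole m where
  putLine :: String -> m ()

class Monad m => MonadGimmeRandom m where
  gimmeRandom :: m Int

-- A pure test interpreter: console output accumulates in a State log,
-- and "randomness" is a fixed stub value, which is convenient in tests.
instance MonadConsole (State [String]) where
  putLine s = modify (++ [s])

instance MonadGimmeRandom (State [String]) where
  gimmeRandom = pure 4  -- hypothetical stub; a real instance would use a PRNG

-- Polymorphic in m: any monad offering both capabilities can run this.
func :: (MonadConsole m, MonadGimmeRandom m) => m ()
func = do
  n <- gimmeRandom
  putLine ("rolled " ++ show n)

main :: IO ()
main = print (execState func [])  -- ["rolled 4"]
```

Swapping in an IO-backed pair of instances would not change func at all, which is the appeal of the style.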
Another advantage is that the Purescript example uses a concrete type, which is often easier to reason about than "ad-hoc" typeclass abstractions like MonadRandom. However, it looks like you still get the flexibility of ad-hoc typeclasses, because you get to pick any function that discharges the effect type in the given effect monad. Like I said, I have not used it, but these are what I've noticed from topical observation. Apologies for the formatting; copying that code example appears to have confused the iOS mail app. Cheers, Will > On Oct 19, 2016, at 12:26, Christopher Allen wrote: > > It's not really more direct. It's an unordered collection of effects > you can use. IME it's a less efficient mtl-style, but YMMV. > > Taking an example from a PureScript tutorial: > > func :: Eff (console :: CONSOLE, random :: RANDOM) Unit > > Can just as easily be: > > func :: (MonadConsole m, MonadGimmeRandom m) => m () > > (mangled name so it doesn't overlap with a real class) > > There are other differences, but they haven't amounted to much for me yet. > > Kmett's Quine has a good example of some homespun mtl-style: > https://github.com/ekmett/quine > >> On Wed, Oct 19, 2016 at 12:17 PM, Will Yager wrote: >> Can anyone comment on the use of Purescript-style effect monads as compared >> to MTL and Free? While I have not used them in practice, they seem to >> express the "intent" of monad composition a bit more directly than the >> approaches we use in Haskell. >> >> Cheers, >> Will >> _______________________________________________ >> Haskell-Cafe mailing list >> To (un)subscribe, modify options or view archives go to: >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> Only members subscribed via the mailman list are allowed to post. > > > > -- > Chris Allen > Currently working on http://haskellbook.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From silvio.frischi at gmail.com Wed Oct 19 19:16:49 2016 From: silvio.frischi at gmail.com (Silvio Frischknecht) Date: Wed, 19 Oct 2016 21:16:49 +0200 Subject: [Haskell-cafe] Why does [1.0, 3 ..4] contain 5? In-Reply-To: References: Message-ID: Hi, I don't claim to know the real reason. But I can see that it would make some things more stable or rather it makes the stable versions look nicer. [1,1.1 .. 2] looks better than [1,1.1 .. 2.05] If you define it like you suggest numericEnumFromThenTo e1 e2 e3 = takeWhile (<=e1) [e2, e3 ..] :: [Float] I get length $ numericEnumFromThenTo 1 1.1 2 === 10 length $ numericEnumFromThenTo 2 1.2 3 === 11 Cheers, Silvio From silvio.frischi at gmail.com Wed Oct 19 19:21:26 2016 From: silvio.frischi at gmail.com (Silvio Frischknecht) Date: Wed, 19 Oct 2016 21:21:26 +0200 Subject: [Haskell-cafe] Why does [1.0, 3 ..4] contain 5? In-Reply-To: References: Message-ID: Sorry I must be a bit tired :) On 10/19/2016 09:16 PM, Silvio Frischknecht wrote: > Hi, > > I don't claim to know the real reason. But I can see that it would make > some things more stable or rather it makes the stable versions look nicer. > > [1,1.1 .. 2] looks better than > [1,1.1 .. 2.05] > > If you define it like you suggest > > numericEnumFromThenTo e1 e2 e3 = takeWhile (<=e1) [e2, e3 ..] :: [Float] numericEnumFromThenTo e1 e2 e3 = takeWhile (<=e3) [e1, e2 ..] :: [Float] > > I get > > length $ numericEnumFromThenTo 1 1.1 2 === 10 > length $ numericEnumFromThenTo 2 1.2 3 === 11 length $ numericEnumFromThenTo 2 2.1 3 == 11 > > > > Cheers, > Silvio > From allbery.b at gmail.com Wed Oct 19 19:24:29 2016 From: allbery.b at gmail.com (Brandon Allbery) Date: Wed, 19 Oct 2016 15:24:29 -0400 Subject: [Haskell-cafe] Why does [1.0, 3 ..4] contain 5? In-Reply-To: References: Message-ID: On Wed, Oct 19, 2016 at 8:13 AM, Bence Kodaj wrote: > Does anybody happen to know why [1.0, 3 ..4 ] is [1.0, 3.0, 5.0] ? 
> Nobody seems to know, aside from "that's what the Libraries part of the Report says". You'd probably have to find the committee that added it to the Report (good luck...) to learn their logic. (The quirks of FP arithmetic don't seem to be involved, since the overshoot is overkill for that.) -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From hyarion at iinet.net.au Wed Oct 19 19:51:05 2016 From: hyarion at iinet.net.au (Ben Mellor) Date: Thu, 20 Oct 2016 06:51:05 +1100 Subject: [Haskell-cafe] Why does [1.0, 3 ..4] contain 5? In-Reply-To: References: Message-ID: My understanding is that it was intended to support enumerations where you write the exact end point, like [1.0, 1.1 .. 2.3] Because of floating point quirks, this could end up generating a value close to 2.3 that's slightly larger than the value close to 2.3 constructed by directly converting the decimal expression into a floating point number. You need to allow some "slack" in the upper bound to guarantee a value close to the written end point appears in the list. Increasing the bound by half a delta means there will be something close to 2.3 in the list, and shouldn't go far enough to include the next greater element generated. I suppose by that reasoning you're "not supposed" to use enumerations like [1.0, 3 .. 4], because 4 isn't the idealised mathematical endpoint of the sequence. On October 20, 2016 6:24:29 AM GMT+11:00, Brandon Allbery wrote: >On Wed, Oct 19, 2016 at 8:13 AM, Bence Kodaj >wrote: > >> Does anybody happen to know why [1.0, 3 ..4 ] is [1.0, 3.0, 5.0] ? >> > >Nobody seems to know, aside from "that's what the Libraries part of the >Report says". You'd probably have to find the committee that added it >to >the Report (good luck...) to learn their logic. 
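Ben's half-delta explanation can be checked directly against the Report's Prelude definition of numericEnumFromThenTo. reportEnum below is a made-up name for a direct restatement of that definition:

```haskell
module Main where

-- The Report's rule for fractional enumerations: generate elements with
-- the recurrence n, m, m+m-n, ... and keep them while they stay within
-- the limit *plus half the step*. That half-step slack is why
-- [1.0, 3 .. 4] reaches 5.0.
reportEnum :: Double -> Double -> Double -> [Double]
reportEnum e1 e2 e3 = takeWhile p (go e1 e2)
  where
    go n m = n : go m (m + m - n)   -- numericEnumFromThen
    lim    = e3 + (e2 - e1) / 2     -- limit plus half the step
    p | e2 >= e1  = (<= lim)
      | otherwise = (>= lim)

main :: IO ()
main = do
  print (reportEnum 1.0 3 4)                    -- [1.0,3.0,5.0]
  print (reportEnum 1.0 3 4 == [1.0, 3 .. 4])   -- True: matches the built-in
  print (length (reportEnum 1.0 1.1 2))         -- 11
```

The first two lines reproduce the [1.0, 3 .. 4] behaviour under discussion; the third shows the slack doing its intended job, with the sequence for [1.0, 1.1 .. 2] ending near the written end point 2.0 rather than stopping one element short.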
(The quirks of FP >arithmetic don't seem to be involved, since the overshoot is overkill >for >that.) > >-- >brandon s allbery kf8nh sine nomine >associates >allbery.b at gmail.com >ballbery at sinenomine.net >unix, openafs, kerberos, infrastructure, xmonad >http://sinenomine.net > > >------------------------------------------------------------------------ > >_______________________________________________ >Haskell-Cafe mailing list >To (un)subscribe, modify options or view archives go to: >http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cma at bitemyapp.com Wed Oct 19 20:19:03 2016 From: cma at bitemyapp.com (Christopher Allen) Date: Wed, 19 Oct 2016 15:19:03 -0500 Subject: [Haskell-cafe] MTL vs Free-monads, what are your experiences In-Reply-To: References: <1476539382.1073.2.camel@joachim-breitner.de> <414986f8-e6e5-616c-3ff8-4892741064f2@hasufell.de> Message-ID: With constraint kinds, you can make type synonyms that expand to constraints, solving your syntactic objection, but I don't want to discuss those further. Far as error handling goes, I use MonadCatch/MonadError: https://hackage.haskell.org/package/exceptions-0.8.3/docs/Control-Monad-Catch.html Here catch is: catch :: (Exception e, MonadCatch m) => m a -> (e -> m a) -> m a Catch implies Throw, so. This part, >Another advantage is that the Purescipt example uses a concrete type I'm not sure I understand. What are you saying is concrete here? You still have a row of effects which could be any valid handler. I don't think I understand the distinction you're making. Eff itself is parameterized by the row of effects (# !) and the return type (*), so there's nothing especially "concrete" about that to my mind as contrasted with something instantiable as any Monad known to have a Console API and a means of furnishing random values. 
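The constraint-kinds point can be sketched in a few lines. App is a hypothetical synonym name; with ConstraintKinds the left-hand side of => can be bundled so a signature reads almost like an effect row:

```haskell
{-# LANGUAGE ConstraintKinds #-}
module Main where

import Control.Monad.Reader (MonadReader, asks, runReader)

-- A constraint synonym bundling capabilities; more classes could be
-- added to the tuple without touching any signatures that use it.
type App m = (Monad m, MonadReader Int m)

double :: App m => m Int
double = asks (* 2)

main :: IO ()
main = print (runReader double 21)  -- 42
```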
On Wed, Oct 19, 2016 at 1:41 PM, Will Yager wrote: > It seems that there are several advantages to the Purescript approach. > > For example, this > > catchException > :: forall a e > . (Error -> Eff e a) > -> Eff (err :: EXCEPTION | e) a > -> Eff e a > > would be unwieldy to express using typeclasses, requiring at least three > constraints. I also find this style easier to read than constraints, as it > requires no mental substitution. E.g. if I see > > (Foo m, Bar m, Bar n) => Baz -> -> m a > > When I get to the end of the type, I have to go back to the beginning to > figure out what m is. I can't read left-to-right. This happens a lot with > constraint-based monad composition. > > Another advantage is that the Purescipt example uses a concrete type, which > is often easier to reason about than "ad-hoc" typeclass abstractions like > MonadRandom. However, it looks like you still get the flexibility of ad-hoc > typeclasses, because you get to pick any function that discharges the effect > type in the given effect monad. > > Like I said, I have not used it, but these are what I've noticed from > topical observation. > > Apologies for the formatting; copying that code example appears to have > confused the iOS mail app. > > Cheers, > > Will > > > > On Oct 19, 2016, at 12:26, Christopher Allen wrote: > > It's not really more direct. It's an unordered collection of effects > you can use. IME it's a less efficient mtl-style, but YMMV. > > Taking an example from a PureScript tutorial: > > func :: Eff (console :: CONSOLE, random :: RANDOM) Unit > > Can just as easily be: > > func :: (MonadConsole m, MonadGimmeRandom m) => m () > > (mangled name so it doesn't overlap with a real class) > > There are other differences, but they haven't amounted to much for me yet. 
> > Kmett's Quine has a good example of some homespun mtl-style: > > https://github.com/ekmett/quine > > On Wed, Oct 19, 2016 at 12:17 PM, Will Yager wrote: > > Can anyone comment on the use of Purescript-style effect monads as > compared to MTL and Free? While I have not used them in practice, they seem > to express the "intent" of monad composition a bit more directly than the > approaches we use in Haskell. > > > > Cheers, > > Will > > _______________________________________________ > > Haskell-Cafe mailing list > > To (un)subscribe, modify options or view archives go to: > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > Only members subscribed via the mailman list are allowed to post. > > > > > -- > Chris Allen > Currently working on http://haskellbook.com From rwallace at thewallacepack.net Wed Oct 19 20:47:37 2016 From: rwallace at thewallacepack.net (Richard Wallace) Date: Wed, 19 Oct 2016 13:47:37 -0700 Subject: [Haskell-cafe] MTL vs Free-monads, what are your experiences In-Reply-To: References: <1476539382.1073.2.camel@joachim-breitner.de> <414986f8-e6e5-616c-3ff8-4892741064f2@hasufell.de> Message-ID: I would call the approach in quine a lensy-mtl style. It's ok as far as it goes, but since you are using concrete environment values it isn't great if you want to do testing of things like database code without having a "real" backend hooked up. The typical approach then is to create your own type-class and instances class MyBackend m where ... instance (MonadReader r m, HasDb r) => MyBackend m where ... instance (MonadState s m, HasTestState s) => MyBackend m where ... Of course, now our problem is that our module with this abstraction depends on the module with the db and the test state. Unless we create orphan instances, which I prefer to avoid. 
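Fleshed out so it compiles, the test-state half of that sketch looks like the following (TestState, the HasTestState method, and the lookupUser operation are hypothetical illustrations):

```haskell
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE UndecidableInstances #-}
module Main where

import Control.Monad.State (MonadState, evalState, gets)

-- Hypothetical test fixture standing in for a database.
newtype TestState = TestState { users :: [String] }

class HasTestState s where
  testState :: s -> TestState

instance HasTestState TestState where
  testState = id

-- The backend abstraction.
class Monad m => MyBackend m where
  lookupUser :: String -> m Bool

-- Any MonadState monad whose state carries the fixture is a backend.
instance (MonadState s m, HasTestState s) => MyBackend m where
  lookupUser name = gets (elem name . users . testState)

main :: IO ()
main = print (evalState (lookupUser "alice") (TestState ["alice", "bob"]))
```

The bare-variable instance head needs FlexibleInstances and UndecidableInstances, which hints at the coupling Richard describes: the instance claims every suitable MonadState monad at once, so it has to live where both the abstraction and the fixture are visible.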
This is one area where I like the Free monad approach more because the interpreter can be built and composed with other interpreters in completely separate modules or packages because they are just values. Rich PS for the record, I don't strongly prefer the mtl style or the free monad style, I think they each have good qualities and bad and which one I choose tends to depend on other factors. On Wed, Oct 19, 2016 at 10:26 AM, Christopher Allen wrote: > It's not really more direct. It's an unordered collection of effects > you can use. IME it's a less efficient mtl-style, but YMMV. > > Taking an example from a PureScript tutorial: > > func :: Eff (console :: CONSOLE, random :: RANDOM) Unit > > Can just as easily be: > > func :: (MonadConsole m, MonadGimmeRandom m) => m () > > (mangled name so it doesn't overlap with a real class) > > There are other differences, but they haven't amounted to much for me yet. > > Kmett's Quine has a good example of some homespun mtl-style: > https://github.com/ekmett/quine > > On Wed, Oct 19, 2016 at 12:17 PM, Will Yager wrote: > > Can anyone comment on the use of Purescript-style effect monads as > compared to MTL and Free? While I have not used them in practice, they seem > to express the "intent" of monad composition a bit more directly than the > approaches we use in Haskell. > > > > Cheers, > > Will > > _______________________________________________ > > Haskell-Cafe mailing list > > To (un)subscribe, modify options or view archives go to: > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > Only members subscribed via the mailman list are allowed to post. > > > > -- > Chris Allen > Currently working on http://haskellbook.com > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. 
> -------------- next part -------------- An HTML attachment was scrubbed... URL: From cma at bitemyapp.com Wed Oct 19 20:53:22 2016 From: cma at bitemyapp.com (Christopher Allen) Date: Wed, 19 Oct 2016 15:53:22 -0500 Subject: [Haskell-cafe] MTL vs Free-monads, what are your experiences In-Reply-To: References: <1476539382.1073.2.camel@joachim-breitner.de> <414986f8-e6e5-616c-3ff8-4892741064f2@hasufell.de> Message-ID: Oh fair enough, I usually call the partially concreted or lensy style "mtl-style" and then call MonadReader/MonadState "mtl library". I lean to the former and usually use the latter if it discharges something I need right then and there. I've only used Free for something I needed to aggressively simulate/mock, and even then, some production uses of Free have gotten defenestrated in favor of something more ordinary. On Wed, Oct 19, 2016 at 3:47 PM, Richard Wallace wrote: > I would call the approach in quine a lensy-mtl style. It's ok as far as it > goes, but since you are using concrete environment values it isn't great if > you want to do testing of things like database code without having a "real" > backend hooked up. The typical approach then is to create your own > type-class and instances > > class MyBackend where ... > > instance (MonadReader r m, HasDb r) => MyBackend m where ... > > instance (MonadState s m, HasTestState s) => MyBackend m where ... > > Of course, now our problem is that our module with this abstraction depends > on the module with the db and the test state. Unless we create orphan > instances, which I prefer to avoid. This is one area where I like the Free > monad approach more because the interpreter can be built and composed with > other interpreters in completely separate modules or packages because they > are just values. > > Rich > > PS for the record, I don't strongly prefer the mtl style or the free monad > style, I think they each have good qualities and bad and which one I choose > tends to depend on other factors. 
> > On Wed, Oct 19, 2016 at 10:26 AM, Christopher Allen > wrote: >> >> It's not really more direct. It's an unordered collection of effects >> you can use. IME it's a less efficient mtl-style, but YMMV. >> >> Taking an example from a PureScript tutorial: >> >> func :: Eff (console :: CONSOLE, random :: RANDOM) Unit >> >> Can just as easily be: >> >> func :: (MonadConsole m, MonadGimmeRandom m) => m () >> >> (mangled name so it doesn't overlap with a real class) >> >> There are other differences, but they haven't amounted to much for me yet. >> >> Kmett's Quine has a good example of some homespun mtl-style: >> https://github.com/ekmett/quine >> >> On Wed, Oct 19, 2016 at 12:17 PM, Will Yager wrote: >> > Can anyone comment on the use of Purescript-style effect monads as >> > compared to MTL and Free? While I have not used them in practice, they seem >> > to express the "intent" of monad composition a bit more directly than the >> > approaches we use in Haskell. >> > >> > Cheers, >> > Will >> > _______________________________________________ >> > Haskell-Cafe mailing list >> > To (un)subscribe, modify options or view archives go to: >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > Only members subscribed via the mailman list are allowed to post. >> >> >> >> -- >> Chris Allen >> Currently working on http://haskellbook.com >> _______________________________________________ >> Haskell-Cafe mailing list >> To (un)subscribe, modify options or view archives go to: >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> Only members subscribed via the mailman list are allowed to post. 
> > -- Chris Allen Currently working on http://haskellbook.com From will.yager at gmail.com Wed Oct 19 22:03:27 2016 From: will.yager at gmail.com (William Yager) Date: Wed, 19 Oct 2016 17:03:27 -0500 Subject: [Haskell-cafe] MTL vs Free-monads, what are your experiences In-Reply-To: References: <1476539382.1073.2.camel@joachim-breitner.de> <414986f8-e6e5-616c-3ff8-4892741064f2@hasufell.de> Message-ID: On Wed, Oct 19, 2016 at 3:19 PM, Christopher Allen wrote: > > Here catch is: > > catch :: (Exception e, MonadCatch m) => m a -> (e -> m a) -> m a > > In my mind, this is inferior to the Purescript example, because you have not discharged the MonadCatch instance. That is to say, the result monad continues to have a MonadCatch instance, even though we may want to explicitly express that we have *already* caught any exceptions, so the result type need not be a MonadCatch. In the Purescript example, the result monad no longer has the EXCEPTION effect, so you have demonstrated statically that the exception has already been caught! This is very useful. You can't express this in the style you're advocating (which, to be clear, is the style I use most of the time). In other words, the purescript approach has the benefits of both explicitly enumerating the entire monad transformer stack (you can statically determine exactly which effects are possible *and* which are not, and you can discharge individual layers of the stack) and of using typeclass constraints (you aren't typing out redundant information and you don't have to specify where in the monad transformer stack the effect is handled). It's the best of both worlds. > > >Another advantage is that the Purescipt example uses a concrete type > > I'm not sure I understand. What are you saying is concrete here? I should have been more clear; the type is polymorphic, but does not have any constraints outside of the type. This isn't a formal advantage so much as a psychological/syntactic advantage. 
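Discharging does have a Haskell analogue when a concrete transformer is used: running the ExceptT layer removes the MonadError constraint from the result type, much as catchException deletes the EXCEPTION entry from the row. risky and handled are hypothetical names for a minimal sketch:

```haskell
module Main where

import Control.Monad.Except (MonadError, runExceptT, throwError)

-- A computation that may fail, abstract in its monad.
risky :: MonadError String m => Int -> m Int
risky n
  | n < 0     = throwError "negative input"
  | otherwise = pure (n * 2)

-- After runExceptT, no MonadError constraint remains: the possibility
-- of failure has been discharged into an ordinary Either value.
handled :: Monad m => Int -> m Int
handled n = either (const 0) id <$> runExceptT (risky n)

main :: IO ()
main = do
  r1 <- handled 21
  r2 <- handled (-1)
  print (r1, r2)  -- (42,0)
```

The type of handled records statically that the error has been dealt with, which is the property being asked for above.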
As I said, it's inconvenient to have to mentally sub in constraints on the LHS of the "=>" into the type variables on the RHS of the "=>". The Purescript approach is easier to read for me, even though I've been using the typeclass approach for years. Like I said, I haven't actually used Purescript, so I'm sure there are additional use cases I'm missing. But this is what stands out. Cheers, Will -------------- next part -------------- An HTML attachment was scrubbed... URL: From ok at cs.otago.ac.nz Thu Oct 20 00:57:14 2016 From: ok at cs.otago.ac.nz (Richard A. O'Keefe) Date: Thu, 20 Oct 2016 13:57:14 +1300 Subject: [Haskell-cafe] A backhanded compliment and a dilemma In-Reply-To: References: Message-ID: TL;DR - Haskell mistaken for pseudo-code, a case study on machine-checked specification for use in standards can't or won't be read by the people who need it (who aren't the people in this mailing list). A couple of years ago, while trying to implement a programming language with >30 years of use and an actual ANSI standard, I encountered a gap in the standard where an aspect of arithmetic was referred to the Language Independent Arithmetic standard, which had in fact nothing whatsoever to say on the topic. In consequence of this gap, existing implementations of this language implement that feature with different semantics. Poking around, I found a smaller but similar hole in SQL, and similar issues in other languages. There was no existing specification that any of these could refer to. So I set out to write one. 
Having seen other problems in standards caused by definitions that had not been adequately proof-read, I decided that I wanted a specification that had - been type-checked - had been tested reasonably thoroughly Since I'm writing in this mailing list, you can guess what I thought was a good way to do this: I wrote the specification in quite direct Haskell, putting effort into clarity at the expense of efficiency, and I used QuickCheck to test the specification. I still don't know whether to be pleased that QuickCheck found mistakes -- demonstrating my point that specifications need to be checked thoroughly -- or ashamed that I'm still making such mistakes. My problem: I can't get this published. The backhanded compliment: the last reviewer excoriated me for having too much pseudocode in my paper. (Despite the paper stating explicitly that ALL code in the paper was real code that had been executed.) You got it: Haskell doesn't look like a "real" programming language, but like something written for human comprehension during design. The dilemma: what I want to do is to tell people working on standards that we NEED to have machine-checked specifications and that we HAVE the technology to write such specifications and test them (oh and by the way here's this specification I wrote to fill that gap). But people who read Haskell well enough to read my specification don't need to be persuaded of this, and in all honesty, could write the specification for themselves if it occurred to them. Yet the people who do need to be told that there is a much better way to write standards than say the ECMAScript way don't read Haskell, and won't be interested in learning to do so until they've been persuaded... So where would _you_ send a case study on machine-checked specification? 
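The workflow described here, an executable specification plus tested properties, can be shown in miniature. Real use would rely on QuickCheck's random generation; to keep this sketch dependency-free it exhausts a small input space instead, and specDiv is a toy stand-in rather than the arithmetic specification from the paper:

```haskell
module Main where

-- Toy executable "specification": division rounding toward negative
-- infinity, written for clarity (exact Rational arithmetic), not speed.
specDiv :: Integer -> Integer -> Integer
specDiv a b = floor (fromIntegral a / fromIntegral b :: Rational)

-- Property: the specification agrees with GHC's floor division `div`.
prop_matchesDiv :: Integer -> Integer -> Bool
prop_matchesDiv a b = specDiv a b == a `div` b

main :: IO ()
main = print (and [ prop_matchesDiv a b
                  | a <- [-20 .. 20], b <- [-20 .. 20], b /= 0 ])
  -- True
```

A QuickCheck version would replace the list comprehension with quickCheck prop_matchesDiv, letting the library search the input space at random instead.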
From rahulmutt at gmail.com Thu Oct 20 03:22:03 2016 From: rahulmutt at gmail.com (Rahul Muttineni) Date: Thu, 20 Oct 2016 08:52:03 +0530 Subject: [Haskell-cafe] A backhanded compliment and a dilemma In-Reply-To: References: Message-ID: Your core problem has existed since 1994 :) It's a bit disappointing that we've only made so much progress in the psychological perspective in 22 years. Quoting [1], a study on prototyping software with different languages: One observer described the solution as “cute but not extensible” > (paraphrasing); this comment slipped its way into an initial draft of the > final report, which described the Haskell prototype as being “too cute for > its own good” (the phrase was later removed after objection by the first > author of this paper). > We mention these responses because they must be anticipated in the future. > If functional languages are to become more widely used, various > sociological and psychological barriers must be overcome. As a community we > should be aware of these barriers and realize that they will not disappear > overnight. People only like solutions when they have a deep problem needing to be solved. Giving Haskell to people who are already happy with the status quo will not turn any heads. Think about the deep problem that can be solved by machine-checked specifications and who would be interested. Sorry for the vague answer, but it's the best I can give right now. Hope that helps, Rahul [1] http://www.cse.iitk.ac.in/users/karkare/courses/2010/cs653/Papers/hudak_haskell_sw_prototype.pdf (Thanks to John Hughes for informing us of this paper at FunctionalConf 2016.) On Thu, Oct 20, 2016 at 6:27 AM, Richard A. O'Keefe wrote: > TL;DR - Haskell mistaken for pseudo-code, a case study on machine- > checked specification for use in standards can't or won't be read > by the people who need it (who aren't the people in this mailing list). 
> > A couple of years ago, while trying to implement a programming language > with >30 years of use and an actual ANSI standard, I encountered a gap > in the standard where an aspect of arithmetic was referred to the > Language Independent Arithmetic standard, which had in fact nothing > whatsoever to say on the topic. In consequence of this gap, existing > implementations of this language implement that feature with different > semantics. Poking around, I found a smaller but similar hole in SQL, > and similar issues in other languages. > > There was no existing specification that any of these could refer to. > So I set out to write one. Having seen other problems in standards > caused by definitions that had not been adequately proof-read, > I decided that I wanted a specification that had > - been type-checked > - had been tested reasonably thoroughly > > Since I'm writing in this mailing list, you can guess what I thought > was a good way to do this: I wrote the specification in quite direct > Haskell, putting effort into clarity at the expense of efficiency, > and I used QuickCheck to test the specification. I still don't know > whether to be pleased that QuickCheck found mistakes -- demonstrating > my point that specifications need to be checked thoroughly -- or > ashamed that I'm still making such mistakes. > > My problem: I can't get this published. > > The backhanded compliment: the last reviewer excoriated me > for having too much pseudocode in my paper. (Despite the paper > stating explicitly that ALL code in the paper was real code that > had been executed.) You got it: Haskell doesn't look like a "real" > programming language, but like something written for human > comprehension during design. 
> > The dilemma: what I want to do is to tell people working > on standards that we NEED to have machine-checked specifications > and that we HAVE the technology to write such specifications and > test them (oh and by the way here's this specification I wrote to > fill that gap). But people who read Haskell well enough to read > my specification don't need to be persuaded of this, and in all > honesty, could write the specification for themselves if it > occurred to them. Yet the people who do need to be told that there > is a much better way to write standards than say the ECMAScript way > don't read Haskell, and won't be interested in learning to do so > until they've been persuaded... > > So where would _you_ send a case study on machine-checked specification? > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -- Rahul Muttineni -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdammers at gmail.com Thu Oct 20 06:07:43 2016 From: tdammers at gmail.com (Tobias Dammers) Date: Thu, 20 Oct 2016 08:07:43 +0200 Subject: [Haskell-cafe] A backhanded compliment and a dilemma In-Reply-To: References: Message-ID: Consider this: the most popular programming languages overall are (arguably) PHP and Java, despite being almost half a century behind the state of the art of programming language design in many ways. Ask yourself why that is (and no, I haven't fully figured this one out myself either). On Oct 20, 2016 5:22 AM, "Rahul Muttineni" wrote: > You core problem has existed since 1994 :) It'd a bit disappointing that > we've only made so much progress in the psychological perspective in 26 > years. 
> > Quoting [1], a study on prototyping software with different languages: > > One observer described the solution as “cute but not extensible” >> (paraphrasing); this comment slipped its way into an initial draft of the >> final report, which described the Haskell prototype as being “too cute for >> its own good” (the phrase was later removed after objection by the first >> author of this paper). >> We mention these responses because they must be anticipated in the >> future. If functional languages are to become more widely used, various >> sociological and psychological barriers must be overcome. As a community we >> should be aware of these barriers and realize that they will not disappear >> overnight. > > > People only like solutions when they have a deep problem needing to be > solved. Giving Haskell to people who are already happy with the status quo > will not turn any heads. Think about the deep problem that can be solved by > machine-checked specifications and who would be interested. Sorry for the > vague answer, but it's the best I can give right now. > > Hope that helps, > > Rahul > > [1] http://www.cse.iitk.ac.in/users/karkare/courses/2010/cs653/Papers/hudak_haskell_sw_prototype.pdf > (Thanks to John Hughes for informing us of this paper at FunctionalConf 2016.) > > On Thu, Oct 20, 2016 at 6:27 AM, Richard A. O'Keefe wrote: >> [snip] > > -- > Rahul Muttineni From m.farkasdyck at gmail.com Thu Oct 20 06:30:55 2016 From: m.farkasdyck at gmail.com (M Farkas-Dyck) Date: Wed, 19 Oct 2016 22:30:55 -0800 Subject: [Haskell-cafe] A backhanded compliment and a dilemma In-Reply-To: References: Message-ID: On 19/10/2016, Tobias Dammers wrote: > Consider this: the most popular programming languages overall are > (arguably) PHP and Java, despite being almost half a century behind the > state of the art of programming language design in many ways. Ask yourself > why that is (and no, I haven't fully figured this one out myself either). Some people like to be tied up and whipped; I think it's the same phenomenon.
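For concreteness, the executable-specification approach Richard O'Keefe describes in the quoted message above (plain Haskell written for clarity rather than efficiency, checked with QuickCheck-style properties) might look roughly like the sketch below. The `divFloor`/`modFloor` spec and its property are illustrative assumptions, not the arithmetic feature or the actual specification from his paper, and the small exhaustive sweep in `main` stands in for QuickCheck's random case generation so that the sketch is self-contained:

```haskell
-- A hypothetical executable specification in the style the thread
-- describes: written as directly as possible, clarity over speed.
-- (divFloor/modFloor and the property are illustrative assumptions.)

-- Specification of floored division via exact rational arithmetic.
divFloor :: Integer -> Integer -> Integer
divFloor a b = floor (fromIntegral a / fromIntegral b :: Rational)

modFloor :: Integer -> Integer -> Integer
modFloor a b = a - b * divFloor a b

-- Defining property: the quotient/remainder identity holds, and a
-- nonzero remainder takes the sign of the divisor.
prop_divMod :: Integer -> Integer -> Bool
prop_divMod a b =
  b == 0 || (b * divFloor a b + modFloor a b == a
             && (modFloor a b == 0 || signum (modFloor a b) == signum b))

-- QuickCheck would generate random cases; a small exhaustive sweep
-- keeps this sketch dependency-free.
main :: IO ()
main =
  let ok = and [ prop_divMod a b | a <- [-20 .. 20], b <- [-20 .. 20] ]
  in putStrLn (if ok then "spec holds" else "spec violated")
```

With the QuickCheck library installed, `main` would instead be `quickCheck prop_divMod` after an `import Test.QuickCheck`.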
From tdammers at gmail.com Thu Oct 20 06:47:14 2016 From: tdammers at gmail.com (Tobias Dammers) Date: Thu, 20 Oct 2016 08:47:14 +0200 Subject: [Haskell-cafe] A backhanded compliment and a dilemma In-Reply-To: References: Message-ID: All joking aside, I'm afraid we must face the reality that the kind of programmer who is willing to go to great lengths to get reasonably correct and maintainable code is the exception, not the norm. Even in fields where extreme levels of certainty and confidence are required (think avionics, medical devices, nuclear installations, weapons), the preferred strategy still seems to be rigid processes and lots of manual labor. Obviously events like the Ariane 5 incident aren't helpful there. On Oct 20, 2016 8:30 AM, "M Farkas-Dyck" wrote: > [snip] From S.J.Thompson at kent.ac.uk Thu Oct 20 06:49:33 2016 From: S.J.Thompson at kent.ac.uk (Simon Thompson) Date: Thu, 20 Oct 2016 07:49:33 +0100 Subject: [Haskell-cafe] A backhanded compliment and a dilemma In-Reply-To: References: Message-ID: Hi Richard - are you aware of the work of Philippa Gardner and her colleagues on formalising ECMAScript? http://psvg.doc.ic.ac.uk/research/javascript.html Exciting stuff! They’ve certainly had their work published. Kind regards, Simon > On 20 Oct 2016, at 01:57, Richard A. O'Keefe wrote: > [snip] Simon Thompson | Professor of Logic and Computation School of Computing | University of Kent | Canterbury, CT2 7NF, UK s.j.thompson at kent.ac.uk | M +44 7986 085754 | W www.cs.kent.ac.uk/~sjt From imantc at gmail.com Thu Oct 20 06:53:54 2016 From: imantc at gmail.com (Imants Cekusins) Date: Thu, 20 Oct 2016 08:53:54 +0200 Subject: [Haskell-cafe] A backhanded compliment and a dilemma In-Reply-To: References: Message-ID: > the kind of programmer who is willing to ... People choosing a language / framework may not be the same people who write the code. People who begin coding a project may not be the same people who complete and maintain it. All these groups (choose, begin, complete, maintain) may develop different preferences for languages / frameworks. However, the 'choose' group will affect language usage stats more than the other groups.
From tdammers at gmail.com Thu Oct 20 06:57:52 2016 From: tdammers at gmail.com (Tobias Dammers) Date: Thu, 20 Oct 2016 08:57:52 +0200 Subject: [Haskell-cafe] A backhanded compliment and a dilemma In-Reply-To: References: Message-ID: Very true; however, the "choose" group has to take the rest of the chain into account. The fact that certain types of programmers gravitate towards certain technologies, and the reality of a limited hiring pool and high demand, mean that programmers' tech preferences ARE an important factor, even when plenty of programmers work with a stack they wouldn't necessarily pick themselves. On Oct 20, 2016 8:54 AM, "Imants Cekusins" wrote: > [snip] From imantc at gmail.com Thu Oct 20 07:32:50 2016 From: imantc at gmail.com (Imants Cekusins) Date: Thu, 20 Oct 2016 09:32:50 +0200 Subject: [Haskell-cafe] A backhanded compliment and a dilemma In-Reply-To: References: Message-ID: > the "choose" group has to take the rest of the chain into account Choosers rarely consult and listen to the other groups, even when those other groups are present and voice their preferences. Job ads where the employer is open to suggestions about the language are not common.
Preferences may differ. The same person may prefer a different language depending on which group they are in. If the same group, and the same people within that group, did all the stages (choose, begin, complete, maintain), they'd make a more balanced decision. From jon.fairbairn at cl.cam.ac.uk Thu Oct 20 08:50:50 2016 From: jon.fairbairn at cl.cam.ac.uk (Jon Fairbairn) Date: Thu, 20 Oct 2016 09:50:50 +0100 Subject: [Haskell-cafe] A backhanded compliment and a dilemma References: Message-ID: "Richard A. O'Keefe" writes: > TL;DR - Haskell mistaken for pseudo-code, a case study on machine-checked > specification for use in standards can't or won't be read > by the people who need it (who aren't the people in this mailing list). > My problem: I can't get this published. > > The backhanded compliment: the last reviewer excoriated me > for having too much pseudocode in my paper. (Despite the paper > stating explicitly that ALL code in the paper was real code that > had been executed.) You got it: Haskell doesn't look like a "real" > programming language, but like something written for human > comprehension during design. Just a shot in the dark: would it help to put all the braces and semicolons in explicitly? :-) -- Jón Fairbairn Jon.Fairbairn at cl.cam.ac.uk From jeremy.odonoghue at gmail.com Thu Oct 20 09:48:11 2016 From: jeremy.odonoghue at gmail.com (Jeremy O'Donoghue) Date: Thu, 20 Oct 2016 10:48:11 +0100 Subject: [Haskell-cafe] A backhanded compliment and a dilemma In-Reply-To: References: Message-ID: I suspect that there are few on this list who actually participate in significant standards bodies, so it is perhaps understandable that the replies so far tend towards "everyone uses language $FOO even though it sucks" and "people who choose the implementation language for a standard don't code it".
I participate in standards development across multiple bodies, and (at least arguably) in an area where formal methods would be particularly helpful: standards focussed around device security. I hope my perspective will help. The first thing to understand is that most standards bodies have participation from a wide range of individuals and organisations. There is no standardised recruiting process or essential body of knowledge, although in practice the contributors to most standards at the working group level have a generally acknowledged level of expertise in the subject under standardisation. A huge part of the expectation of a member of a working group is that they are competent to review and comment intelligently on proposed changes and additions to the standard in question. This implies that the working group needs to use language (in the widest sense) that is accessible to all of its members. To be specific to my own standards participation, I work on a set of standards around the "Trusted Execution Environment". This is essentially a small security-oriented runtime designed to support cryptographic operations. It offers a standard API to developers of applications which use the environment. The system is described mainly in terms of the behaviour of its APIs, along with normative English language text describing those things that are not straightforwardly expressed by a function signature - I think this is a fairly common approach in standardisation, and as stated, it essentially relies for its correctness on heroic efforts of checking text. The fact that we occasionally find "obvious" errors in text written some years ago is a testament to the fact that this process doesn't work very well, and I would suggest that this is actually well understood in the standards world. However, and I say this with sadness, the availability of tooling to adequately support the development of machine-checked specifications by traditional standards bodies is just not there.
In the case of the Trusted Execution Environment, which is one I know, the closest equivalent for which I am aware of a formal proof is seL4 (you could certainly build a Trusted Execution Environment on top of seL4, for example). Here's the problem: I am, I believe, a rare example of someone who knows (some) Haskell - hobbyist for over 10 years - and understands the implementation domain of TEE well. Frankly, the seL4 proofs are beyond my ability to understand, and that means that it is beyond my ability to determine whether they really represent an adequate formal proof of the correctness of seL4 (other than the fact that I know who did the work, of course, and as such I actually do trust it). I would posit that there is no-one else in the groups within which I work who has any more than superficial knowledge or understanding of Haskell, proof assistants and other important concepts. In effect, this means that the expert group within which I work would be unlikely to be competent to create a machine-checked specification because even if, say, I were to do so, others would be unable to review it. Since it seems unlikely (even if it were desirable) that standards body work will be performed only by people with a PhD in type theory, the reality is that the mechanisms that exist today are inaccessible to us. Now, I want to be clear that I believe that the work done by e.g. the seL4 team and others is hugely important in getting us to a place where this situation will change, since we are gradually building a set of "baseline" proofs around common behaviours, but it is important to consider that: - The proofs themselves contain little documentation to explain how they fit together, which makes them very inaccessible for those wishing to learn from or reuse them. The linked proof is for operation of the ARM architecture TCB - this is something I understand quite well in "everyday life" terms, but it is completely opaque to me as a proof.
- The sub-proofs are not structured in a way which fosters reuse. There are multiple proof tools, each with their own quirks and language, and it appears that porting proofs between them is tricky. - Proof tools themselves are not user friendly. - The proofs are too far divorced from real implementation to make them helpful to implementers (who are usually the end customers of a specification). For what it's worth, some of us (like myself) do create our own Haskell/OCaml/whatever models of behaviour for proposed new features, but we use these to inform our checking process, rather than as input to the standards process. In practice, many standards bodies are addressing the inadequacies of the "hand crafted, hand checked" specification model by attempting to create (usually Open Source) reference implementations that serve as a normative specification of system behaviour. For a number of reasons, I am not convinced that this will be much more successful than the current approach, but I do believe it is indicative that the standards field is aware of the issues it faces with increasingly complex specifications, and is looking for solutions. The challenge for the academic community is to demonstrate that there is a better way which is reasonably accessible to practitioners. I would suggest that a fairly good starting point of practical use (as opposed to academic interest) is a well-documented machine-checked specification for the C programming language *with reasonably user-friendly tools to accompany it* (this part could be the commercial spin-off), as this is, in practice, what many programming languages and API specifications are built upon - at least until Rust proves itself comprehensively superior for the task. Best regards Jeremy On 20 October 2016 at 01:57, Richard A. O'Keefe wrote: > [snip] From bence.kodaj at gmail.com Thu Oct 20 12:29:18 2016 From: bence.kodaj at gmail.com (Bence Kodaj) Date: Thu, 20 Oct 2016 14:29:18 +0200 Subject: [Haskell-cafe] Why does [1.0, 3 ..4] contain 5? Message-ID: Silvio, Brandon, Ben: thanks for your instructive comments. Please allow me to dwell on this issue for 5 more minutes - I'm really interested in your opinion. Do you think it would be bad if we changed the definition of numericEnumFromThenTo e1 e2 e3 in such a way that the resulting list would never contain elements that are greater than e3 (in the e1 <= e2 case)? I'm asking because you can actually be bitten by this quirk of enumerated lists of Doubles (and Floats) even if you don't use floating-point literals explicitly. Consider the following (this is how I stumbled upon this issue actually):

-- "odd semi-factorial" of n
osf n = product [1, 3 .. n]

-- "even semi-factorial" of n
esf n = product [2, 4 .. n]

Load this into GHCi and then go:

*Main> osf 4
3
*Main> esf 4
8
*Main> osf 4 / esf 4
1.875

This last result looks like total nonsense at first glance - it took me a good 15 minutes to realize what was going on. Of course, if I provide reasonable type signatures (e.g., Integer -> Integer) for osf and esf, then the problem goes away. Still, to me this strange result is a lot more surprising and unexpected than the fact that, e.g., [1.0, 1.1 .. 2.3] would not actually contain (a value close to) 2.3 if we changed the definition of numericEnumFromThenTo. After all, it's fairly common knowledge that floating-point arithmetic is inexact and you're not supposed to rely on precise equality of two floating-point values, or you shouldn't be surprised if basic mathematical identities don't hold for floating-point values (e.g., 1 + 2*0.1 /= 1 + 0.1 + 0.1). Whereas the apparent "change" in the value of osf 4 from one line to the next above is more baffling, I believe. But what do you think? Regards, Bence From achudnov at gmail.com Thu Oct 20 12:29:17 2016 From: achudnov at gmail.com (Andrey Chudnov) Date: Thu, 20 Oct 2016 08:29:17 -0400 Subject: [Haskell-cafe] A backhanded compliment and a dilemma In-Reply-To: References: Message-ID: Simon, with JSCert it's proofs in Coq --- not quick-checked Haskell code. The level of assurance is different. Richard, if not a secret, which conference did you send it to? I think that a PL conference would find a paper on this topic interesting and readable, as long as it's well motivated and there are substantial findings/contributions. So, something like PLDI would be a good fit. Depending on the degree of contribution, POPL and ICFP might be other options. I don't know if publishing there would help your effort in enlightening the writers of standards for more widely used languages, though.
/Andrey On 10/20/2016 02:49 AM, Simon Thompson wrote: > Hi Richard - are you aware of the work of Philippa Gardner and her colleagues on formalising ECMAScript? > > http://psvg.doc.ic.ac.uk/research/javascript.html > > Exciting stuff! They’ve certainly had their work published. > > Kind regards, > > Simon > >> On 20 Oct 2016, at 01:57, Richard A. O'Keefe wrote: >> [snip]
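Bence Kodaj's puzzle earlier in this digest comes down to the Haskell Report's rule for enumerating Fractional types: the `takeWhile` limit allows half a step of overshoot past the stated bound, so `[1.0, 3.0 .. 4.0]` ends at 5.0. The sketch below paraphrases that rule for illustration; the primed name is an invention here, and the real `numericEnumFromThenTo` in the Prelude may differ in minor details:

```haskell
-- A paraphrase of the Report's default enumeration rule for
-- Fractional types (the primed name is ours, not the Prelude's).
-- The stop predicate permits one half-step of overshoot, which is
-- why the upper bound can be exceeded.
numericEnumFromThenTo' :: (Ord a, Fractional a) => a -> a -> a -> [a]
numericEnumFromThenTo' n n' m = takeWhile p (iterate (+ (n' - n)) n)
  where
    p | n' >= n   = (<= m + (n' - n) / 2)
      | otherwise = (>= m + (n' - n) / 2)

main :: IO ()
main = do
  -- the effective limit is 4 + (3 - 1)/2 = 5, so 5.0 squeaks in:
  print (numericEnumFromThenTo' 1.0 3.0 4.0 :: [Double])       -- [1.0,3.0,5.0]
  -- hence the surprise once defaulting picks Double for osf/esf:
  print (product [1, 3 .. 4] / product [2, 4 .. 4] :: Double)  -- 1.875
```

At Integer, `[1, 3 .. 4]` stops at 3; at Double, the same syntax yields `[1.0, 3.0, 5.0]`, which is exactly the "change" in `osf 4` that the thread found baffling.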
From hasufell at hasufell.de Thu Oct 20 16:39:00 2016 From: hasufell at hasufell.de (Julian) Date: Thu, 20 Oct 2016 16:39:00 +0000 Subject: [Haskell-cafe] MTL vs Free-monads, what are your experiences In-Reply-To: References: <1476539382.1073.2.camel@joachim-breitner.de> <414986f8-e6e5-616c-3ff8-4892741064f2@hasufell.de> Message-ID: <71f2ac2a-4591-8bb6-3d30-addc6a93887f@hasufell.de> Damian Nadales: > I was thinking, besides the evaluation of performance, the simplicity > of the approach is also important ("developer time is more expensive > than CPU time" anyone?). Note that I said simple and not easy ;) > > I guess this aspect is a rather subjective one, but maybe there are > elements that can be intuitively quantified. Right now I'm playing > with free monads and MTL, to have an idea which one seems simpler to > me. > I care about simplicity too, but my interpretation might be slightly different. I also focus more on the things that a "normal" programmer cares about, without going into esoteric use cases and technical properties. What I want wrt effects in Haskell is: 1. easy to construct (MTL is built on top of transformers, so you get the same long tail of transformer machinery there) 2. dynamic: most of the time I don't care about the ordering of effect layers 3. no liftXY boilerplate code (freer [0] can do that to some extent with type inference) 4. being able to add and remove effects without it screwing too much with my internal API Basically, it must be declarative, intuitive and non-intrusive. For some reason, I lean more towards freer/EE here. But it's more like hope. We tried to make more complicated use of IO subtyping with freer and it turned out to be rather complicated [1]. Also, afair, edwardk wasn't particularly overwhelmed by the new EE approach [2].
[0] https://hackage.haskell.org/package/freer [1] https://gitlab.com/queertypes/freer/issues/7 [2] https://www.reddit.com/r/haskell/comments/387ex0/are_extensible_effects_a_complete_replacement_for/crt1pzm From yallop at gmail.com Thu Oct 20 16:54:09 2016 From: yallop at gmail.com (Jeremy Yallop) Date: Thu, 20 Oct 2016 17:54:09 +0100 Subject: [Haskell-cafe] PEPM 2017 Call for Poster Papers (submission deadline Tuesday 8th November) Message-ID: CALL FOR POSTERS Workshop on PARTIAL EVALUATION AND PROGRAM MANIPULATION (PEPM 2017) PEPM 2017 information: http://conf.researchr.org/home/PEPM-2017 Submissions: https://pepm17.hotcrp.com/ Paris, France, January 16th - 17th, 2017 (co-located with POPL 2017) PEPM is the premier forum for discussion of semantics-based program manipulation. The first ACM SIGPLAN PEPM symposium took place in 1991, and meetings have been held in affiliation with POPL every year since 2006. PEPM 2017 will be based on a broad interpretation of semantics-based program manipulation, reflecting the expanded scope of PEPM in recent years beyond the traditionally covered areas of partial evaluation and specialization. Posters ------- In order to maintain the dynamic and interactive nature of PEPM, we solicit submission of posters. Poster submissions are 2-page articles in ACM Proceedings style that present preliminary work (see "Submission guidelines" below for more details). If accepted, the work will be presented as part of an interactive poster session at PEPM. Scope ----- Topics of interest for PEPM 2017 include, but are not limited to: * Program and model manipulation techniques such as: supercompilation, partial evaluation, fusion, on-the-fly program adaptation, active libraries, program inversion, slicing, symbolic execution, refactoring, decompilation, and obfuscation. 
* Program analysis techniques that are used to drive program/model manipulation such as: abstract interpretation, termination checking, binding-time analysis, constraint solving, type systems, automated testing and test case generation. * Techniques that treat programs/models as data objects including metaprogramming, generative programming, embedded domain-specific languages, program synthesis by sketching and inductive programming, staged computation, and model-driven program generation and transformation. * Application of the above techniques including case studies of program manipulation in real-world (industrial, open-source) projects and software development processes, descriptions of robust tools capable of effectively handling realistic applications, benchmarking. Examples of application domains include legacy program understanding and transformation, DSL implementations, visual languages and end-user programming, scientific computing, middleware frameworks and infrastructure needed for distributed and web-based applications, embedded and resource-limited computation, and security. This list of categories is not exhaustive, and we encourage submissions describing applications of semantics-based program manipulation techniques in new domains. If you have a question as to whether a potential submission is within the scope of the workshop, please contact the programme chairs. Submission guidelines --------------------- * Posters should describe work relevant to the PEPM community, and must not exceed 2 pages in ACM Proceedings style. We invite poster submissions that present early work not yet ready for submission to a conference or journal, identify new research problems, showcase tools and technologies developed by the author(s), or describe student research projects. If accepted, the work will be presented as part of an interactive poster session at PEPM. At least one author of each accepted contribution must attend the workshop and present the work. 
Student participants with accepted poster papers can apply for a SIGPLAN PAC grant to help cover travel expenses and other support. PAC also offers other support, such as for child-care expenses during the meeting or for travel costs for companions of SIGPLAN members with physical disabilities, as well as for travel from locations outside of North America and Europe. For details on the PAC programme, see its web page. Publication ----------- Posters will appear along with accepted papers in formal proceedings published by ACM Press and in the ACM Digital Library. Keynote ------- Neil Jones (DIKU) will give the PEPM keynote talk, titled Compiling Untyped Lambda Calculus to Lower-level Code by Game Semantics and Partial Evaluation Submission ---------- Posters should be submitted electronically via HotCRP. https://pepm17.hotcrp.com/ Authors using LaTeX to prepare their submissions should use the new improved SIGPLAN proceedings style, and specifically the sigplanconf.cls 9pt template. Important Dates --------------- * Poster submission : Tuesday 8th November 2016 * Author notification : Friday 18th November 2016 * Camera ready : Monday 28th November 2016 * Workshop : Monday 16th - Tuesday 17th January 2017 The proceedings will be published 2 weeks pre-conference. AUTHORS TAKE NOTE: The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of your conference. The official publication date affects the deadline for any patent filings related to published work. (For those rare conferences whose proceedings are published in the ACM Digital Library after the conference is over, the official publication date remains the first day of the conference.). 
PEPM'17 Programme Committee --------------------------- Elvira Albert (Complutense University of Madrid, Spain) Don Batory (University of Texas at Austin, USA) Martin Berger (University of Sussex, UK) Sebastian Erdweg (TU Delft, Netherlands) Andrew Farmer (Facebook, USA) Matthew Flatt (University of Utah, USA) John Gallagher (Roskilde University, Denmark) Robert Glück (DIKU, Denmark) Jurriaan Hage (Utrecht University, Netherlands) Zhenjiang Hu (National Institute of Informatics, Japan) Yukiyoshi Kameyama (University of Tsukuba, Japan) Ilya Klyuchnikov (Facebook, UK) Huiqing Li (EE, UK) Annie Liu (Stony Brook University, USA) Markus Püschel (ETH Zurich, Switzerland) Ryosuke SATO (University of Tokyo, Japan) Sven-Bodo Scholz (Heriot-Watt University, UK) Ulrik Schultz (co-chair) (University of Southern Denmark) Ilya Sergey (University College London, UK) Chung-chieh Shan (Indiana University, USA) Tijs van der Storm (Centrum Wiskunde & Informatica, Netherlands) Jeremy Yallop (co-chair) (University of Cambridge, UK) -- Ulrik Pagh Schultz, Associate Professor, University of Southern Denmark ups at mmmi.sdu.dk http://www.sdu.dk/ansat/ups +4565503570 From kc1956 at gmail.com Thu Oct 20 18:40:44 2016 From: kc1956 at gmail.com (KC) Date: Thu, 20 Oct 2016 11:40:44 -0700 Subject: [Haskell-cafe] I cannot seem to get the second aio command to work on Create a Game with Haskell hgamer3D.org Message-ID: I'm getting an error 403 permission denied aio http://www.hgamer3d.org/component/CreateProject arriccio is going to download and install the following files: -------------------------------------------------------------- (more license info can be obtained by using the "aio license" cmd) file: http://www.hgamer3d.org/downloads/lua-amd64-windows-1.0.0.tar.gz signing key: https://www.github.com/urs-of-the-backwoods.keys license: MIT License please confirm download with "yes": yes downloading: http://www.hgamer3d.org/downloads/lua-amd64-windows-1.0.0.tar.gz2016/10/20 11:23:56 cannot 
download url: http://www.hgamer3d.org/downloads/lua-amd64-windows-1.0.0.tar.gz, http error: 403 -- -- Sent from an expensive device which will be obsolete in a few months! :D Casey -------------- next part -------------- An HTML attachment was scrubbed... URL: From jo at durchholz.org Thu Oct 20 19:30:08 2016 From: jo at durchholz.org (Joachim Durchholz) Date: Thu, 20 Oct 2016 21:30:08 +0200 Subject: [Haskell-cafe] A backhanded compliment and a dilemma In-Reply-To: References: Message-ID: <7fd1873f-d72f-950d-85a3-8eaf90cbc4d1@durchholz.org> Am 20.10.2016 um 08:30 schrieb M Farkas-Dyck: > On 19/10/2016, Tobias Dammers wrote: >> Consider this: the most popular programming languages overall are >> (arguably) PHP and Java, despite being almost half a century behind the >> state of the art of programming language design in many ways. Ask yourself >> why that is (and no, I haven't fully figured this one out myself either). > > Some people like to be tied up and whipped; i think it's the same phenomenon. That is neither funny nor accurate. Not funny for those among us who happen to program in one of these languages, for whatever reason. And wildly inaccurate because while Haskell is an excellent language, the same cannot be said about several relevant parts of its ecosystem. In fact from the perspective of a Java programmer, mentions of "Cabal hell" sound just like "some people like being tortured", which obviously isn't true either. From jo at durchholz.org Thu Oct 20 19:58:04 2016 From: jo at durchholz.org (Joachim Durchholz) Date: Thu, 20 Oct 2016 21:58:04 +0200 Subject: [Haskell-cafe] A backhanded compliment and a dilemma In-Reply-To: References: Message-ID: Am 20.10.2016 um 09:32 schrieb Imants Cekusins: >> the "choose" group has to take the rest of the chain into account > > Choosers rarely consult and listen to the other groups even when those > other groups are present and voice their preferences. 
They consult with their architects if they have no personal expertise. If they have personal expertise, they usually *are* the architects. Architects tend to stick with established technology because switching is costly (loss of productivity because programmers have to relearn, loss of codebase that's incompatible with the new ecosystem, loss of tool integration that needs to be prepared for the new ecosystem, loss of programmer time who need to rewrite stuff), *plus* it is risky because nobody can guarantee that you will ever recoup the losses, let alone gain an edge. Just do the math: assuming a developer-year costs USD 100,000 (not too absurd in Western countries though there are obviously differences), a team of ten developers working for a year to embark on a new language is a full million dollars that needs to amortize. And you need to guarantee that the transition is done after a year, and that programmers will be more productive by 50%, and then it is still three years until that expense is amortized. You simply don't do that. You cannot (or at least should not) bet your company's existence on that kind of stuff, particularly if those guarantees are not available. In practice, drastic coding infrastructure changes tend to take decades. Large businesses (i.e. those who have the money to actually experiment with disruptive stuff) are currently in the process of migrating their applications from mainframes to webservices, and the typical timescale for that is in years, sometimes decades, even for them. Things change if you get a guarantee. If a competitor is able to make their business processes substantially more flexible, then there you have your guarantee, and then you'll find the employers who are willing to use^Wtry that disruptive technology. Webservices and enterprise buses were such changes, for example - and they are still migrating. > Job ads where > employer is open to suggestions about the language are not common. 
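[Editorial note: the back-of-the-envelope amortization a few paragraphs above can be written out as a small Haskell sketch. All figures are the message's own illustrative assumptions (USD 100,000 per developer-year, a team of ten, one lost year, a 50% productivity gain), not real data:]

```haskell
-- Sketch of the language-switch amortization estimate from the
-- message above. All inputs are illustrative assumptions, not data.

devYearCost :: Double
devYearCost = 100000          -- USD per developer-year (assumed)

teamSize :: Double
teamSize = 10                 -- developers (assumed)

-- One year of lost productivity during the transition.
transitionCost :: Double
transitionCost = teamSize * devYearCost   -- 1,000,000 USD

productivityGain :: Double
productivityGain = 0.5        -- 50% more output after the switch (assumed)

-- Yearly saving: the fraction of the team's cost no longer needed
-- to produce the same output at the higher productivity level.
yearlySaving :: Double
yearlySaving = teamSize * devYearCost * (1 - 1 / (1 + productivityGain))

yearsToAmortize :: Double
yearsToAmortize = transitionCost / yearlySaving   -- roughly 3 years

main :: IO ()
main = do
  putStrLn $ "Transition cost (USD): " ++ show transitionCost
  putStrLn $ "Yearly saving (USD):   " ++ show yearlySaving
  putStrLn $ "Years to amortize:     " ++ show yearsToAmortize
```

[Under these assumptions the switch only pays for itself after roughly three years, matching the estimate in the message; changing any input shifts the break-even point accordingly.]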
Well, changing the language in a running business process is a high-risk, dubious-reward proposition. You will have a hard time convincing anybody of trying that out. > If the same group and people within that group did all the stages: > choose, begin, complete, maintain, they'd make a more balanced decision. Talking as somebody who has seen both sides of the fence, I can say that these decisions cannot be made "more balanced". You have to balance risk against reward, all while taking constraints like developer availability, in-house ability to actually make use of the advantages, retraining costs and overall profitability into account. Going just by qualities of the language would be the unbalanced decision, I'd say. From ok at cs.otago.ac.nz Thu Oct 20 22:00:42 2016 From: ok at cs.otago.ac.nz (Richard A. O'Keefe) Date: Fri, 21 Oct 2016 11:00:42 +1300 Subject: [Haskell-cafe] A backhanded compliment and a dilemma In-Reply-To: References: Message-ID: <82ea1ff2-0d00-f196-f677-bb2e32958412@cs.otago.ac.nz> On 20/10/16 7:49 PM, Simon Thompson wrote: > Hi Richard - are you aware of the work of Philippa Gardner and her colleagues on formalising ECMAScript? > > http://psvg.doc.ic.ac.uk/research/javascript.html > > Exciting stuff! They’ve certainly had their work published. Oddly enough, I am currently studying the ECMAScript Internationalization API Specification (ECMA-402), and had been thinking (a) what kind of prehistoric weed are these people smoking? (b) I wonder if I could talk a student into trying to extract something machine-checkable from this? So I am very pleased to have that link. Thank you. (Again by coincidence, I am currently trying to learn Coq.) From ok at cs.otago.ac.nz Thu Oct 20 22:01:58 2016 From: ok at cs.otago.ac.nz (Richard A. 
O'Keefe) Date: Fri, 21 Oct 2016 11:01:58 +1300 Subject: [Haskell-cafe] A backhanded compliment and a dilemma In-Reply-To: References: Message-ID: <6881951b-b66b-9ee5-5cc7-975bdd25f16b@cs.otago.ac.nz> On 20/10/16 9:50 PM, Jon Fairbairn wrote: > Just a shot in the dark: would it help to put all the braces and > semicolons in explicitly? :-) ROTFLMAO From allbery.b at gmail.com Thu Oct 20 22:04:54 2016 From: allbery.b at gmail.com (Brandon Allbery) Date: Thu, 20 Oct 2016 18:04:54 -0400 Subject: [Haskell-cafe] A backhanded compliment and a dilemma In-Reply-To: <6881951b-b66b-9ee5-5cc7-975bdd25f16b@cs.otago.ac.nz> References: <6881951b-b66b-9ee5-5cc7-975bdd25f16b@cs.otago.ac.nz> Message-ID: On Thu, Oct 20, 2016 at 6:01 PM, Richard A. O'Keefe wrote: > On 20/10/16 9:50 PM, Jon Fairbairn wrote: > >> Just a shot in the dark: would it help to put all the braces and >> semicolons in explicitly? :-) >> > ROTFLMAO ...shortly to be followed by https://twitter.com/UdellGames/status/788690145822306304 -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From siddu.druid at gmail.com Sat Oct 22 09:29:55 2016 From: siddu.druid at gmail.com (Siddharth Bhat) Date: Sat, 22 Oct 2016 09:29:55 +0000 Subject: [Haskell-cafe] Wiki account Request Message-ID: Hey, I'd like a wiki account to edit the Template Haskell pages (the links to the user guide are broken) preferred username: bollu Also, I think it's a little ridiculous to have this be a closed thing (I can't simply register to the wiki). Shouldn't the barrier to entry be _lowered_? Thanks, Siddharth -- Sending this from my phone, please excuse any typos! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rik at dcs.bbk.ac.uk Sat Oct 22 12:18:51 2016 From: rik at dcs.bbk.ac.uk (Rik Howard) Date: Sat, 22 Oct 2016 13:18:51 +0100 Subject: [Haskell-cafe] A Procedural-Functional Language (WIP) Message-ID: Dear Haskell Cafe Subscribers on the recommendation of someone for whom I have great respect, I have just subscribed to this list, it having been suggested as being a good place for me to get feedback regarding a project that I have been working on. I am humbled by the level of discussion and it feels to be a very bold step for me to request anybody's time for my words. The linked document is a four-page work-in-progress summary: the length being stipulated, potential novelty being the other main requirement. Given the requirements, the summary necessarily glosses over some details and is not yet, I fear, completely correct. The conclusion is, more or less, the one at which I am aiming; the properties are, more or less, the ones that are needed. http://www.dcs.bbk.ac.uk/~rik/gallery/work-in-progress/document.pdf The work arises from an investigation into functional programming syntax and semantics. The novelty seems to be there but there is too a question as to whether it is simply a gimmick. I try to suggest that it is not but, by that stage, there have been many assumptions so it is hard to be sure whether the suggestion is valid. If anyone has any comments, questions or suggestions, they would be gratefully received. Yours sincerely Rik Howard -------------- next part -------------- An HTML attachment was scrubbed... URL: From malcolm.wallace at me.com Sat Oct 22 13:39:52 2016 From: malcolm.wallace at me.com (Malcolm Wallace) Date: Sat, 22 Oct 2016 14:39:52 +0100 Subject: [Haskell-cafe] Wiki account Request In-Reply-To: References: Message-ID: On 22 Oct 2016, at 10:29, Siddharth Bhat wrote: > Also, I think it's a little ridiculous to have this be a closed thing (I can't simply register to the wiki). Shouldn't the barrier to entry be _lowered_? 
It is necessary to guard against spambots that auto-register themselves and then destroy the value of the wiki. Regards, Malcolm From capn.freako at gmail.com Sat Oct 22 13:42:14 2016 From: capn.freako at gmail.com (David Banas) Date: Sat, 22 Oct 2016 06:42:14 -0700 Subject: [Haskell-cafe] Request for help w/ Ex. 10 in *Arrows and Computation*. Message-ID: Hi all, I’m stuck in Ex. 10 of *Arrows and Computation*, by Ross Paterson: https://htmlpreview.github.io/?https://github.com/capn-freako/Haskell_Misc/blob/master/Arrows_and_Computation/Arrows_and_Computation.html#ex9 It seems to me that if, in the third step of my proof, I could convert the *left (pure fst)* into *left (first )*, then I could use the **distribution** axiom to convert *pure distr >>> left (first )* into *first (left ) >>> pure distr *. At that point, I would have (starting from the left side of my expression): *first (..) >>> first (..)*, and could apply the functor property of *first*, in order to simplify the expression. Thanks, -db -------------- next part -------------- An HTML attachment was scrubbed... URL: From jm at memorici.de Sat Oct 22 14:52:51 2016 From: jm at memorici.de (Jonn Mostovoy) Date: Sat, 22 Oct 2016 17:52:51 +0300 Subject: [Haskell-cafe] Off-the-shelf Configurable BFT In-Reply-To: References: Message-ID: Dear all, we're implementing a variant of cryptocurrency with delegated forgers. It is distributed in building the ledger and decentralized in nodes participating in forming transactions. For this we would need to have a BFT algorithm to govern the views on the network and deal with partitions. Implementation has to be industrial-strength and used in production. We're willing to contribute actively to such implementation development. We're looking at HoneyBadgerBFT as an ideal candidate, but it's not implemented in Haskell. The only BFT algorithm implementation we found is a toy RaftBFT implementation. 
If you have an opensource BFT implementation, or proprietary BFT implementation you would like to trade, please report those implementations here. If you need more details on our use case, don't hesitate to ask. Everything we do is open source and my colleagues or I will gladly answer any questions you might have. Sincerely yours, Jonn Mostovoy, Cardano SL Project Manager at IOHK | Serokell https://serokell.io https://iohk.io -------------- next part -------------- An HTML attachment was scrubbed... URL: From leiva.steven at gmail.com Sat Oct 22 18:58:35 2016 From: leiva.steven at gmail.com (Steven Leiva) Date: Sat, 22 Oct 2016 14:58:35 -0400 Subject: [Haskell-cafe] Left Associativity and Precedence Message-ID: Hi folks, Haskell beginner here. I ran across the expression below (from the Haskell Programming From First Principles book), and I was having a hard time putting parentheses around the expression in order to make the order of application more explicit. Here is the expression in question: *take 5 . filter odd . enumFrom $ 3* I realize that part of the hang-up that I was having was that I focused solely on the fact that function application is left-associative, and not on the fact that specific functions / operators can have a declared precedence. Once I realized that the $ operator has a declared precedence of 0, the expression took on the following form: *(take 5 . filter odd . enumFrom) $ (3)* Now, everything to the left of the $ operator is simply function application (whitespace) and the function composition operator. I made the same mistake again of ignoring the fact that particular functions / operators can have a declared precedence. Realizing that mistake, and that the function composition has a declared precedence of 9, then the expression took on this form: *((take 5) . (filter odd) . 
(enumFrom)) $ (3)* So, the general rule that I took away from this is that, when trying to imagine how an expression reduces to normal form, function application does indeed proceed from left to right, but we have to take into account the declared precedence of particular functions / operators. Is my thinking correct here? Are there any particular holes in this logic? I still don't think I'd be able to figure out "where the parentheses go" if I was given a new expression with functions / operators with 3 or 4 different declared precedences. For a beginner, is that a huge problem, or is knowledge of the concept enough? -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at cs.brynmawr.edu Sat Oct 22 19:35:40 2016 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Sat, 22 Oct 2016 15:35:40 -0400 Subject: [Haskell-cafe] A Procedural-Functional Language (WIP) In-Reply-To: References: Message-ID: <0AAFE48F-89D4-46BB-9221-3E5D519316D4@cs.brynmawr.edu> Hi Rik, I'm unsure what to make of your proposal, as it's hard for me to glean out what you're proposing. Do you have some sample programs written in your proposed language? What is the language's grammar? What is its type system (stated in terms of inference rules)? Having these concrete descriptors of the language would be very helpful in assessing this work. Richard > On Oct 22, 2016, at 8:18 AM, Rik Howard wrote: > > Dear Haskell Cafe Subscribers > > on the recommendation of someone for whom I have great respect, I have just subscribed to this list, it having been suggested as being a good place for me to get feedback regarding a project that I have been working on. I am humbled by the level of discussion and it feels to be a very bold step for me to request anybody's time for my words. > > The linked document is a four-page work-in-progress summary: the length being stipulated, potential novelty being the other main requirement. 
Given the requirements, the summary necessarily glosses over some details and is not yet, I fear, completely correct. The conclusion is, more or less, the one at which I am aiming; the properties are, more or less, the ones that are needed. > > http://www.dcs.bbk.ac.uk/~rik/gallery/work-in-progress/document.pdf > > The work arises from an investigation into functional programming syntax and semantics. The novelty seems to be there but there is too a question as to whether it is simply a gimmick. I try to suggest that it is not but, by that stage, there have been many assumptions so it is hard to be sure whether the suggestion is valid. If anyone has any comments, questions or suggestions, they would be gratefully received. > > Yours sincerely > Rik Howard > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jo at durchholz.org Sat Oct 22 19:48:22 2016 From: jo at durchholz.org (Joachim Durchholz) Date: Sat, 22 Oct 2016 21:48:22 +0200 Subject: [Haskell-cafe] A Procedural-Functional Language (WIP) In-Reply-To: References: Message-ID: <27b68252-ea31-3e3a-c177-04763e645e75@durchholz.org> Am 22.10.2016 um 14:18 schrieb Rik Howard: > Dear Haskell Cafe Subscribers > > on the recommendation of someone for whom I have great respect, I have > just subscribed to this list, it having been suggested as being a good > place for me to get feedback regarding a project that I have been > working on. I am humbled by the level of discussion and it feels to be > a very bold step for me to request anybody's time for my words. > > The linked document is a four-page work-in-progress summary: the length > being stipulated, potential novelty being the other main requirement. 
> Given the requirements, the summary necessarily glosses over some > details and is not yet, I fear, completely correct. It is a programme for designing a programming language. It is leaving out a number of central issues: How to approach modularity, whether it should have opaque types (and why), whether there should be subtypes or not, how the type system is supposed to deal with arithmetic which has almost-compatible integer types and floating-point types. That's just off the top of my head, I am pretty sure that there are other issues. It is hard to discuss merits or problems at this stage, since all of these issues tend to influence each other. > The work arises from an investigation into functional programming syntax > and semantics. The novelty seems to be there but there is too a > question as to whether it is simply a gimmick. I try to suggest that it > is not but, by that stage, there have been many assumptions so it is > hard to be sure whether the suggestion is valid. If anyone has any > comments, questions or suggestions, they would be gratefully received. One thing I have heard is that effects, subtypes and type system soundness do not mix well. Subtypes are too useful to ignore, unsound type systems are not worth the effort, so I find it a bit surprising that the paper has nothing to say about the issue. Just my 2c. Regards, Jo From chneukirchen at gmail.com Sat Oct 22 19:54:56 2016 From: chneukirchen at gmail.com (Christian Neukirchen) Date: Sat, 22 Oct 2016 21:54:56 +0200 Subject: [Haskell-cafe] Munich Haskell Meeting, 2016-10-25 @ 19:30 Message-ID: <87d1is17n3.fsf@gmail.com> Dear all, Next week, our monthly Munich Haskell Meeting will take place again on Tuesday, October 25 at **Augustiner-Gaststätte Rumpler (Baumstraße 21)** at 19h30. Please note the new location! For details see here: http://muenchen.haskell.bayern/dates.html If you plan to join, please add yourself quickly to this dudle so we can reserve enough seats! 
It is OK to add yourself to the dudle anonymously or pseudonymously. https://dudle.inf.tu-dresden.de/haskell-munich-oct-2016/ Everybody is welcome! cu, -- Christian Neukirchen http://chneukirchen.org From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Sat Oct 22 21:35:10 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Sat, 22 Oct 2016 22:35:10 +0100 Subject: [Haskell-cafe] Left Associativity and Precedence In-Reply-To: References: Message-ID: <20161022213510.GH30310@weber> On Sat, Oct 22, 2016 at 02:58:35PM -0400, Steven Leiva wrote: > I still don't think I'd be able to figure out "where the parentheses go" if > I was given a new expression with functions / operators with 3 or 4 > different declared precedences. For a beginner, is that a huge problem, or > is knowledge of the concept enough? If precedence is not intuitively clear then it's badly written code. It's not the fault of the reader. From ruben.astud at gmail.com Sun Oct 23 05:51:39 2016 From: ruben.astud at gmail.com (Ruben Astudillo) Date: Sun, 23 Oct 2016 02:51:39 -0300 Subject: [Haskell-cafe] Wiki account Request In-Reply-To: References: Message-ID: <054de80e-ff20-1245-4270-fa6f68f1627e@gmail.com> On 22/10/16 10:39, Malcolm Wallace wrote: > > On 22 Oct 2016, at 10:29, Siddharth Bhat wrote: >> Also, I think it's a little ridiculous to have this be a closed thing >> (I can't simply register to the wiki). Shouldn't the barrier to entry >> be _lowered_? > > It is necessary to guard against spambots that auto-register > themselves and then destroy the value of the wiki. > > Regards, > Malcolm As it currently stands, too few people can even edit the wiki now, and those are humans, you know? :-D Now in all seriousness, how do other wikis handle this issue? I know Wikipedia filters blocks of IPs & records who did what to spot badly behaving actors; maybe this is too much for our case? What does, for example, the archwiki do in this regard? 
(I will ask them) -- Ruben From jo at durchholz.org Sun Oct 23 08:49:24 2016 From: jo at durchholz.org (Joachim Durchholz) Date: Sun, 23 Oct 2016 10:49:24 +0200 Subject: [Haskell-cafe] Wiki account Request In-Reply-To: <054de80e-ff20-1245-4270-fa6f68f1627e@gmail.com> References: <054de80e-ff20-1245-4270-fa6f68f1627e@gmail.com> Message-ID: <69e24b46-3eb8-5ed0-66b9-36b1c594c223@durchholz.org> Am 23.10.2016 um 07:51 schrieb Ruben Astudillo: > Now in all seriousness, how do other > wikis handle this issue? Manual work, such as manually checking registration requests. Talking from experience with my own blog, I can say that antispam software catches 99.9% of spam but I still need to react to the remaining 0.05% false positives and 0.05% false negatives. I need to do that promptly, else the real comments will take too much time to appear, frustrating the authors. My workload is "once per month or less", but that's likely because my blog is virtually unknown. From rik at dcs.bbk.ac.uk Sun Oct 23 08:52:51 2016 From: rik at dcs.bbk.ac.uk (Rik Howard) Date: Sun, 23 Oct 2016 09:52:51 +0100 Subject: [Haskell-cafe] A Procedural-Functional Language (WIP) In-Reply-To: <0AAFE48F-89D4-46BB-9221-3E5D519316D4@cs.brynmawr.edu> References: <0AAFE48F-89D4-46BB-9221-3E5D519316D4@cs.brynmawr.edu> Message-ID: Hi Richard, thank you for your reply. It is becoming apparent that my explanation can be made clearer. I'm investigating a language that takes something like core Haskell (a Lambda Calculus augmented with let blocks) to satisfy its pure function requirements but that takes a different approach when it comes to IO by employing procedures. For IO, a procedure returns only 'okay' or an error via the mechanism that a function would use for returning values; the non-deterministic values produced by the procedure are returned via variable parameters. 
For example, to define a procedure to echo once from standard in to out: echo = try (read string) (write string) error The value coming from standard-in is to be captured in the 'string' out-variable and so is available to be written to standard-out, the 'try' built-in being analogous to 'if'. Rough (inconsistent) examples exist; its grammar and type system are in a slightly better state but not yet written up properly. What I could quickly add as an appendix, I have but the notation needs to be made more standard for easier comparison. I am working on another paper that will address the need for a more-concrete and standard presentation. I hope that this goes some way to answering your questions; please say if it doesn't! Rik On 22 October 2016 at 20:35, Richard Eisenberg wrote: > Hi Rik, > > I'm unsure what to make of your proposal, as it's hard for me to glean out > what you're proposing. Do you have some sample programs written in your > proposed language? What is the language's grammar? What is its type system > (stated in terms of inference rules)? Having these concrete descriptors of > the language would be very helpful in assessing this work. > > Richard > > On Oct 22, 2016, at 8:18 AM, Rik Howard wrote: > > Dear Haskell Cafe Subscribers > > on the recommendation of someone for whom I have great respect, I have > just subscribed to this list, it having been suggested as being a good > place for me to get feedback regarding a project that I have been working > on. I am humbled by the level of discussion and it feels to be a very bold > step for me to request anybody's time for my words. > > The linked document is a four-page work-in-progress summary: the length > being stipulated, potential novelty being the other main requirement. > Given the requirements, the summary necessarily glosses over some details > and is not yet, I fear, completely correct. 
The conclusion is, more or > less, the one at which I am aiming; the properties are, more or less, the > ones that are needed. > > http://www.dcs.bbk.ac.uk/~rik/gallery/work-in-progress/document.pdf > > > The work arises from an investigation into functional programming syntax > and semantics. The novelty seems to be there but there is too a question > as to whether it is simply a gimmick. I try to suggest that it is not but, > by that stage, there have been many assumptions so it is hard to be sure > whether the suggestion is valid. If anyone has any comments, questions or > suggestions, they would be gratefully received. > > Yours sincerely > Rik Howard > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From 123.wizek at gmail.com Sun Oct 23 09:29:08 2016 From: 123.wizek at gmail.com (Wizek) Date: Sun, 23 Oct 2016 09:29:08 +0000 Subject: [Haskell-cafe] Obtaining location of subexpressions from Template Haskell In-Reply-To: References: Message-ID: Dear Alejandro, I'd also be interested in this. I haven't had the chance to use `{-# LINE ... #-}` pragmas before, but I do have some experience with TH. It might help us to get closer to a solution if you described what the transformation is actually doing, and/or include some examples of before and after transformation. It could at the very least help me understand more of the context. Thinking further about the question, it seems to me that as soon as our expression has already been transformed to `Exp` (or `Q Exp`) all positional and formatting information is lost. But maybe there is still hope. Have you considered using the 'haskell-src-meta' package? And/or using QuasiQuotes to transform "String -> Q Exp"? 
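If positional information can be recovered (for example by parsing from String, as suggested above), attaching it to generated declarations is straightforward with the template-haskell API. The following is an editorial sketch, not code from the thread; the helper name `withLinePragma` is invented for illustration, while `PragmaD` and `LineP` are the real constructors from template-haskell 2.11:

```haskell
{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH

-- Prepend a {-# LINE line file #-} pragma to a list of generated
-- declarations, so that errors in the generated code are reported
-- against the original source location.
withLinePragma :: Int -> String -> [Dec] -> [Dec]
withLinePragma line file ds = PragmaD (LineP line file) : ds
```

A splice could then wrap its output, e.g. `withLinePragma 42 "Original.hs" decs`, once the line number 42 has been obtained from a position-preserving parse.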
I have a vague memory that there is a haskell-parsing package or project out there (even if it is not specifically 'haskell-src-meta') that does retain the positional information that you are looking for when parsing from String (or `IsString a`). Once we do have line numbers, including them as pragmas seems fairly straightforward as `PragmaD (LineP Int String) :: Dec` Source: https://hackage.haskell.org/package/template-haskell-2.11.0.0/docs/Language-Haskell-TH.html#t:Dec Best Regards, Milán On Tue, Oct 18, 2016 at 2:53 PM Alejandro Serrano Mena wrote: > Dear Haskell-café, > I am trying to write a small TH module which manipulates Haskell code. > Basically, I have a function `transform :: Exp -> Q Exp` which I call this > way: > > g = $(transform [| map (+1) [1,2,3] |]) > Since this is a source-to-source transformation, I would like to generate > also {-# LINE ... #-} pragmas to point any error back to their original > location. > My question is: is there any way to obtain the location of sub-expressions > inside a `Exp` or `Q Exp`? > > Thanks in advance, > Alejandro > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From info at nicole-rauch.de Sun Oct 23 10:01:32 2016 From: info at nicole-rauch.de (Nicole Rauch) Date: Sun, 23 Oct 2016 12:01:32 +0200 Subject: [Haskell-cafe] Wiki account Request In-Reply-To: <054de80e-ff20-1245-4270-fa6f68f1627e@gmail.com> References: <054de80e-ff20-1245-4270-fa6f68f1627e@gmail.com> Message-ID: > Now in all seriousness, how do other > wikis handle this issue? 
On the site I co-developed (a software craftsmanship community site with 1500+ users) everybody can register themselves (with OpenID / OAuth providers), and to edit the wiki, one needs to be logged in. So people have instant access if they want, and we have not had a single spambot case so far. Cheers, Nicole From sven.sauleau at xtuc.fr Sun Oct 23 11:28:12 2016 From: sven.sauleau at xtuc.fr (Sven SAULEAU) Date: Sun, 23 Oct 2016 11:28:12 +0000 Subject: [Haskell-cafe] A Procedural-Functional Language (WIP) In-Reply-To: References: <0AAFE48F-89D4-46BB-9221-3E5D519316D4@cs.brynmawr.edu> Message-ID: Hi, Seems interesting, but I have difficulty understanding it. Some concrete examples would help. Your example could be done in Haskell using IO Monads. Wouldn’t procedures be inlined as a compilation optimisation in Haskell? From my point of view, your proposal will degrade code readability and wouldn’t improve the efficiency of the execution. As I said earlier, I may have misunderstood. Feel free to correct me. Regards, Sven On 23 Oct 2016, at 10:52, Rik Howard wrote: Hi Richard thank you for your reply. It is becoming apparent that my explanation can be made clearer. I'm investigating a language that takes something like core Haskell (a Lambda Calculus augmented with let blocks) to satisfy its pure function requirements but that takes a different approach when it comes to IO by employing procedures. For IO, a procedure returns only 'okay' or an error via the mechanism that a function would use for returning values; the non-deterministic values returned by the procedure are done so with variable parameters. For example, to define a procedure to echo once from standard in to out: echo = try (read string) (write string) error The value coming from standard-in is to be captured in the 'string' out-variable and so is available to be written to standard-out, the 'try' built-in being analogous to 'if'.
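Sven's remark that the quoted echo example "could be done in Haskell using IO Monads" can be made concrete. The following is an editorial sketch, not code from the proposal or the thread: like the quoted 'echo' procedure, it yields a status instead of throwing on failure.

```haskell
import Control.Exception (IOException, try)

-- Echo once from standard input to standard output.  As with the quoted
-- 'echo' procedure, the result is a status: Right () on success, or
-- Left <the IO error> if reading or writing fails.
echo :: IO (Either IOException ())
echo = try (getLine >>= putStrLn)
```

Here the library function `Control.Exception.try` plays roughly the role of the proposal's 'try' built-in, with `Either` carrying the 'okay'-or-error status.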
Rough (inconsistent) examples exist; its grammar and type system are in a slightly better state but not yet written up properly. What I could quickly add as an appendix, I have but the notation needs to be made more standard for easier comparison. I am working on another paper that will address the need for a more-concrete and standard presentation. I hope that this goes some way to answering your questions; please say if it doesn't! Rik On 22 October 2016 at 20:35, Richard Eisenberg > wrote: Hi Rik, I'm unsure what to make of your proposal, as it's hard for me to glean out what you're proposing. Do you have some sample programs written in your proposed language? What is the language's grammar? What is its type system (stated in terms of inference rules)? Having these concrete descriptors of the language would be very helpful in assessing this work. Richard On Oct 22, 2016, at 8:18 AM, Rik Howard > wrote: Dear Haskell Cafe Subscribers on the recommendation of someone for whom I have great respect, I have just subscribed to this list, it having been suggested as being a good place for me to get feedback regarding a project that I have been working on. I am humbled by the level of discussion and it feels to be a very bold step for me to request anybody's time for my words. The linked document is a four-page work-in-progress summary: the length being stipulated, potential novelty being the other main requirement. Given the requirements, the summary necessarily glosses over some details and is not yet, I fear, completely correct. The conclusion is, more or less, the one at which I am aiming; the properties are, more or less, the ones that are needed. http://www.dcs.bbk.ac.uk/~rik/gallery/work-in-progress/document.pdf The work arises from an investigation into functional programming syntax and semantics. The novelty seems to be there but there is too a question as to whether it is simply a gimmick. 
I try to suggest that it is not but, by that stage, there have been many assumptions so it is hard to be sure whether the suggestion is valid. If anyone has any comments, questions or suggestions, they would be gratefully received. Yours sincerely Rik Howard _______________________________________________ Haskell-Cafe mailing list To (un)subscribe, modify options or view archives go to: http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe Only members subscribed via the mailman list are allowed to post. -- Sven SAULEAU - Xtuc Développeur Web contact at xtuc.fr 06 28 69 51 44 www.xtuc.fr https://www.linkedin.com/in/svensauleau -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Sun Oct 23 11:58:09 2016 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sun, 23 Oct 2016 07:58:09 -0400 Subject: [Haskell-cafe] Off-the-shelf Configurable BFT In-Reply-To: References: Message-ID: There are no known-to-be-actually-correct BFT agreement algorithms implemented anywhere on the planet. In any language. On Saturday, October 22, 2016, Jonn Mostovoy wrote: > Dear all, > we're implementing a variant of cryptocurrency with delegated forgers. > It is distributed in building the ledger and decentralized in the nodes > participating in forming transactions. > For this we would need to have a BFT algorithm to govern the views on the > network and deal with partitions. > The implementation has to be industrial-strength and used in production. > We're willing to contribute actively to such implementation development. > We're looking at HoneyBadgerBFT as an ideal candidate, but it's not > implemented in Haskell.
> The only BFT algorithm implementation we found is a toy RaftBFT > implementation. > > If you have an open-source BFT implementation, or a proprietary BFT > implementation you would like to trade, please report those implementations > here. > > If you need more details on our use case, don't hesitate to ask. > Everything we do is open source, and my colleagues or I will gladly answer > any questions you might have. > > Sincerely yours, > Jonn Mostovoy, > Cardano SL Project Manager at IOHK | Serokell > > https://serokell.io https://iohk.io > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jo at durchholz.org Sun Oct 23 12:10:24 2016 From: jo at durchholz.org (Joachim Durchholz) Date: Sun, 23 Oct 2016 14:10:24 +0200 Subject: [Haskell-cafe] Off-the-shelf Configurable BFT In-Reply-To: References: Message-ID: <50cd129b-4a17-497b-21ad-97180e67ee57@durchholz.org> Am 23.10.2016 um 13:58 schrieb Carter Schonwald: > There are no known-to-be-actually-correct BFT agreement algorithms implemented > anywhere on the planet. In any language. This is not true for all variants of BFT. Of course, we don't know what specific kind of BFT problem the OP is trying to solve, and what trade-offs would be acceptable for him. From jo at durchholz.org Sun Oct 23 12:14:58 2016 From: jo at durchholz.org (Joachim Durchholz) Date: Sun, 23 Oct 2016 14:14:58 +0200 Subject: [Haskell-cafe] A Procedural-Functional Language (WIP) In-Reply-To: References: <0AAFE48F-89D4-46BB-9221-3E5D519316D4@cs.brynmawr.edu> Message-ID: <245a0baa-8b00-bf23-5398-986258fec4bb@durchholz.org> Am 23.10.2016 um 10:52 schrieb Rik Howard: > Hi Richard > > thank you for your reply. It is becoming apparent that my explanation > can be made clearer. I'm investigating a language that takes something > like core Haskell (a Lambda Calculus augmented with let blocks) to > satisfy its pure function requirements but that takes a different > approach when it comes to IO by employing procedures.
Are you aware how "monadic IO" became the standard in Haskell? It was one of three competing approaches, and AFAIK one turned out to be less useful, and the other simply wasn't ready in time (so it might still be interesting to investigate). > For IO, a procedure returns only 'okay' or an error via the mechanism > that a function would use for returning values; the non-deterministic > values returned by the procedure are done so with variable parameters. What's the advantage here? Given the obvious strong disadvantage that it forces callers into an idiom that uses updatable data structures, the advantage better be compelling. > For example, to define a procedure to echo once from standard in to out: > > echo = try (read string) (write string) error > > The value coming from standard-in is to be captured in the 'string' > out-variable and so is available to be written to standard-out, the > 'try' built-in being analogous to 'if'. What is the analogy? That stuff is evaluated only on a by-need basis? That's already there in Haskell. > Rough (inconsistent) examples exist; its grammar and type system are in > a slightly better state but not yet written up properly. What I could > quickly add as an appendix, I have but the notation needs to be made > more standard for easier comparison. I am working on another paper that > will address the need for a more-concrete and standard presentation. I > hope that this goes some way to answering your questions; please say if > it doesn't! Right now I fail to see what's new&better in this. Regards, Jo From adam at bergmark.nl Sun Oct 23 13:27:02 2016 From: adam at bergmark.nl (Adam Bergmark) Date: Sun, 23 Oct 2016 13:27:02 +0000 Subject: [Haskell-cafe] Left Associativity and Precendence In-Reply-To: <20161022213510.GH30310@weber> References: <20161022213510.GH30310@weber> Message-ID: > If precedence is not intuitively clear then its badly written code. It's not the fault of the reader. I think this is a subjective choice. 
I will assume that a reader understands that `1 + 2 * 3 = 7`. If he doesn't, I think he really ought to take the time to learn the precedences. Some precedences are obvious if you know the types involved; `a + b && c` cannot equal `a + (b && c)` in a working Haskell program; here you just need to know the types to figure it out. A reader will never know the precedence of an expression with operators he is not familiar with, but this shouldn't necessarily dictate how you write your code. This generalizes to a lot of decisions you need to make while programming. I'd recommend starting by learning the precedences of operators in Prelude and then continuing with the rest of base. The combinations of function application (space), function composition (.), and ($) are very important to know by heart. If you come across an operator you aren't familiar with, learn it by experimenting or looking it up. Try to parenthesize or unparenthesize, also for code you write yourself. You will build an intuition as you go. Cheers, Adam On Sat, Oct 22, 2016 at 11:35 PM Tom Ellis < tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk> wrote: > On Sat, Oct 22, 2016 at 02:58:35PM -0400, Steven Leiva wrote: > > I still don't think I'd be able to figure out "where the parentheses go" > if > > I was given a new expression with functions / operators with 3 or 4 > > different declared precedences. For a beginner, is that a huge problem, > or > > is knowledge of the concept enough? > > If precedence is not intuitively clear then it's badly written code. It's > not the fault of the reader. > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed...
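Adam's precedence points can be seen in a few lines with user-defined operators carrying explicit fixity declarations. This is a self-contained editorial sketch; the operator names are invented for illustration:

```haskell
-- The fixity declarations give (.*.) a higher precedence (7) than
-- (.+.) (6), so the unparenthesized expression groups as
-- 1 .+. (2 .*. 3), exactly mirroring 1 + 2 * 3.
infixl 6 .+.
infixl 7 .*.

(.+.), (.*.) :: Int -> Int -> Int
(.+.) = (+)
(.*.) = (*)

main :: IO ()
main = do
  print (1 .+. 2 .*. 3)    -- prints 7: groups as 1 .+. (2 .*. 3)
  print ((1 .+. 2) .*. 3)  -- prints 9: explicit parentheses override
```

Experimenting like this — adding or removing parentheses and comparing results — is one way to build the intuition Adam describes.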
URL: From jeffbrown.the at gmail.com Sun Oct 23 20:15:53 2016 From: jeffbrown.the at gmail.com (Jeffrey Brown) Date: Sun, 23 Oct 2016 13:15:53 -0700 Subject: [Haskell-cafe] Why is Megaparsec treating these two operators differently? Message-ID: I used Text.Megaparsec.Expr to write a minimal (56 lines) 2-operator recursive parser. It's on github[1]. It outputs a binary tree of the following form: data AExpr = Var String | Pair AExpr AExpr deriving (Show) The operator table supplied to makeExprParser defines two operators, # and ##. ## binds after #, but both of them refer to the same function, the Pair constructor of the AExpr data type: aOperators :: [[Operator Parser AExpr]] aOperators = [ [ InfixN # symbol "#" *> pure (Pair) ] , [ InfixN # symbol "##" *> pure (Pair) ] ] The # operator works in isolation: > parseMaybe aExpr "a # b" Just (Pair (Var "a") (Var "b")) Parentheses work with the # operator: > parseMaybe aExpr "(a # b) # (c # d)" Just (Pair (Pair (Var "a") (Var "b")) -- whitespace added by hand (Pair (Var "c") (Var "d"))) And the # and ## operators work together as intended: > parseMaybe aExpr "a # b ## c # d" Just (Pair (Pair (Var "a") (Var "b")) -- whitespace added by hand (Pair (Var "c") (Var "d"))) But the ## operator in isolation does not parse! > parseMaybe aExpr "a ## b" Nothing [1] https://github.com/JeffreyBenjaminBrown/digraphs-with-text/blob/master/howto/megaparsec/experim.hs -- Jeff Brown | Jeffrey Benjamin Brown Website | Facebook | LinkedIn (I often miss messages here) | Github -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Sun Oct 23 20:40:10 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Sun, 23 Oct 2016 21:40:10 +0100 Subject: [Haskell-cafe] A Procedural-Functional Language (WIP) In-Reply-To: <245a0baa-8b00-bf23-5398-986258fec4bb@durchholz.org> References: <0AAFE48F-89D4-46BB-9221-3E5D519316D4@cs.brynmawr.edu> <245a0baa-8b00-bf23-5398-986258fec4bb@durchholz.org> Message-ID: <20161023204010.GF4593@weber> On Sun, Oct 23, 2016 at 02:14:58PM +0200, Joachim Durchholz wrote: > Are you aware how "monadic IO" became the standard in Haskell? > It was one of three competing approaches, and AFAIK one turned out > to be less useful, and the other simply wasn't ready in time That's very tantalising. Can you link to a reference?! From allbery.b at gmail.com Sun Oct 23 21:03:06 2016 From: allbery.b at gmail.com (Brandon Allbery) Date: Sun, 23 Oct 2016 17:03:06 -0400 Subject: [Haskell-cafe] Why is Megaparsec treating these two operators differently? In-Reply-To: References: Message-ID: On Sun, Oct 23, 2016 at 4:15 PM, Jeffrey Brown wrote: > [ [ InfixN # symbol "#" *> pure (Pair) ] > , [ InfixN # symbol "##" *> pure (Pair) ] > ] > Combinator parsers can't rearrange themselves to do longest token matching. So the ## operator will take the first case, match against `symbol "#"`, and aOperator will succeed; then the next token match will hit the unconsumed "#" and fail. If you place "##" first then it will match "##" but not "#", which would then match the second rule. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jeffbrown.the at gmail.com Sun Oct 23 21:38:51 2016 From: jeffbrown.the at gmail.com (Jeffrey Brown) Date: Sun, 23 Oct 2016 14:38:51 -0700 Subject: [Haskell-cafe] Why is Megaparsec treating these two operators differently? In-Reply-To: References: Message-ID: Thanks, Brandon! How did you know that? I changed them to "#1" and "#2" and now it works[1]. But before making that change, why would "a # b ## c # d" evaluate, even though "a ## b" would not? [1] https://github.com/JeffreyBenjaminBrown/digraphs-with-text/tree/master/howto/megaparsec The corrected file is called "experim.hs"; the old one, uncorrected, is called "experim.buggy.hs". On Sun, Oct 23, 2016 at 2:03 PM, Brandon Allbery wrote: > > On Sun, Oct 23, 2016 at 4:15 PM, Jeffrey Brown > wrote: > >> [ [ InfixN # symbol "#" *> pure (Pair) ] >> , [ InfixN # symbol "##" *> pure (Pair) ] >> ] >> > > Combinator parsers can't rearrange themselves to do longest token > matching. So the ## operator will take the first case, match against > `symbol "#"` and aOperator will succeed; the the next token match will hit > the unconsumed "#" and fail. If you place "##" first then it will match > "##" but not "#", which would the match the second rule. > > -- > brandon s allbery kf8nh sine nomine > associates > allbery.b at gmail.com > ballbery at sinenomine.net > unix, openafs, kerberos, infrastructure, xmonad > http://sinenomine.net > -- Jeff Brown | Jeffrey Benjamin Brown Website | Facebook | LinkedIn (I often miss messages here) | Github -------------- next part -------------- An HTML attachment was scrubbed... URL: From rik at dcs.bbk.ac.uk Sun Oct 23 21:44:49 2016 From: rik at dcs.bbk.ac.uk (Rik Howard) Date: Sun, 23 Oct 2016 22:44:49 +0100 Subject: [Haskell-cafe] A Procedural-Functional Language (WIP) In-Reply-To: References: <0AAFE48F-89D4-46BB-9221-3E5D519316D4@cs.brynmawr.edu> Message-ID: Hi Sven thanks for the response. Some examples will definitely be produced. 
The syntax in the note is one that has been stripped down to support not much more than is needed for the note. On reflection, this does not make for a good presentation. A suggestion has been made to use the syntax of a standard reference to make easier a conceptual evaluation of the language. I think this could be a good way to clarify the ideas in the note; if they stand, a more-concrete (or practical) syntax could then be considered. The example could certainly be done in Haskell and many other languages. I'm not sure of the optimisation that you mention, so far the emphasis has been more on matters of semantics and, in the note, what effect the approach would have on a type system. You may be right about the readability, some examples would be useful. Coming soon! Best Rik On 23 October 2016 at 12:28, Sven SAULEAU wrote: > Hi, > > Seems interesting but I have difficulties to understand. Some concret > example would help. > > Your example could be done in Haskell using IO Monads. > > Wouldn’t procedures be inlined as a compilation optimisation in Haskell ? > > From my point of view your proposal will degrade code readability and > wouldn’t improve efficiency of the execution. > > As said earlier, I may have misunderstood. Feel free to correct me. > > Regards, > Sven > > On 23 Oct 2016, at 10:52, Rik Howard wrote: > > Hi Richard > > thank you for your reply. It is becoming apparent that my explanation can > be made clearer. I'm investigating a language that takes something like > core Haskell (a Lambda Calculus augmented with let blocks) to satisfy its > pure function requirements but that takes a different approach when it > comes to IO by employing procedures. > > For IO, a procedure returns only 'okay' or an error via the mechanism that > a function would use for returning values; the non-deterministic values > returned by the procedure are done so with variable parameters. 
For > example, to define a procedure to echo once from standard in to out: > > echo = try (read string) (write string) error > > The value coming from standard-in is to be captured in the 'string' > out-variable and so is available to be written to standard-out, the 'try' > built-in being analogous to 'if'. > > Rough (inconsistent) examples exist; its grammar and type system are in a > slightly better state but not yet written up properly. What I could > quickly add as an appendix, I have but the notation needs to be made more > standard for easier comparison. I am working on another paper that will > address the need for a more-concrete and standard presentation. I hope > that this goes some way to answering your questions; please say if it > doesn't! > > Rik > > > > > On 22 October 2016 at 20:35, Richard Eisenberg > wrote: > >> Hi Rik, >> >> I'm unsure what to make of your proposal, as it's hard for me to glean >> out what you're proposing. Do you have some sample programs written in your >> proposed language? What is the language's grammar? What is its type system >> (stated in terms of inference rules)? Having these concrete descriptors of >> the language would be very helpful in assessing this work. >> >> Richard >> >> On Oct 22, 2016, at 8:18 AM, Rik Howard wrote: >> >> Dear Haskell Cafe Subscribers >> >> on the recommendation of someone for whom I have great respect, I have >> just subscribed to this list, it having been suggested as being a good >> place for me to get feedback regarding a project that I have been working >> on. I am humbled by the level of discussion and it feels to be a very bold >> step for me to request anybody's time for my words. >> >> The linked document is a four-page work-in-progress summary: the length >> being stipulated, potential novelty being the other main requirement. >> Given the requirements, the summary necessarily glosses over some details >> and is not yet, I fear, completely correct. 
The conclusion is, more or >> less, the one at which I am aiming; the properties are, more or less, the >> ones that are needed. >> >> http://www.dcs.bbk.ac.uk/~rik/gallery/work-in-progress/document.pdf >> >> >> The work arises from an investigation into functional programming syntax >> and semantics. The novelty seems to be there but there is too a question >> as to whether it is simply a gimmick. I try to suggest that it is not but, >> by that stage, there have been many assumptions so it is hard to be sure >> whether the suggestion is valid. If anyone has any comments, questions or >> suggestions, they would be gratefully received. >> >> Yours sincerely >> Rik Howard >> >> _______________________________________________ >> Haskell-Cafe mailing list >> To (un)subscribe, modify options or view archives go to: >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> Only members subscribed via the mailman list are allowed to post. >> >> >> > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. > > > -- > > *Sven SAULEAU - Xtuc* > > > > Développeur Web > > contact at xtuc.fr > > 06 28 69 51 44 > > www.xtuc.fr > > https://www.linkedin.com/in/svensauleau > -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Sun Oct 23 21:50:16 2016 From: allbery.b at gmail.com (Brandon Allbery) Date: Sun, 23 Oct 2016 17:50:16 -0400 Subject: [Haskell-cafe] Why is Megaparsec treating these two operators differently? In-Reply-To: References: Message-ID: Aha. I had forgotten some details. If you want to have an operator that is a prefix of another operator in the > table, use the following (or similar) wrapper instead of plain symbol: > op n = (lexeme . 
try) (string n <* notFollowedBy punctuationChar) http://hackage.haskell.org/package/megaparsec-5.1.1/docs/Text-Megaparsec-Expr.html#v:makeExprParser So you actually need to be a little clever for those two operators to work; it's not as simple as I had recalled it (which would have been correct for a basic manual combinator setup). I am going to guess that something in there is not using `try` and silently consuming the extra "#", but I'd have to study the `makeExprParser` code in Megaparsec to be certain. On Sun, Oct 23, 2016 at 5:38 PM, Jeffrey Brown wrote: > Thanks, Brandon! How did you know that? > > I changed them to "#1" and "#2" and now it works[1]. > > But before making that change, why would "a # b ## c # d" evaluate, even > though "a ## b" would not? > > > [1] https://github.com/JeffreyBenjaminBrown/digraphs- > with-text/tree/master/howto/megaparsec > The corrected file is called "experim.hs"; the old one, uncorrected, is > called "experim.buggy.hs". > > On Sun, Oct 23, 2016 at 2:03 PM, Brandon Allbery > wrote: > >> >> On Sun, Oct 23, 2016 at 4:15 PM, Jeffrey Brown >> wrote: >> >>> [ [ InfixN # symbol "#" *> pure (Pair) ] >>> , [ InfixN # symbol "##" *> pure (Pair) ] >>> ] >>> >> >> Combinator parsers can't rearrange themselves to do longest token >> matching. So the ## operator will take the first case, match against >> `symbol "#"` and aOperator will succeed; the the next token match will hit >> the unconsumed "#" and fail. If you place "##" first then it will match >> "##" but not "#", which would the match the second rule. 
>> >> -- >> brandon s allbery kf8nh sine nomine >> associates >> allbery.b at gmail.com >> ballbery at sinenomine.net >> unix, openafs, kerberos, infrastructure, xmonad >> http://sinenomine.net >> > > > > -- > Jeff Brown | Jeffrey Benjamin Brown > Website | Facebook > | LinkedIn > (I often miss messages > here) | Github > -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From rik at dcs.bbk.ac.uk Sun Oct 23 22:12:14 2016 From: rik at dcs.bbk.ac.uk (Rik Howard) Date: Sun, 23 Oct 2016 23:12:14 +0100 Subject: [Haskell-cafe] A Procedural-Functional Language (WIP) In-Reply-To: References: <0AAFE48F-89D4-46BB-9221-3E5D519316D4@cs.brynmawr.edu> Message-ID: All -- thanks for the feedback: it has been thought-provoking and enlightening. There are some obvious deficiencies that can be addressed, which is probably worth doing before further discussion. Please feel free to email me directly, otherwise I think I've taken enough space for the meantime on this valuable thread. Best, R. On 23 October 2016 at 22:44, Rik Howard wrote: > Hi Sven > > thanks for the response. Some examples will definitely be produced. The > syntax in the note is one that has been stripped down to support not much > more than is needed for the note. On reflection, this does not make for a > good presentation. A suggestion has been made to use the syntax of a > standard reference to make easier a conceptual evaluation of the language. > I think this could be a good way to clarify the ideas in the note; if they > stand, a more-concrete (or practical) syntax could then be considered. The > example could certainly be done in Haskell and many other languages. 
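Taken together, Brandon's two messages suggest a fix along these lines. The sketch below is editorial, written against the megaparsec-5 API current at the time of the thread (module names and lexer details differ in later megaparsec versions, and it has not been run against the linked repository):

```haskell
import Control.Monad (void)
import Text.Megaparsec
import Text.Megaparsec.Expr
import Text.Megaparsec.String (Parser)
import qualified Text.Megaparsec.Lexer as L

data AExpr = Var String | Pair AExpr AExpr deriving (Show, Eq)

sc :: Parser ()  -- space consumer
sc = L.space (void spaceChar) (L.skipLineComment "--") (L.skipBlockComment "{-" "-}")

lexeme :: Parser a -> Parser a
lexeme = L.lexeme sc

-- The wrapper from the makeExprParser documentation, specialised here:
-- backtrack ('try') and refuse to match when another '#' follows, so
-- parsing "#" no longer consumes the first half of "##".
op :: String -> Parser String
op n = (lexeme . try) (string n <* notFollowedBy (char '#'))

var :: Parser AExpr
var = Var <$> lexeme (some letterChar)

aOperators :: [[Operator Parser AExpr]]
aOperators = [ [ InfixN (Pair <$ op "#")  ]  -- first level binds tighter
             , [ InfixN (Pair <$ op "##") ] ]

aExpr :: Parser AExpr
aExpr = makeExprParser var aOperators
-- With this, parseMaybe aExpr "a ## b" should yield
-- Just (Pair (Var "a") (Var "b")) instead of Nothing.
```

The key point is that `try` together with `notFollowedBy` makes the `"#"` rule fail cleanly (without consuming input) when it sees the first half of `"##"`, so the lower-precedence level gets its chance to match.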
I'm > not sure of the optimisation that you mention, so far the emphasis has been > more on matters of semantics and, in the note, what effect the approach > would have on a type system. You may be right about the readability, some > examples would be useful. Coming soon! > > Best > Rik > > > On 23 October 2016 at 12:28, Sven SAULEAU wrote: > >> Hi, >> >> Seems interesting but I have difficulties to understand. Some concret >> example would help. >> >> Your example could be done in Haskell using IO Monads. >> >> Wouldn’t procedures be inlined as a compilation optimisation in Haskell ? >> >> From my point of view your proposal will degrade code readability and >> wouldn’t improve efficiency of the execution. >> >> As said earlier, I may have misunderstood. Feel free to correct me. >> >> Regards, >> Sven >> >> On 23 Oct 2016, at 10:52, Rik Howard wrote: >> >> Hi Richard >> >> thank you for your reply. It is becoming apparent that my explanation >> can be made clearer. I'm investigating a language that takes something >> like core Haskell (a Lambda Calculus augmented with let blocks) to satisfy >> its pure function requirements but that takes a different approach when it >> comes to IO by employing procedures. >> >> For IO, a procedure returns only 'okay' or an error via the mechanism >> that a function would use for returning values; the non-deterministic >> values returned by the procedure are done so with variable parameters. For >> example, to define a procedure to echo once from standard in to out: >> >> echo = try (read string) (write string) error >> >> The value coming from standard-in is to be captured in the 'string' >> out-variable and so is available to be written to standard-out, the 'try' >> built-in being analogous to 'if'. >> >> Rough (inconsistent) examples exist; its grammar and type system are in a >> slightly better state but not yet written up properly. 
What I could >> quickly add as an appendix, I have but the notation needs to be made more >> standard for easier comparison. I am working on another paper that will >> address the need for a more-concrete and standard presentation. I hope >> that this goes some way to answering your questions; please say if it >> doesn't! >> >> Rik >> >> >> >> >> On 22 October 2016 at 20:35, Richard Eisenberg >> wrote: >> >>> Hi Rik, >>> >>> I'm unsure what to make of your proposal, as it's hard for me to glean >>> out what you're proposing. Do you have some sample programs written in your >>> proposed language? What is the language's grammar? What is its type system >>> (stated in terms of inference rules)? Having these concrete descriptors of >>> the language would be very helpful in assessing this work. >>> >>> Richard >>> >>> On Oct 22, 2016, at 8:18 AM, Rik Howard wrote: >>> >>> Dear Haskell Cafe Subscribers >>> >>> on the recommendation of someone for whom I have great respect, I have >>> just subscribed to this list, it having been suggested as being a good >>> place for me to get feedback regarding a project that I have been working >>> on. I am humbled by the level of discussion and it feels to be a very bold >>> step for me to request anybody's time for my words. >>> >>> The linked document is a four-page work-in-progress summary: the length >>> being stipulated, potential novelty being the other main requirement. >>> Given the requirements, the summary necessarily glosses over some details >>> and is not yet, I fear, completely correct. The conclusion is, more or >>> less, the one at which I am aiming; the properties are, more or less, the >>> ones that are needed. >>> >>> http://www.dcs.bbk.ac.uk/~rik/gallery/work-in-progress/document.pdf >>> >>> >>> The work arises from an investigation into functional programming syntax >>> and semantics. The novelty seems to be there but there is too a question >>> as to whether it is simply a gimmick. 
I try to suggest that it is not but, >>> by that stage, there have been many assumptions so it is hard to be sure >>> whether the suggestion is valid. If anyone has any comments, questions or >>> suggestions, they would be gratefully received. >>> >>> Yours sincerely >>> Rik Howard >>> >>> _______________________________________________ >>> Haskell-Cafe mailing list >>> To (un)subscribe, modify options or view archives go to: >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> Only members subscribed via the mailman list are allowed to post. >>> >>> >>> >> _______________________________________________ >> Haskell-Cafe mailing list >> To (un)subscribe, modify options or view archives go to: >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> Only members subscribed via the mailman list are allowed to post. >> >> >> -- >> >> *Sven SAULEAU - Xtuc* >> >> >> >> Développeur Web >> >> contact at xtuc.fr >> >> 06 28 69 51 44 >> >> www.xtuc.fr >> >> https://www.linkedin.com/in/svensauleau >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kc1956 at gmail.com Sun Oct 23 22:23:40 2016 From: kc1956 at gmail.com (KC) Date: Sun, 23 Oct 2016 15:23:40 -0700 Subject: [Haskell-cafe] A Procedural-Functional Language (WIP) In-Reply-To: References: Message-ID: You may want to look at Call-By-Push-Value A Functional/Imperative Synthesis By Springer -- -- Sent from an expensive device which will be obsolete in a few months! :D Casey On Oct 22, 2016 5:19 AM, "Rik Howard" wrote: > Dear Haskell Cafe Subscribers > > on the recommendation of someone for whom I have great respect, I have > just subscribed to this list, it having been suggested as being a good > place for me to get feedback regarding a project that I have been working > on. I am humbled by the level of discussion and it feels to be a very bold > step for me to request anybody's time for my words. 
> > The linked document is a four-page work-in-progress summary: the length > being stipulated, potential novelty being the other main requirement. > Given the requirements, the summary necessarily glosses over some details > and is not yet, I fear, completely correct. The conclusion is, more or > less, the one at which I am aiming; the properties are, more or less, the > ones that are needed. > > http://www.dcs.bbk.ac.uk/~rik/gallery/work-in-progress/document.pdf > > > The work arises from an investigation into functional programming syntax > and semantics. The novelty seems to be there but there is too a question > as to whether it is simply a gimmick. I try to suggest that it is not but, > by that stage, there have been many assumptions so it is hard to be sure > whether the suggestion is valid. If anyone has any comments, questions or > suggestions, they would be gratefully received. > > Yours sincerely > Rik Howard > > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jo at durchholz.org Mon Oct 24 04:53:58 2016 From: jo at durchholz.org (Joachim Durchholz) Date: Mon, 24 Oct 2016 06:53:58 +0200 Subject: [Haskell-cafe] A Procedural-Functional Language (WIP) In-Reply-To: <20161023204010.GF4593@weber> References: <0AAFE48F-89D4-46BB-9221-3E5D519316D4@cs.brynmawr.edu> <245a0baa-8b00-bf23-5398-986258fec4bb@durchholz.org> <20161023204010.GF4593@weber> Message-ID: <3db6342c-90aa-c308-ef0d-be01808d0e3e@durchholz.org> Am 23.10.2016 um 22:40 schrieb Tom Ellis: > On Sun, Oct 23, 2016 at 02:14:58PM +0200, Joachim Durchholz wrote: >> Are you aware how "monadic IO" became the standard in Haskell? 
>> It was one of three competing approaches, and AFAIK one turned out
>> to be less useful, and the other simply wasn't ready in time
>
> That's very tantalising. Can you link to a reference?!

I think it is in one or both of these:

Tackling the Awkward Squad
A History of Haskell: being lazy with class
A retrospective on Haskell

(It's been a decade since I last read them so they might not be exactly what you're after, but they should be close.)

From sven.sauleau at xtuc.fr Mon Oct 24 05:35:16 2016
From: sven.sauleau at xtuc.fr (Sven SAULEAU)
Date: Mon, 24 Oct 2016 05:35:16 +0000
Subject: [Haskell-cafe] A Procedural-Functional Language (WIP)
In-Reply-To: 
References: <0AAFE48F-89D4-46BB-9221-3E5D519316D4@cs.brynmawr.edu>
Message-ID: 

Hi,

Ok great. Thanks,

Regards,
Sven

On 23 Oct 2016, at 23:44, Rik Howard wrote:

Hi Sven

thanks for the response. Some examples will definitely be produced. The syntax in the note is one that has been stripped down to support not much more than is needed for the note. On reflection, this does not make for a good presentation. A suggestion has been made to use the syntax of a standard reference to make a conceptual evaluation of the language easier. I think this could be a good way to clarify the ideas in the note; if they stand, a more-concrete (or practical) syntax could then be considered.

The example could certainly be done in Haskell and many other languages. I'm not sure of the optimisation that you mention; so far the emphasis has been more on matters of semantics and, in the note, what effect the approach would have on a type system. You may be right about the readability; some examples would be useful. Coming soon!

Best
Rik

On 23 October 2016 at 12:28, Sven SAULEAU wrote:

Hi,

Seems interesting but I have difficulty understanding. Some concrete example would help.

Your example could be done in Haskell using IO Monads.

Wouldn’t procedures be inlined as a compilation optimisation in Haskell?
From my point of view your proposal will degrade code readability and wouldn’t improve efficiency of the execution. As said earlier, I may have misunderstood. Feel free to correct me. Regards, Sven On 23 Oct 2016, at 10:52, Rik Howard > wrote: Hi Richard thank you for your reply. It is becoming apparent that my explanation can be made clearer. I'm investigating a language that takes something like core Haskell (a Lambda Calculus augmented with let blocks) to satisfy its pure function requirements but that takes a different approach when it comes to IO by employing procedures. For IO, a procedure returns only 'okay' or an error via the mechanism that a function would use for returning values; the non-deterministic values returned by the procedure are done so with variable parameters. For example, to define a procedure to echo once from standard in to out: echo = try (read string) (write string) error The value coming from standard-in is to be captured in the 'string' out-variable and so is available to be written to standard-out, the 'try' built-in being analogous to 'if'. Rough (inconsistent) examples exist; its grammar and type system are in a slightly better state but not yet written up properly. What I could quickly add as an appendix, I have but the notation needs to be made more standard for easier comparison. I am working on another paper that will address the need for a more-concrete and standard presentation. I hope that this goes some way to answering your questions; please say if it doesn't! Rik On 22 October 2016 at 20:35, Richard Eisenberg > wrote: Hi Rik, I'm unsure what to make of your proposal, as it's hard for me to glean out what you're proposing. Do you have some sample programs written in your proposed language? What is the language's grammar? What is its type system (stated in terms of inference rules)? Having these concrete descriptors of the language would be very helpful in assessing this work. 
Richard On Oct 22, 2016, at 8:18 AM, Rik Howard > wrote: Dear Haskell Cafe Subscribers on the recommendation of someone for whom I have great respect, I have just subscribed to this list, it having been suggested as being a good place for me to get feedback regarding a project that I have been working on. I am humbled by the level of discussion and it feels to be a very bold step for me to request anybody's time for my words. The linked document is a four-page work-in-progress summary: the length being stipulated, potential novelty being the other main requirement. Given the requirements, the summary necessarily glosses over some details and is not yet, I fear, completely correct. The conclusion is, more or less, the one at which I am aiming; the properties are, more or less, the ones that are needed. http://www.dcs.bbk.ac.uk/~rik/gallery/work-in-progress/document.pdf The work arises from an investigation into functional programming syntax and semantics. The novelty seems to be there but there is too a question as to whether it is simply a gimmick. I try to suggest that it is not but, by that stage, there have been many assumptions so it is hard to be sure whether the suggestion is valid. If anyone has any comments, questions or suggestions, they would be gratefully received. Yours sincerely Rik Howard _______________________________________________ Haskell-Cafe mailing list To (un)subscribe, modify options or view archives go to: http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe Only members subscribed via the mailman list are allowed to post. _______________________________________________ Haskell-Cafe mailing list To (un)subscribe, modify options or view archives go to: http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe Only members subscribed via the mailman list are allowed to post. 
-- Sven SAULEAU - Xtuc Développeur Web contact at xtuc.fr 06 28 69 51 44 www.xtuc.fr https://www.linkedin.com/in/svensauleau -- Sven SAULEAU - Xtuc Développeur Web contact at xtuc.fr 06 28 69 51 44 www.xtuc.fr https://www.linkedin.com/in/svensauleau -------------- next part -------------- An HTML attachment was scrubbed... URL: From rik at dcs.bbk.ac.uk Mon Oct 24 06:22:40 2016 From: rik at dcs.bbk.ac.uk (Rik Howard) Date: Mon, 24 Oct 2016 07:22:40 +0100 Subject: [Haskell-cafe] A Procedural-Functional Language (WIP) In-Reply-To: References: Message-ID: Thanks, I will. R On 23 October 2016 at 23:23, KC wrote: > You may want to look at > > Call-By-Push-Value > A Functional/Imperative Synthesis > By Springer > > -- > -- > > Sent from an expensive device which will be obsolete in a few months! :D > > Casey > > > On Oct 22, 2016 5:19 AM, "Rik Howard" wrote: > >> Dear Haskell Cafe Subscribers >> >> on the recommendation of someone for whom I have great respect, I have >> just subscribed to this list, it having been suggested as being a good >> place for me to get feedback regarding a project that I have been working >> on. I am humbled by the level of discussion and it feels to be a very bold >> step for me to request anybody's time for my words. >> >> The linked document is a four-page work-in-progress summary: the length >> being stipulated, potential novelty being the other main requirement. >> Given the requirements, the summary necessarily glosses over some details >> and is not yet, I fear, completely correct. The conclusion is, more or >> less, the one at which I am aiming; the properties are, more or less, the >> ones that are needed. >> >> http://www.dcs.bbk.ac.uk/~rik/gallery/work-in-progress/document.pdf >> >> >> The work arises from an investigation into functional programming syntax >> and semantics. The novelty seems to be there but there is too a question >> as to whether it is simply a gimmick. 
I try to suggest that it is not but, >> by that stage, there have been many assumptions so it is hard to be sure >> whether the suggestion is valid. If anyone has any comments, questions or >> suggestions, they would be gratefully received. >> >> Yours sincerely >> Rik Howard >> >> >> _______________________________________________ >> Haskell-Cafe mailing list >> To (un)subscribe, modify options or view archives go to: >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> Only members subscribed via the mailman list are allowed to post. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From clintonmead at gmail.com Mon Oct 24 06:31:48 2016 From: clintonmead at gmail.com (Clinton Mead) Date: Mon, 24 Oct 2016 17:31:48 +1100 Subject: [Haskell-cafe] A Procedural-Functional Language (WIP) In-Reply-To: <245a0baa-8b00-bf23-5398-986258fec4bb@durchholz.org> References: <0AAFE48F-89D4-46BB-9221-3E5D519316D4@cs.brynmawr.edu> <245a0baa-8b00-bf23-5398-986258fec4bb@durchholz.org> Message-ID: Just curious, what was the IO approach that "wasn't ready in time"? > Are you aware how "monadic IO" became the standard in Haskell? > It was one of three competing approaches, and AFAIK one turned out to be > less useful, and the other simply wasn't ready in time (so it might still > be interesting to investigate). -------------- next part -------------- An HTML attachment was scrubbed... URL: From predictivestatmech at gmail.com Mon Oct 24 12:56:54 2016 From: predictivestatmech at gmail.com (David Rogers) Date: Mon, 24 Oct 2016 08:56:54 -0400 Subject: [Haskell-cafe] A composable, local graph representation as an open discussion Message-ID: <580E0516.7070202@gmail.com> Haskell-Cafe: I have been working on the following idea, and would appreciate any comments on the novelty or usefulness in your own applications. A scan of the usual Haskell documents turns up lots of clever data structures, but nothing particularly enlightening for graphs. 
Here is my attempt:

Graphs are difficult to represent in functional languages because they express arbitrary undirected connectivity between nodes, whereas functional code naturally expresses directed trees.

Most functional algorithms for graphs use an edge-list with global labels. Although effective, this method loses compositionality and does not exploit the type system for enforcing graph invariants such as consistency of the edge list.

This note presents a functional method for constructing a local representation for undirected graphs functionally as compositions of other graphs. The resulting data structure does not use unique node labels, but rather allows edge traversal from any node to its neighbor through a lookup function. Graph traversal then emerges as a discussion among static nodes. I have found this method useful for assembling sets of molecules in chemical simulations. It's also an interesting model for framing philosophical questions about the measurement problem in quantum physics.

As a disclaimer, although it is useful for constructing graphs, it is not obvious how common operations like graph copying or node deletion could be performed. This note does not discuss how to implement any graph algorithms.

> import qualified Prelude
> import Prelude hiding ((.))
> import Data.Semigroup(Semigroup,(<>))
> import Data.Tuple(swap)

First, I change the meaning of "." to be element access. I think this is a cleaner way to work with record data, and suggest that there should be a special way to use this syntax without making accessor names into global variables.

> infixl 9 .
> a . b = b a  -- switch to member access

Every subgraph has open ends, which we just number sequentially from zero. The lookup function provides the subgraph's window to the outside world. Its inputs reference outgoing connections. A subgraph, built as a composite of two subgraphs, will have the job of providing the correct lookup environment to both children.
> type Conn = Int
> newtype Lookup l = Lookup ( Conn -> (l, Lookup l) )

The tricky part is making the connections between the internal and external worlds. For the internal nodes to be complete, they must have access to complete external nodes. The problem is reversed for the external nodes.

A naive idea is to represent a graph using a reader monad parameterized over label and result types (l,r).

-- newtype Grph l r = Reader (Int -> (l, Lookup)) r

Unfortunately, this breaks down because the outside world also needs to be able to `look inside' the subgraph. The above approach runs into trouble when constructing the lookup function specific to each child. That lookup function needs the outside world, and the outside world can't be completed without the ability to look inside!

We capitulate to this symmetry between the graph and its environment by using a representation of a subgraph that provides both a top-down mechanism for using the graph as well as a bottom-up representation of the subgraph to the outside world.

> data Grph l r = Grph { runGrph :: Lookup l -> r,
>                        self :: Conn -> Lookup l -> (l, Lookup l),
>                        nopen :: Int
>                      }

The default action of `running' a graph is to run a local action on each node. That local function has access to the complete graph topology via the lookup function. Since we expect this to be a fold, the result type will probably be a monoid, or at least a semigroup. Any sub-graph can be run by specifying what to do with incomplete connections. At the top-level, there should not be `open' connections.

>--run g = (g.runGrph) $ Lookup (\ _ -> error "Tried to go out of top-level.")
> run g = (g.runGrph) $ u
>   where u = Lookup $ \ _ -> ("end", u)

Individual nodes are themselves subgraphs. Nodes must specify how many external connections can be made, as well as an arbitrary label and an action.
> node :: Int -> l -> ((l, Lookup l) -> r) -> Grph l r
> node n l run = Grph (\e -> run (l, e)) (\_ e -> (l, e)) n

Arbitrary graphs are constructed by joining two subgraphs. The key here is the construction of separate lookup environments for each subgraph. The left subgraph can be connected to the first few openings in the environment or to the right subgraph. The right subgraph can connect to the last few openings of the environment, or to the left subgraph. Each time an edge is traversed, a series of "env" calls are made -- sweeping upward until an internal connection happens. Then a downward sweep of "self" calls is made. This takes at best O(log|nodes|) operations.

Connections are specified by (Conn,Conn) pairs, so we need the ability to look up from the permutation or else to return the re-numbering after subtracting connections used by the permutation.

> type Permut = [(Conn, Conn)]
> find_fst :: Conn -> Permut -> Either Conn Conn
> find_fst = find1 0 where
>   find1 n a ((a',b):tl) | a == a' = Left b  -- internal
>   find1 n a ((a',_):tl) | a' < a = find1 (n+1) a tl
>   find1 n a (_:tl) = find1 n a tl
>   find1 n a [] = Right (a-n)  -- external
> find_snd b p = find_fst b (map swap p)

>-- append 2 subgraphs
> append :: (Semigroup r) => Permut -> Grph l r -> Grph l r -> Grph l r
> append p x y = Grph { runGrph = \(Lookup env) ->
>                           (x.runGrph) (e1 env)
>                           <> (y.runGrph) (e2 env),
>                       self = down,
>                       nopen = (x.nopen) + (y.nopen) - 2*(length p)
>                     }
>   where
>     down n (Lookup env) | n < ystart = (x.self) n (e1 env)
>     down n (Lookup env) = (y.self) (n-ystart) (e2 env)
>     e1 env = Lookup $ \n -> case find_fst n p of
>                  Right m -> env m
>                  Left m -> (y.self) m (e2 env)
>     e2 env = Lookup $ \n -> case find_snd n p of
>                  Right m -> env (m+ystart)
>                  Left m -> (x.self) m (e1 env)
>     ystart = (x.nopen) - length p  -- start of b's env. refs

This is a helper function for defining linear graphs.
> instance Semigroup r => Semigroup (Grph l r) where
>   (<>) = append [(1,0)]

A simple action is just to show the node labels and the labels of each immediate neighbor.

> show_node (l, Lookup env) = " " ++ show l
> show_env (l, Lookup env) = show l
>     ++ foldl (++) (":") (map (\u -> show_node(env u)) [0, 1])
>     ++ "\n"

The following example graphs are a list of 4 single nodes, two incomplete 2-member chains, and a complete 4-member cycle. The key feature here is that the graphs are all composable.

> c6 = [ node 2 ("C"++show n) show_env | n <- [1..4] ]
> str = c6!!0 <> c6!!1
> str' = c6!!2 <> c6!!3
> cyc = append [(1,0), (0,1)] str str'  -- Tying the knot.
> main = putStrLn $ run cyc

The connection to the measurement problem in quantum physics comes out because the final output of running any graph is deterministic, but can depend nontrivially on the graph's environment. Like links in the graph, physical systems communicate through their mutual interactions, and from those determine a new state a short time later. In a closed universe, the outcome is deterministic, while for an open system (subgraph), the outcome is probabilistic. The analogy suggests that understanding how probabilities emerge in the measurement problem requires a two-way communication channel between the system and its environment.

~ David M. Rogers

From holmisen at gmail.com Mon Oct 24 14:17:40 2016
From: holmisen at gmail.com (Johan Holmquist)
Date: Mon, 24 Oct 2016 16:17:40 +0200
Subject: [Haskell-cafe] A composable, local graph representation as an open discussion
In-Reply-To: <580E0516.7070202@gmail.com>
References: <580E0516.7070202@gmail.com>
Message-ID: 

The paper "Functional programming with structured graphs" might be of interest to you. It describes a way to build graphs with references back and forth. Can't provide link because my phone hides it...

Den 24 okt.
2016 14:57 skrev "David Rogers" : > Haskell-Cafe: > > I have been working on the following idea, and would appreciate > any comments on the novelty or usefulness in your own applications. > A scan of the usual Haskell documents turns up lots of clever data > structures, but nothing particularly enlightening for graphs. > Here is my attempt: > > > > Graphs are difficult to represent in functional languages > because they express arbitrary undirected connectivity between nodes, > whereas functional code naturally expresses directed trees. > > Most functional algorithms for graphs use an edge-list > with global labels. Although effective, this method > loses compositionality and does not exploit the type system > for enforcing graph invariants such as consistency of the edge list. > > This note presents a functional method for constructing > a local representation for undirected graphs functionally as > compositions of other graphs. The resulting data structure > does not use unique node labels, but rather allows edge traversal > from any node to its neighbor through a lookup function. > Graph traversal then emerges as a discussion among static > nodes. I have found this method useful for assembling sets > of molecules in chemical simulations. It's also an interesting > model for framing philosophical questions about the measurement > problem in quantum physics. > > As a disclaimer, although it is useful for constructing graphs, > it is not obvious how common operations like graph > copying or node deletion could be performed. This note > does not discuss how to implement any graph algorithms. > > import qualified Prelude >> import Prelude hiding ((.)) >> import Data.Semigroup(Semigroup,(<>)) >> import Data.Tuple(swap) >> > > First, I change the meaning of "." to be element access. > I think this is a cleaner way to work with record data, > and suggest that there should be a special way to use this > syntax without making accessor names into global variables. 
> > infixl 9 . >> a . b = b a -- switch to member access >> > > Every subgraph has open ends, which we just number > sequentially from zero. The lookup function > provides the subgraph's window to the outside world. > Its inputs reference outgoing connections. > A subgraph, built as a composite of two > subgraphs, will have the job of providing the correct > lookup environment to both children. > > type Conn = Int >> newtype Lookup l = Lookup ( Conn -> (l, Lookup l) ) >> > > The tricky part is making the connections between > the internal and external worlds. For the internal nodes to be complete, > they must have access to complete external nodes. The problem > is reversed for the external nodes. > > A naive idea is to represent a graph using > a reader monad parameterized over label > and result types (l,r). > -- newtype Grph l r = Reader (Int -> (l, Lookup)) r > Unfortunately, this breaks down > because the outside world also needs to be able to > `look inside' the subgraph. The above approach runs into trouble > when constructing the lookup function > specific to each child. That lookup function needs the outside world, > and the outside world can't be completed without the > ability to look inside! > > We capitulate to this symmetry between the graph and its environment > by using a representation of a subgraph that provides > both a top-down mechanism for using the graph > as well as a bottom-up representation of the subgraph > to the outside world. > > data Grph l r = Grph { runGrph :: Lookup l -> r, >> self :: Conn -> Lookup l -> (l, Lookup l), >> nopen :: Int >> } >> > > The default action of `running' a graph is to run a local action > on each node. That local function has access to the complete > graph topology via the lookup function. > Since we expect this to be a fold, the result type will > probably be a monoid, or at least a semigroup. > Any sub-graph can be run by specifying what to > do with incomplete connections. 
At the top-level, there > should not be `open' connections. > > --run g = (g.runGrph) $ Lookup (\ _ -> error "Tried to go out of >> > top-level.") > >> run g = (g.runGrph) $ u >> where u = Lookup $ \ _ -> ("end", u) >> > > Individual nodes are themselves subgraphs. > Nodes must specify how many external connections > can be made, as well as an arbitrary label and an action. > > node :: Int -> l -> ((l, Lookup l) -> r) -> Grph l r >> node n l run = Grph (\e -> run (l, e)) (\_ e -> (l, e)) n >> > > Arbitrary graphs are constructed by joining two subgraphs. > The key here is the construction of separate lookup > environments for the each subgraph. The left subgraph > can be connected to the first few openings in the environment > or to the right subgraph. The right subgraph can connect > to the last few openings of the environment, or to the > left subgraph. Each time an edge is traversed, > a series of "env" calls are made -- sweeping upward > until an internal connection happens. Then a downward > sweep of "self" calls are made. This takes at best > O(log|nodes|) operations. > > Connections are specified by (Conn,Conn) pairs, > so we need the ability to lookup from the permutation > or else to return the re-numbering after subtracting > connections used by the permutation. 
> > type Permut = [(Conn, Conn)] >> find_fst :: Conn -> Permut -> Either Conn Conn >> find_fst = find1 0 where >> find1 n a ((a',b):tl) | a == a' = Left b -- internal >> find1 n a ((a',_):tl) | a' < a = find1 (n+1) a tl >> find1 n a (_:tl) = find1 n a tl >> find1 n a [] = Right (a-n) -- external >> find_snd b p = find_fst b (map swap p) >> > > -- append 2 subgraphs >> append :: (Semigroup r) => Permut -> Grph l r -> Grph l r -> Grph l r >> append p x y = Grph { runGrph = \(Lookup env) -> >> (x.runGrph) (e1 env) >> <> (y.runGrph) (e2 env), >> self = down, >> nopen = (x.nopen) + (y.nopen) - 2*(length p) >> } >> where >> down n (Lookup env) | n < ystart = (x.self) n (e1 env) >> down n (Lookup env) = (y.self) (n-ystart) (e2 env) >> e1 env = Lookup $ \n -> case find_fst n p of >> Right m -> env m >> Left m -> (y.self) m (e2 env) >> e2 env = Lookup $ \n -> case find_snd n p of >> Right m -> env (m+ystart) >> Left m -> (x.self) m (e1 env) >> ystart = (x.nopen) - length p -- start of b's env. refs >> > > This is a helper function for defining linear graphs. > > instance Semigroup r => Semigroup (Grph l r) where >> (<>) = append [(1,0)] >> > > A simple action is just to show the node labels and > the labels of each immediate neighbor. > > show_node (l, Lookup env) = " " ++ show l >> show_env (l, Lookup env) = show l >> ++ foldl (++) (":") (map (\u -> show_node(env u)) [0, 1]) >> ++ "\n" >> > > The following example graphs are a list of 4 single nodes, > two incomplete 2-member chains, and a complete 4-member cycle. > The key feature here is that that the graphs are all composable. > > c6 = [ node 2 ("C"++show n) show_env | n <- [1..4] ] >> str = c6!!0 <> c6!!1 >> str' = c6!!2 <> c6!!3 >> cyc = append [(1,0), (0,1)] str str' -- Tying the knot. >> main = putStrLn $ run cyc >> > > The connection to the measurement problem in quantum physics > comes out because the final output of running any graph > is deterministic, but can depend nontrivially on the graph's environment. 
> Like links in the graph, physical systems communicate through
> their mutual interactions, and from those determine a new state
> a short time later. In a closed universe, the outcome is deterministic,
> while for an open system (subgraph), the outcome is probabilistic.
> The analogy suggests that understanding how probabilities
> emerge in the measurement problem requires a
> two-way communication channel between the system and its environment.
>
> ~ David M. Rogers
>
> _______________________________________________
> Haskell-Cafe mailing list
> To (un)subscribe, modify options or view archives go to:
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
> Only members subscribed via the mailman list are allowed to post.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rjljr2 at gmail.com Mon Oct 24 14:50:07 2016
From: rjljr2 at gmail.com (Ronald Legere)
Date: Mon, 24 Oct 2016 07:50:07 -0700
Subject: [Haskell-cafe] A Procedural-Functional Language (WIP)
In-Reply-To: 
References: <0AAFE48F-89D4-46BB-9221-3E5D519316D4@cs.brynmawr.edu> <245a0baa-8b00-bf23-5398-986258fec4bb@durchholz.org>
Message-ID: 

I must admit to some curiosity about this as well. My recollection was that the original approach was to use lazy streams:

IO :: [request] -> [response]

This can be managed a bit better using continuations (perhaps continuations can also be considered a separate approach?)

And now we have the IO Monad (which can be defined in terms of the stream-based approach but is not implemented that way).

The only other approach I am aware of is Clean's "Unique types".

On Sun, Oct 23, 2016 at 11:31 PM, Clinton Mead wrote:

> Just curious, what was the IO approach that "wasn't ready in time"?
>
>> Are you aware how "monadic IO" became the standard in Haskell?
>> It was one of three competing approaches, and AFAIK one turned out to be
>> less useful, and the other simply wasn't ready in time (so it might still
>> be interesting to investigate).
>
> _______________________________________________
> Haskell-Cafe mailing list
> To (un)subscribe, modify options or view archives go to:
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
> Only members subscribed via the mailman list are allowed to post.

--
Ron Legere (rjljr2 at gmail.com)
*C'est le temps que tu as perdu pour ta rose qui fait ta rose si importante*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From S.J.Thompson at kent.ac.uk Mon Oct 24 16:40:13 2016
From: S.J.Thompson at kent.ac.uk (Simon Thompson)
Date: Mon, 24 Oct 2016 17:40:13 +0100
Subject: [Haskell-cafe] A Procedural-Functional Language (WIP)
In-Reply-To: 
References: <0AAFE48F-89D4-46BB-9221-3E5D519316D4@cs.brynmawr.edu> <245a0baa-8b00-bf23-5398-986258fec4bb@durchholz.org>
Message-ID: <8A0261F2-917E-4E34-9078-7F7E3FC0E8FB@kent.ac.uk>

Miranda modelled IO as a (lazy) function from [Char] -> [Char] … pretty unwieldy to use “raw”, but it was made controllable by a set of combinators that could be seen as a proto-monadic library.

Simon Y.

> On 24 Oct 2016, at 15:50, Ronald Legere wrote:
>
> I must admit to some curiosity about this as well. My recollection was that the original approach was to use lazy streams:
>
> IO :: [request] -> [response]
>
> This can be managed a bit better using continuations (perhaps continuations can also be considered a separate approach?)
>
> And now we have the IO Monad (which can be defined in terms of the stream-based approach but is not implemented that way).
>
> The only other approach I am aware of is Clean's "Unique types".
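For readers following the thread, the two historical models mentioned above can be sketched in a few lines of modern Haskell. The `Request`/`Response` constructors below are illustrative stand-ins, not the actual (richer) types from the early Haskell reports, and `interact` is the form in which the Miranda-style `[Char] -> [Char]` model survives in Haskell today:

```haskell
import Data.Char (toUpper)

-- Illustrative reconstruction of pre-monadic "dialogue" IO: a program
-- is a lazy function from the system's responses to its requests.
data Request  = ReadLine | WriteLine String deriving (Eq, Show)
data Response = Line String | Ok            deriving (Eq, Show)

-- Echo one line: emit the read request before inspecting any response.
-- Laziness matters here -- the first response answers the first request.
echoProgram :: [Response] -> [Request]
echoProgram resps = ReadLine : case resps of
  (Line s : _) -> [WriteLine s]
  _            -> []

-- The Miranda-style model is the character-stream special case, still
-- exposed by Haskell as interact :: (String -> String) -> IO ().
shout :: String -> String
shout = map toUpper

main :: IO ()
main = interact shout
```

The need to commit to a request before its response exists is exactly what made the dialogue style error-prone to use directly, and is part of why the monadic interface won out.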
Simon Thompson | Professor of Logic and Computation School of Computing | University of Kent | Canterbury, CT2 7NF, UK s.j.thompson at kent.ac.uk | M +44 7986 085754 | W www.cs.kent.ac.uk/~sjt -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Mon Oct 24 22:06:19 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Mon, 24 Oct 2016 23:06:19 +0100 Subject: [Haskell-cafe] What did you/do you find hard/dislike about Opaleye? Message-ID: <20161024220619.GK4593@weber> I'm planning to update and refresh Opaleye's documentation and add some new functionality for common use cases. To help with this I'd like to request input from anyone who has ever tried the library. Specifically, what do you (or did you) find hard about using Opaleye and what did you dislike about it? If you tried it and gave up, what was the major sticking point? I'm already aware that many people dislike the boilerplate involved in defining your tables and types, and the polymorphic products are particularly uncomfortable for some. You don't need to mention these issues unless you particularly want to! Feel free to contact me privately or reply directly. Many thanks. Tom From ivan.miljenovic at gmail.com Mon Oct 24 22:49:55 2016 From: ivan.miljenovic at gmail.com (Ivan Lazar Miljenovic) Date: Tue, 25 Oct 2016 09:49:55 +1100 Subject: [Haskell-cafe] A composable, local graph representation as an open discussion In-Reply-To: <580E0516.7070202@gmail.com> References: <580E0516.7070202@gmail.com> Message-ID: On 24 October 2016 at 23:56, David Rogers wrote: > Haskell-Cafe: > > I have been working on the following idea, and would appreciate > any comments on the novelty or usefulness in your own applications. > A scan of the usual Haskell documents turns up lots of clever data > structures, but nothing particularly enlightening for graphs. 
> Here is my attempt: I haven't looked through your entire email in detail, but from a quick skim there's a few interesting ideas I want to play with. > Graphs are difficult to represent in functional languages > because they express arbitrary undirected connectivity between nodes, > whereas functional code naturally expresses directed trees. > > Most functional algorithms for graphs use an edge-list > with global labels. Although effective, this method > loses compositionality and does not exploit the type system > for enforcing graph invariants such as consistency of the edge list. > > This note presents a functional method for constructing > a local representation for undirected graphs functionally as > compositions of other graphs. The resulting data structure > does not use unique node labels, From practice, I've found that unique node labels are extremely important/useful; so are unique edge labels. As such, this means that this representation may not be sufficient for general graph processing. > but rather allows edge traversal > from any node to its neighbor through a lookup function. > Graph traversal then emerges as a discussion among static > nodes. I have found this method useful for assembling sets > of molecules in chemical simulations. It's also an interesting > model for framing philosophical questions about the measurement > problem in quantum physics. > > As a disclaimer, although it is useful for constructing graphs, > it is not obvious how common operations like graph > copying or node deletion could be performed. This note > does not discuss how to implement any graph algorithms. > >> import qualified Prelude >> import Prelude hiding ((.)) >> import Data.Semigroup(Semigroup,(<>)) >> import Data.Tuple(swap) > > > First, I change the meaning of "." to be element access.
> I think this is a cleaner way to work with record data, > and suggest that there should be a special way to use this > syntax without making accessor names into global variables. > >> infixl 9 . >> a . b = b a -- switch to member access > > > Every subgraph has open ends, which we just number > sequentially from zero. The lookup function > provides the subgraph's window to the outside world. > Its inputs reference outgoing connections. > A subgraph, built as a composite of two > subgraphs, will have the job of providing the correct > lookup environment to both children. > >> type Conn = Int >> newtype Lookup l = Lookup ( Conn -> (l, Lookup l) ) > > > The tricky part is making the connections between > the internal and external worlds. For the internal nodes to be complete, > they must have access to complete external nodes. The problem > is reversed for the external nodes. > > A naive idea is to represent a graph using > a reader monad parameterized over label > and result types (l,r). > -- newtype Grph l r = Reader (Int -> (l, Lookup)) r > Unfortunately, this breaks down > because the outside world also needs to be able to > `look inside' the subgraph. The above approach runs into trouble > when constructing the lookup function > specific to each child. That lookup function needs the outside world, > and the outside world can't be completed without the > ability to look inside! > > We capitulate to this symmetry between the graph and its environment > by using a representation of a subgraph that provides > both a top-down mechanism for using the graph > as well as a bottom-up representation of the subgraph > to the outside world. > >> data Grph l r = Grph { runGrph :: Lookup l -> r, >> self :: Conn -> Lookup l -> (l, Lookup l), >> nopen :: Int >> } > > > The default action of `running' a graph is to run a local action > on each node. That local function has access to the complete > graph topology via the lookup function. 
> Since we expect this to be a fold, the result type will > probably be a monoid, or at least a semigroup. > Any sub-graph can be run by specifying what to > do with incomplete connections. At the top-level, there > should not be `open' connections. > >> --run g = (g.runGrph) $ Lookup (\ _ -> error "Tried to go out of top-level.") >> >> run g = (g.runGrph) $ u >> where u = Lookup $ \ _ -> ("end", u) > > > Individual nodes are themselves subgraphs. > Nodes must specify how many external connections > can be made, as well as an arbitrary label and an action. > >> node :: Int -> l -> ((l, Lookup l) -> r) -> Grph l r >> node n l run = Grph (\e -> run (l, e)) (\_ e -> (l, e)) n > > > Arbitrary graphs are constructed by joining two subgraphs. > The key here is the construction of separate lookup > environments for each subgraph. The left subgraph > can be connected to the first few openings in the environment > or to the right subgraph. The right subgraph can connect > to the last few openings of the environment, or to the > left subgraph. Each time an edge is traversed, > a series of "env" calls is made -- sweeping upward > until an internal connection happens. Then a downward > sweep of "self" calls is made. This takes at best > O(log|nodes|) operations. > > Connections are specified by (Conn,Conn) pairs, > so we need the ability to look up from the permutation > or else to return the re-numbering after subtracting > connections used by the permutation.
> >> type Permut = [(Conn, Conn)] >> find_fst :: Conn -> Permut -> Either Conn Conn >> find_fst = find1 0 where >> find1 n a ((a',b):tl) | a == a' = Left b -- internal >> find1 n a ((a',_):tl) | a' < a = find1 (n+1) a tl >> find1 n a (_:tl) = find1 n a tl >> find1 n a [] = Right (a-n) -- external >> find_snd b p = find_fst b (map swap p) > > >> -- append 2 subgraphs >> append :: (Semigroup r) => Permut -> Grph l r -> Grph l r -> Grph l r >> append p x y = Grph { runGrph = \(Lookup env) -> >> (x.runGrph) (e1 env) >> <> (y.runGrph) (e2 env), >> self = down, >> nopen = (x.nopen) + (y.nopen) - 2*(length p) >> } >> where >> down n (Lookup env) | n < ystart = (x.self) n (e1 env) >> down n (Lookup env) = (y.self) (n-ystart) (e2 env) >> e1 env = Lookup $ \n -> case find_fst n p of >> Right m -> env m >> Left m -> (y.self) m (e2 env) >> e2 env = Lookup $ \n -> case find_snd n p of >> Right m -> env (m+ystart) >> Left m -> (x.self) m (e1 env) >> ystart = (x.nopen) - length p -- start of y's env. refs > > > This is a helper function for defining linear graphs. > >> instance Semigroup r => Semigroup (Grph l r) where >> (<>) = append [(1,0)] > > > A simple action is just to show the node labels and > the labels of each immediate neighbor. > >> show_node (l, Lookup env) = " " ++ show l >> show_env (l, Lookup env) = show l >> ++ foldl (++) (":") (map (\u -> show_node(env u)) [0, 1]) >> ++ "\n" > > > The following example graphs are a list of 4 single nodes, > two incomplete 2-member chains, and a complete 4-member cycle. > The key feature here is that the graphs are all composable. > >> c6 = [ node 2 ("C"++show n) show_env | n <- [1..4] ] >> str = c6!!0 <> c6!!1 >> str' = c6!!2 <> c6!!3 >> cyc = append [(1,0), (0,1)] str str' -- Tying the knot. >> main = putStrLn $ run cyc > > > The connection to the measurement problem in quantum physics > comes out because the final output of running any graph > is deterministic, but can depend nontrivially on the graph's environment.
> Like links in the graph, physical systems communicate through > their mutual interactions, and from those determine a new state > a short time later. In a closed universe, the outcome is deterministic, > while for an open system (subgraph), the outcome is probabilistic. > The analogy suggests that understanding how probabilities > emerge in the measurement problem requires a > two-way communication channel between the system and its environment. > > ~ David M. Rogers > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -- Ivan Lazar Miljenovic Ivan.Miljenovic at gmail.com http://IvanMiljenovic.wordpress.com From rik at dcs.bbk.ac.uk Tue Oct 25 07:12:09 2016 From: rik at dcs.bbk.ac.uk (Rik Howard) Date: Tue, 25 Oct 2016 08:12:09 +0100 Subject: [Haskell-cafe] A Procedural-Functional Language (WIP) In-Reply-To: References: Message-ID: Hello Jo apologies for the delayed reply! Thank you for your time and thoughts. (I'd not realised that some messages come only through the digest. And apologies generally for taking more bandwidth after I said I wouldn't -- I hadn't realised then that there were emails lacking a response; this might be tl;dr as it is a response to two.) It is a programme for designing a programming language. > From about this point in the design of the language, all you can do can be defined as functions (e.g., logical negation). So introducing a symbol for an operation becomes something between syntactic sugar and an opportunity for optimisation. At least, that's the thought process so far.
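That design step -- primitives as ordinary functions, with operator symbols as mere sugar -- is easy to demonstrate in Haskell (the names below are invented for the sketch, not taken from the note's language):

```haskell
-- Once abstraction and application are in place, "built-in"
-- operations are definable as ordinary functions.
myNot :: Bool -> Bool
myNot True  = False
myNot False = True

-- Under lazy evaluation even the conditional is just a function:
-- the branch not taken is never evaluated.
ifThenElse :: Bool -> a -> a -> a
ifThenElse True  t _ = t
ifThenElse False _ e = e

-- An infix symbol is then only an alternative spelling.
(&&&&) :: Bool -> Bool -> Bool
a &&&& b = ifThenElse a b False
```

Because `ifThenElse` is lazy in both branches, `ifThenElse True 3 undefined` evaluates to 3 -- which is why the conditional can be an ordinary function at all.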
It is leaving out a number of central issues: How to approach modularity The note needs to be more clear that abstraction, application and let-blocks are an extended Lambda calculus so giving higher-order functions; it is to be lazy (or normal order, at least). A packaging system has been conceived for the language, as have some other features, some of which may bear traces of novelty, but space is tight and the variable parameters need to get treated first. The note tends to focus on the would-be type system because that is where the presence of out-vars makes itself most apparent. whether it should have opaque types (and why), whether there should be > subtypes or not, how the type system is supposed to deal with arithmetic > which has almost-compatible integer types and floating-point types. That's > just off the top of my head, I am pretty sure that there are other issues. > It is hard to discuss merits or problems at this stage, since all of these > issues tend to influence each other. There are gaps in the design. This step is to show that at least those considerations haven't been precluded in some way by the mix of features. One thing I have heard is that effects, subtypes and type system soundness > do not mix well. Subtypes are too useful to ignore, unsound types systems > are not worth the effort, so I find it a bit surprising that the paper has > nothing to say about the issue. I'm not sure what you mean by effects (may I ask you to elaborate? Or side effects maybe?) but subtypes would appear to offer an intuitive analogy with set theory. It would mean extra look-ups in the deciding function to check inclusion, possibly using some sort of 'narrowable' type, and that would make showing soundness that much more involved. Are there other beyond-the-norm complications? Are you aware how "monadic IO" became the standard in Haskell? 
It was one > of three competing approaches, and AFAIK one turned out to be less useful, > and the other simply wasn't ready in time (so it might still be interesting > to investigate). No, I'm not, it sounds fascinating. Thank you for subsequently providing references. > For IO, ... variable parameters. What's the advantage here? Given the obvious strong disadvantage that it > forces callers into an idiom that uses updatable data structures, the > advantage better be compelling. The out-vars are the same as other variables in terms of updating: they have to be fresh on the way in and can't be modified after coming out -- I should make that more clear -- or was that not what you meant? The difference (I don't know that it can be called an advantage) is that IO can be done pretty much wherever-whenever but the insistence of a try-then-else for penultimate invocations forces that doing not to go unnoticed. >... the 'try' built-in being analogous to 'if'. What is the analogy? That stuff is evaluated only on a by-need basis? > That's already there in Haskell. Yes. And at the risk of labouring the point, 'if' has a true-false condition determining which expression to evaluate; 'try' has an okay-error condition for the same. Right now I fail to see what's new&better in this. Some languages allow IO expressions without any further thought being paid to the matter; some provide explicit mechanisms for dealing with IO. The language in the note takes a mid-way approach, in some sense, that I'm not familiar with from elsewhere. Assuming that this approach isn't in a language that I should know by now, could the approach not count as new? It may be irrelevant on some level, I suppose. I hope that this goes some way towards being an adequate response. Once again, thank you for your invaluable feedback -- much appreciated! R On 24 October 2016 at 07:22, Rik Howard wrote: > Thanks, I will.
> > R > > > On 23 October 2016 at 23:23, KC wrote: > >> You may want to look at >> >> Call-By-Push-Value >> A Functional/Imperative Synthesis >> By Springer >> >> -- >> -- >> >> Sent from an expensive device which will be obsolete in a few months! :D >> >> Casey >> >> >> On Oct 22, 2016 5:19 AM, "Rik Howard" wrote: >> >>> Dear Haskell Cafe Subscribers >>> >>> on the recommendation of someone for whom I have great respect, I have >>> just subscribed to this list, it having been suggested as being a good >>> place for me to get feedback regarding a project that I have been working >>> on. I am humbled by the level of discussion and it feels to be a very bold >>> step for me to request anybody's time for my words. >>> >>> The linked document is a four-page work-in-progress summary: the length >>> being stipulated, potential novelty being the other main requirement. >>> Given the requirements, the summary necessarily glosses over some details >>> and is not yet, I fear, completely correct. The conclusion is, more or >>> less, the one at which I am aiming; the properties are, more or less, the >>> ones that are needed. >>> >>> http://www.dcs.bbk.ac.uk/~rik/gallery/work-in-progress/document.pdf >>> >>> >>> The work arises from an investigation into functional programming syntax >>> and semantics. The novelty seems to be there but there is too a question >>> as to whether it is simply a gimmick. I try to suggest that it is not but, >>> by that stage, there have been many assumptions so it is hard to be sure >>> whether the suggestion is valid. If anyone has any comments, questions or >>> suggestions, they would be gratefully received. >>> >>> Yours sincerely >>> Rik Howard >>> >>> >>> _______________________________________________ >>> Haskell-Cafe mailing list >>> To (un)subscribe, modify options or view archives go to: >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> Only members subscribed via the mailman list are allowed to post. 
>>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ollie at ocharles.org.uk Tue Oct 25 11:49:13 2016 From: ollie at ocharles.org.uk (Oliver Charles) Date: Tue, 25 Oct 2016 12:49:13 +0100 Subject: [Haskell-cafe] What did you/do you find hard/dislike about Opaleye? In-Reply-To: <20161024220619.GK4593@weber> References: <20161024220619.GK4593@weber> Message-ID: On Mon, Oct 24, 2016 at 11:06 PM, Tom Ellis < tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk> wrote: > I'm planning to update and refresh Opaleye's documentation and add some new > functionality for common use cases. To help with this I'd like to request > input from anyone who has ever tried the library. > > Specifically, what do you (or did you) find hard about using Opaleye and > what did you dislike about it? If you tried it and gave up, what was the > major sticking point? > I dislike the whole `Default` type class stuff and have a very hard time reasoning about what is going on behind it. I understand it's basically creating "n-ary structures" (the description is as vague as my understanding), but I still struggle with it. My preference here is specific type classes for the operations that need constraints on what a table is (which could be derived generically on base data types). Worse, the lack of type inference on left joins is an absolute killer. Knowing that my application will ultimately need a left join at some point makes me very uneasy about introducing Opaleye, because I just know how frustrating it's going to be when I get to that point. I dislike the need for arrows (and let's be honest, it really is a need - using just functor/applicative/category leads to even less readable code), but as we both know - no one has found a viable alternative yet. > I'm already aware that many people dislike the boilerplate involved in > defining your tables and types, and the polymorphic products are > particularly uncomfortable for some.
You don't need to mention these issues > unless you particularly want to! > I do want to, because they prevent me from using the library as is. Instead, I use it as an implementation layer and have to roll my own API on top. I hope this is constructive, I don't intend this to be just a rant. I am still using Opaleye, in spite of these issues! - ocharles -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Tue Oct 25 11:56:13 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Tue, 25 Oct 2016 12:56:13 +0100 Subject: [Haskell-cafe] What did you/do you find hard/dislike about Opaleye? In-Reply-To: References: <20161024220619.GK4593@weber> Message-ID: <20161025115613.GO4593@weber> On Tue, Oct 25, 2016 at 12:49:13PM +0100, Oliver Charles wrote: > On Mon, Oct 24, 2016 at 11:06 PM, Tom Ellis < > tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk> wrote: > > > I'm planning to update and refresh Opaleye's documentation and add some new > > functionality for common use cases. To help with this I'd like to request > > input from anyone who has ever tried the library. > > > > Specifically, what do you (or did you) find hard about using Opaleye and > > what did you dislike about it? If you tried it and gave up, what was the > > major sticking point? > > I dislike the whole `Default` type class stuff and have a very hard time > reasoning about what is going on behind it. I understand it's basically > creating "n-ary structures" (the description is as vague as my > understanding), but I still struggle with it. Hi Ollie, Thanks for the feedback. Have you read the "Default Explanation" document? https://github.com/tomjaguarpaw/haskell-opaleye/blob/master/Doc/Tutorial/DefaultExplanation.lhs Does that help you understand what's going on behind Default? Is there still something missing that could help your understanding? 
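The idea behind such a class can be shown without any Opaleye specifics. Everything below is invented for the sketch -- a cut-down class and a toy two-parameter type, not Opaleye's real API -- but it shows the "n-ary structure" trick: a single overloaded default value, assembled automatically for product types from the defaults of their components.

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleInstances #-}

-- Simplified stand-in for the Default idea: an overloaded
-- "default value" of a two-parameter structure.
class Default p a b where
  def :: p a b

-- A toy two-parameter structure: just a wrapped function.
newtype Fn a b = Fn { runFn :: a -> b }

instance Default Fn Int String where
  def = Fn show

instance Default Fn Bool String where
  def = Fn (\b -> if b then "t" else "f")

-- The payoff: a default for a pair comes for free from the
-- defaults of its components, and so on for larger products.
instance (Default Fn a c, Default Fn b d)
      => Default Fn (a, b) (c, d) where
  def = Fn (\(a, b) -> (runFn def a, runFn def b))
```

With these instances, `def :: Fn (Int, Bool) (String, String)` is built entirely by instance resolution -- which is both the appeal (no boilerplate at use sites) and the source of the hard-to-read error messages when resolution fails.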
Tom From ollie at ocharles.org.uk Tue Oct 25 12:29:48 2016 From: ollie at ocharles.org.uk (Oliver Charles) Date: Tue, 25 Oct 2016 12:29:48 +0000 Subject: [Haskell-cafe] What did you/do you find hard/dislike about Opaleye? In-Reply-To: <20161025115613.GO4593@weber> References: <20161024220619.GK4593@weber> <20161025115613.GO4593@weber> Message-ID: On Tue, Oct 25, 2016 at 12:56 PM Tom Ellis < tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk> wrote: > Hi Ollie, > > Thanks for the feedback. Have you read the "Default Explanation" document? > > > https://github.com/tomjaguarpaw/haskell-opaleye/blob/master/Doc/Tutorial/DefaultExplanation.lhs > > Does that help you understand what's going on behind Default? Is there > still something missing that could help your understanding? > Hi, I wasn't aware of this documentation, which I suppose is another point - there should be a central location for complete documentation. For me, I expect everything relevant to be linked from the top-level module on Hackage (Opaleye, in this case). - ocharles -------------- next part -------------- An HTML attachment was scrubbed... URL: From predictivestatmech at gmail.com Tue Oct 25 13:44:50 2016 From: predictivestatmech at gmail.com (David Rogers) Date: Tue, 25 Oct 2016 09:44:50 -0400 Subject: [Haskell-cafe] A composable, local graph representation as an open discussion In-Reply-To: References: <580E0516.7070202@gmail.com> Message-ID: <580F61D2.6040200@gmail.com> On 10/24/16 6:49 PM, Ivan Lazar Miljenovic wrote: > On 24 October 2016 at 23:56, David Rogers wrote: >> Haskell-Cafe: >> >> I have been working on the following idea, and would appreciate >> any comments on the novelty or usefulness in your own applications. >> A scan of the usual Haskell documents turns up lots of clever data >> structures, but nothing particularly enlightening for graphs.
>> Here is my attempt: > I haven't looked through your entire email in detail, but from a quick > skim there's a few interesting ideas I want to play with. > >> Graphs are difficult to represent in functional languages >> because they express arbitrary undirected connectivity between nodes, >> whereas functional code naturally expresses directed trees. >> >> Most functional algorithms for graphs use an edge-list >> with global labels. Although effective, this method >> loses compositionality and does not exploit the type system >> for enforcing graph invariants such as consistency of the edge list. >> >> This note presents a functional method for constructing >> a local representation for undirected graphs functionally as >> compositions of other graphs. The resulting data structure >> does not use unique node labels, > From practice, I've found that unique node labels are extremely > important/useful; so are unique edge labels. As such, this means that > this representation may not be sufficient for general graph > processing. I started with a version of the code that generates sequential node numbers. It requires 2 changes. First, the Grph structure has to store a count of total internal nodes. Second, the run and env functions must pass the starting number to each sub-graph. It's easy to see that this generates sequential numbers, since the run function does a tree-traversal down to the nodes, and the number of internal nodes is known for each subgraph. ~ David From monkleyon at googlemail.com Tue Oct 25 13:55:07 2016 From: monkleyon at googlemail.com (MarLinn) Date: Tue, 25 Oct 2016 15:55:07 +0200 Subject: [Haskell-cafe] A composable, local graph representation as an open discussion In-Reply-To: <580E0516.7070202@gmail.com> References: <580E0516.7070202@gmail.com> Message-ID: <5c513b22-b462-0d9c-21dc-bd4b7b9ea6f7@gmail.com> Hi, first of all, this is an interesting idea. > Most functional algorithms for graphs use an edge-list > with global labels. 
Although effective, this method > loses compositionality and does not exploit the type system > for enforcing graph invariants such as consistency of the edge list. I understand the argument, but aren't you still using global labels? Or rather, global numbering. Doesn't that defeat the purpose? Therefore I propose to replace >> type Conn = Int with > type Port = Int > data Connector l r = InternalConn Port | ExternalConn (Graph l r) Port or maybe, to make organizing simpler > data Connection l r = Internal Port Port | Outgoing Port (Graph l r) Port | External (Graph l r) Port (Graph l r) Port Now lookup via list traversals makes less sense, but then I would propose you store different types of connections separately anyway. A second thing to note is that there seem to be only three general ways to implement graphs in Haskell (/purely functional languages): adjacency lists/matrices, tying the knot, or with native pointers through the FFI (I haven't seen that one in the wild though). You used the second approach, which is why updating the graph is hard. That doesn't mean your general approach of composing graphs cannot be combined with the other two. In fact it looks like combining it with the "classical" adjacency lists should be as simple as throwing some IntMap operations together. Cheers, MarLinn From harendra.kumar at gmail.com Tue Oct 25 16:59:27 2016 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Tue, 25 Oct 2016 22:29:27 +0530 Subject: [Haskell-cafe] [ANN] unicode-transforms-0.2.0 pure Haskell unicode normalization Message-ID: Hi, I released unicode-transforms sometime back as bindings to a C library (utf8proc). Since then I have rewritten it completely in Haskell. Haskell data structures are automatically generated from the Unicode database, so it can be kept up-to-date with the standard unlike the C implementation which was stuck at Unicode 5. The implementation comes with a test suite providing 100% code coverage.
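As a toy picture of what NFD-style decomposition involves, here is a sketch with one hard-coded mapping standing in for the tables the package generates from the Unicode database (real implementations also reorder combining marks by combining class, which is omitted here):

```haskell
-- Toy NFD-style decomposition. Real implementations derive this
-- table from the Unicode database and also perform canonical
-- reordering of combining marks; both are omitted in this sketch.
decompositions :: [(Char, String)]
decompositions =
  [ ('\x00E9', "e\x0301")  -- e-acute -> e + COMBINING ACUTE ACCENT
  , ('\x00EA', "e\x0302")  -- e-circumflex -> e + COMBINING CIRCUMFLEX
  ]

-- Decompose recursively until no character decomposes further.
toyNFD :: String -> String
toyNFD = concatMap step
  where
    step c = case lookup c decompositions of
      Just s  -> toyNFD s
      Nothing -> [c]
```

For example, `toyNFD "caf\x00E9"` splits the precomposed é into the base letter plus a combining accent, turning a 4-character string into a 5-character one.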
After a number of algorithmic and implementation efficiency optimizations, I was able to get several times better decompose performance compared to the C implementation. I have not yet got a chance to fully optimize the compose operations but they are still as fast as utf8proc. I would like to thank Antonio Nikishaev for the unicode character database parsing code which I borrowed from the prose library. https://github.com/harendra-kumar/unicode-transforms https://hackage.haskell.org/package/unicode-transforms -harendra -------------- next part -------------- An HTML attachment was scrubbed... URL: From will.yager at gmail.com Tue Oct 25 17:06:15 2016 From: will.yager at gmail.com (William Yager) Date: Tue, 25 Oct 2016 12:06:15 -0500 Subject: [Haskell-cafe] [ANN] unicode-transforms-0.2.0 pure Haskell unicode normalization In-Reply-To: References: Message-ID: Interesting! What would you say allowed you to get better decompose performance than the C library? Will On Tue, Oct 25, 2016 at 11:59 AM, Harendra Kumar wrote: > Hi, > > I released unicode-transforms sometime back as bindings to a C library > (utf8proc). Since then I have rewritten it completely in Haskell. Haskell > data structures are automatically generated from unicode database, so it > can be kept up-to-date with the standard unlike the C implementation which > was stuck at unicode 5. The implementation comes with a test suite > providing 100% code coverage. > > After a number of algorithmic and implementation efficiency optimizations, > I was able to get several times better decompose performance compared to > the C implementation. I have not yet got a chance to fully optimize the > compose operations but they are still as fast as utf8proc. > > I would like to thank Antonio Nikishaev for the unicode character database > parsing code which I borrowed from the prose library. 
> > https://github.com/harendra-kumar/unicode-transforms > https://hackage.haskell.org/package/unicode-transforms > > -harendra > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From harendra.kumar at gmail.com Tue Oct 25 17:34:14 2016 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Tue, 25 Oct 2016 23:04:14 +0530 Subject: [Haskell-cafe] [ANN] unicode-transforms-0.2.0 pure Haskell unicode normalization In-Reply-To: References: Message-ID: I did not fully compare the implementation, I just focussed on getting as much performance out of the Haskell implementation as was possible. I can say two things that might have allowed it to be better: 1) I extracted as much as was possible in terms of implementation efficiency of the Haskell code. So I did not lose there. The code could have been much simpler without all the optimizations. 2) My implementation may be better in terms of algorithms and data structures used. Unicode normalization is complicated, the implementation can differ in many ways making you lose or gain performance. Beating the utf8proc implementation was easy. The best (highly optimized) normalization implementation is the ICU C++ implementation and my target was to get close to that. I got pretty close to it (using llvm backend) in most benchmarks and even beat it clearly in one benchmark. There are a couple of enhancements that I filed against GHC, hopefully they will allow it to be completely at par in all benchmarks. Though the difference may not matter other than proving that it can be as good. -harendra On 25 October 2016 at 22:36, William Yager wrote: > Interesting! 
What would you say allowed you to get better decompose > performance than the C library? > > Will > > On Tue, Oct 25, 2016 at 11:59 AM, Harendra Kumar > wrote: > >> Hi, >> >> I released unicode-transforms sometime back as bindings to a C library >> (utf8proc). Since then I have rewritten it completely in Haskell. Haskell >> data structures are automatically generated from unicode database, so it >> can be kept up-to-date with the standard unlike the C implementation which >> was stuck at unicode 5. The implementation comes with a test suite >> providing 100% code coverage. >> >> After a number of algorithmic and implementation efficiency >> optimizations, I was able to get several times better decompose performance >> compared to the C implementation. I have not yet got a chance to fully >> optimize the compose operations but they are still as fast as utf8proc. >> >> I would like to thank Antonio Nikishaev for the unicode character >> database parsing code which I borrowed from the prose library. >> >> https://github.com/harendra-kumar/unicode-transforms >> https://hackage.haskell.org/package/unicode-transforms >> >> -harendra >> >> _______________________________________________ >> Haskell-Cafe mailing list >> To (un)subscribe, modify options or view archives go to: >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> Only members subscribed via the mailman list are allowed to post. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ok at cs.otago.ac.nz Wed Oct 26 00:19:35 2016 From: ok at cs.otago.ac.nz (Richard A. 
O'Keefe) Date: Wed, 26 Oct 2016 13:19:35 +1300 Subject: [Haskell-cafe] A Procedural-Functional Language (WIP) In-Reply-To: References: <0AAFE48F-89D4-46BB-9221-3E5D519316D4@cs.brynmawr.edu> <245a0baa-8b00-bf23-5398-986258fec4bb@durchholz.org> Message-ID: <70e6480d-321f-9a61-73fd-34a53b119525@cs.otago.ac.nz> On 25/10/16 3:50 AM, Ronald Legere wrote: > The only other approach I am aware of is Clean's "Unique types". That approach was also adopted in the programming language Mercury, which includes both statically typed and moded logic programming and statically typed functional programming, using Haskell-ish types. It's not clear to me why Haskell, Clean, and Mercury don't already qualify as "procedural-functional languages", especially when you consider that the logic programming part of Mercury provides a very clean approach to result arguments. From jo at durchholz.org Wed Oct 26 04:36:05 2016 From: jo at durchholz.org (Joachim Durchholz) Date: Wed, 26 Oct 2016 06:36:05 +0200 Subject: [Haskell-cafe] A Procedural-Functional Language (WIP) In-Reply-To: References: Message-ID: <50a704b8-14b7-ca49-7093-143dbe9bbd86@durchholz.org> Am 25.10.2016 um 09:12 schrieb Rik Howard: > whether it should have opaque types (and why), whether there should > be subtypes or not, how the type system is supposed to deal with > arithmetic which has almost-compatible integer types and > floating-point types. That's just off the top of my head, I am > pretty sure that there are other issues. It is hard to discuss > merits or problems at this stage, since all of these issues tend to > influence each other. > > > There are gaps in the design. This step is to show that at least those > considerations haven't been precluded in some way by the mix of features. Question is not whether these things are precluded, question is how you want to tackle them. It's not even stating design goals here. > One thing I have heard is that effects, subtypes and type system > soundness do not mix well. 
Subtypes are too useful to ignore, > unsound type systems are not worth the effort, so I find it a bit > surprising that the paper has nothing to say about the issue. > > > I'm not sure what you mean by effects (may I ask you to elaborate? Or > side effects maybe?) Yes. > but subtypes would appear to offer an intuitive > analogy with set theory. That's the least interesting part of subtypes, actually. The salient point of this and some other features is that they make it easier to reason about a given program's properties, at the expense of making programming harder. (One of the major design points in designing a new programming language is where exactly to place that trade-off, and a lot of streaks of genius went into easing the burden on the programmer.) > It would mean extra look-ups in the deciding > function to check inclusion, possibly using some sort of 'narrowable' > type, and that would make showing soundness that much more involved. > Are there other beyond-the-norm complications? Lots. The basic concept of subtypes is simple, but establishing a definition of "subtype" that is both useful and sound is far from trivial. For example, mutable circles and mutable ellipses are not in a subtype relationship to each other if there is an updating "scale" operation with an x and y scaling factor (you cannot guarantee that a scaled circle stays circular). The design space for dealing with this is far from fully explored. Also, subtypes and binary operators do not really mix; google for "parallel type hierarchy". (The core of the problem is that if you make Byte a subtype of Word, declaring the (+) operator in Word as Word -> Word -> Word will preclude Byte from being a subtype, because you want a covariant signature in Byte but that violates subtyping rules for functions. So you need parametric polymorphism, but now you cannot use the simple methods for subtyping anymore.) > Are you aware how "monadic IO" became the standard in Haskell?
It > was one of three competing approaches, and AFAIK one turned out to > be less useful, and the other simply wasn't ready in time (so it > might still be interesting to investigate). > > > No, I'm not, it sounds fascinating. Thank you for subsequently > providing references. > > > For IO, ... variable parameters. > > What's the advantage here? Given the obvious strong disadvantage > that it forces callers into an idiom that uses updatable data > structures, the advantage better be compelling. > > The out-vars are the same as other variables in terms of updating: they > have to be fresh on the way in and can't be modified after coming out -- > I should make that more clear Oh, you don't have in-place updates, you have just initialization? I missed that. The key point to mention is that you want to maintain referential integrity. BTW this still makes loops useless for putting values in variables, because you can't update variables in an iteration; programmers will still have to write recursive functions. BTW nobody who is familiar with functional languages would consider that a disadvantage. Speaking of user groups: I am not sure what crowd you want to attract with your design. It's not necessary to put that into the paper, but one of the things that went "er, what?" in the back of my head was that I could not infer for whom this kind of language would be useful. > -- or was that not what you meant? The > difference (I don't know that it can be called an advantage) is that IO > can be done pretty much wherever-whenever but the insistence of a > try-then-else for penultimate invocations forces that doing not to be > unnoticed. Sounds pretty much like the consequences of having the IO monad in Haskell. I think you should elaborate similarities and differences with how Haskell does IO; that's a well-known standard, and it is going to make the paper easier to read. Same goes for Clean&Mercury. > Right now I fail to see what's new&better in this.
> > Some languages allow IO expressions without any further thought being > paid to the matter; some provide explicit mechanisms for dealing with > IO. The language in the note takes a mid-way approach, in some sense, > that I'm not familiar with from elsewhere. Assuming that this approach > isn't in a language that I should know by now, could the approach not > count as new? It may be irrelevant on some level, I suppose. It's hard to tell whether it is actually new; too many details are missing. > I hope that this goes some way towards being an adequate response. Once > again, thank you for your invaluable feedback -- much appreciated! You're welcome :-) Regards Jo

From zkessin at gmail.com Wed Oct 26 06:57:05 2016 From: zkessin at gmail.com (Zachary Kessin) Date: Wed, 26 Oct 2016 09:57:05 +0300 Subject: [Haskell-cafe] Suggestion for error message improvement Message-ID: When I compiled some code this morning I got an error that said that I should edit my cabal file, but I noticed that one thing is missing: the path to the cabal file. Having that in the error message would make fixing it a bit easier and faster, in that Emacs could take me right to the correct cabal file and I would know for sure which one it was using. I tried filing this as a bug on the Stack GitHub repo, but they said it is a GHC issue.

[1 of 1] Compiling Main ( tests/test.hs, .stack-work/dist/x86_64-linux/Cabal-1.24.0.0/build/test/test-tmp/Main.o )

/home/zkessin/Documents/*****/tests/test.hs:17:1: error:
    Failed to load interface for ‘Control.Lens’
    It is a member of the hidden package ‘lens-4.14’.
    Perhaps you need to add ‘lens’ to the build-depends in your .cabal file.
    Use -v to see a list of the files searched for.
Progress: 1/2 -- While building package ***********-0.1.0.0 using: /home/zkessin/.stack/setup-exe-cache/x86_64-linux/setup-Simple-Cabal-1.24.0.0-ghc-8.0.1 --builddir=.stack-work/dist/x86_64-linux/Cabal-1.24.0.0 build lib:********** test:test --ghc-options " -ddump-hi -ddump-to-file" Process exited with code: ExitFailure 1 -- Zach Kessin SquareTarget Twitter: @zkessin Skype: zachkessin -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Wed Oct 26 07:05:33 2016 From: ezyang at mit.edu (Edward Z. Yang) Date: Wed, 26 Oct 2016 00:05:33 -0700 Subject: [Haskell-cafe] Suggestion for error message improvement In-Reply-To: References: Message-ID: <1477465372-sup-7777@sabre> It's a very good suggestion, and also a bit fiddly to implement. The trouble is that this error message is produced by GHC, but GHC has no idea where the Cabal file lives (separation of concerns)! One possible way to work around this is for there to somehow be a way to indicate the name of the Cabal file being built so that GHC could build an error message, but for editor support I imagine you would much rather have the proper line. And then there's yet another yak to shave, which is that the Cabal library doesn't keep track of line/col of source elements (although https://github.com/haskell/cabal/pull/3602 should put a dent in this yak.) Anyway, if you still think it's worth fixing please do file a ticket on the GHC Trac. Edward Excerpts from Zachary Kessin's message of 2016-10-26 09:57:05 +0300: > When I compiled some code this morning I got an error that said that I > should edit my cabal file, but I notice that one thing is missing, the path > to the cabal file. Having that in the error message would make fixing it a > bit easier and faster in that emacs could take me right to the correct > cabal file and that I would know for sure which one it was using. > > I tried filing this as a bug on the stack github repo but they said it is a > ghc issue. 
> > > [1 of 1] Compiling Main ( tests/test.hs, > .stack-work/dist/x86_64-linux/Cabal-1.24.0.0/build/test/test-tmp/Main.o > ) > > /home/zkessin/Documents/*****/tests/test.hs:17:1: error: > Failed to load interface for ‘Control.Lens’ > It is a member of the hidden package ‘lens-4.14’. > Perhaps you need to add ‘lens’ to the build-depends in your .cabal file. > Use -v to see a list of the files searched for. > Progress: 1/2 > -- While building package ***********-0.1.0.0 using: > /home/zkessin/.stack/setup-exe-cache/x86_64-linux/setup-Simple-Cabal-1.24.0.0-ghc-8.0.1 > --builddir=.stack-work/dist/x86_64-linux/Cabal-1.24.0.0 build > lib:********** test:test --ghc-options " -ddump-hi -ddump-to-file" > Process exited with code: ExitFailure 1 > From rik at dcs.bbk.ac.uk Wed Oct 26 15:48:47 2016 From: rik at dcs.bbk.ac.uk (Rik Howard) Date: Wed, 26 Oct 2016 16:48:47 +0100 Subject: [Haskell-cafe] A Procedural-Functional Language (WIP) In-Reply-To: <50a704b8-14b7-ca49-7093-143dbe9bbd86@durchholz.org> References: <50a704b8-14b7-ca49-7093-143dbe9bbd86@durchholz.org> Message-ID: Jo > Question is not whether these things are precluded, question is how you > want to tackle them. It's not even stating design goals here. The section on Types has been amended to note that these questions form a part of the ongoing work. > > The salient point of this and some other features is that they make it > easier to reason about a given program's properties, at the expense of > making programming harder. > You put that point well. > > The basic concept of subtypes is simple, but establishing a definition of > "subtype" that is both useful and sound is far from trivial. > I hope that there is nothing that I've said that could be interpreted as me thinking otherwise. As you mentioned elsewhere, though, they look too appealing to ignore. > > For example. 
mutable circles and mutable ellipses are not in a subtype > relationship to each other if there is an updating "scale" operation with > an x and y scaling factor (you cannot guarantee that a scaled circle stays > circular). > The design space for dealing with this is far from fully explored. > I'm not sure that the language could support mutable structures but I take your point that there are complications. > > Also, subtypes and binary operators do not really mix; google for > "parallel type hierarchy". (The core of the problem is that if you make > Byte a subtype of Word, declaring the (+) operator in Word as Word -> Word > will preclude Byte from being a subtype because you want a covariant > signature in Byte but that violates subtyping rules for functions. So you > need parametric polymorphism, but now you cannot use the simple methods for > subtyping anymore.) Clearly there is more to be done in this area. > > The key point to mention is that you want to maintain referential > integrity. > The document now mentions this. > > Sounds pretty much like the conseqences of having the IO monad in Haskell. > That seems fair to me although the broader impact on an entire program would be different I think. > > I think you should elaborate similarities and differences with how Haskell > does IO, that's a well-known standard it is going to make the paper easier > to read. Same goes for Clean&Mercury. Something like that is addressed in Related Work. Clean is already on the list but it sounds, from your comments and those of others, as if Mercury may be worth including as well. > > It's hard to tell whether it is actually new, too many details are missing. Certainly you have spotted the vagueness in the types however I think that that issue can be momentarily set aside from the point of view of novelty. The language is purely functional with respect to functions and provides out-vars as the only mechanism for dealing with IO. 
Let's assume for the moment that that all hangs together: if there is another language that does that, no novelty; otherwise, there is novelty. Once again, your feedback has been useful and stimulating. Many thanks! Regards Rik -------------- next part -------------- An HTML attachment was scrubbed... URL: From rik at dcs.bbk.ac.uk Wed Oct 26 22:33:55 2016 From: rik at dcs.bbk.ac.uk (Rik Howard) Date: Wed, 26 Oct 2016 23:33:55 +0100 Subject: [Haskell-cafe] A Procedural-Functional Language (WIP) In-Reply-To: References: <50a704b8-14b7-ca49-7093-143dbe9bbd86@durchholz.org> Message-ID: > > Let's assume for the moment that that all hangs together: if there is > another language that does that, no novelty; otherwise, there is novelty. > > Maybe not quite so clear-cut but still in the area, I think. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ok at cs.otago.ac.nz Wed Oct 26 23:05:45 2016 From: ok at cs.otago.ac.nz (Richard A. O'Keefe) Date: Thu, 27 Oct 2016 12:05:45 +1300 Subject: [Haskell-cafe] A Procedural-Functional Language (WIP) In-Reply-To: References: <50a704b8-14b7-ca49-7093-143dbe9bbd86@durchholz.org> Message-ID: <836817a9-52f6-fdf6-50db-420c12f91efe@cs.otago.ac.nz> I'd like to point out that "procedural-functional" is not novel. The programming language Euclid, an "industrial strength Pascal", was sufficiently nailed down that a blue-and-white report from Xerox PARC showed that it could be viewed as a pure functional language. And I don't know if anyone ever pointed it out, but the language used in Dijkstra's "A Discipline of Programming", and in a number of papers subsequently, was constrained in the same way, to the point where that language can be seen as a sheep in wolf's clothing too. I'd like to point out something else. We are looking at the end of Moore's Law. If that end hasn't already overtaken us, it's visible in the rear view mirror and coming up fast. 
HP, for example, are revisiting the idea of having *LOTS* of CPUs mixed in with the memory because conventional multicore has its own problems. And the Square Kilometre Array telescope's computers are definitely going to be chock full of FPGAs as well as more conventional things like PGPUs. This means that in the foreseeable future we are going to need to learn a new style of programming because the antique style, as exemplified by say Java, just isn't going to scale. APL was once described as "the language of the future for the problems of the past". I suspect that WIP may be headed in the same direction. Disciple http://trac.ouroborus.net/ddc/ may be of interest. Quoting that page, "DDC is a research compiler used to investigate program transformation in the presence of computational effects. It compiles a family of strict functional core languages and supports region and effect. This extra information provides a handle on the operational behaviour of code that isn't available in other languages. Programs can be written in either a pure/functional or effectful/imperative style, and one of our goals is to provide both styles coherently in the same language."

From mgsloan at gmail.com Thu Oct 27 00:53:31 2016 From: mgsloan at gmail.com (Michael Sloan) Date: Wed, 26 Oct 2016 17:53:31 -0700 Subject: [Haskell-cafe] Proof of concept of explanations for instance resolution Message-ID: When typeclass machinery gets complicated, it can be hard to interpret the meaning behind GHC's messages. In particular, "Could not deduce ..." messages often reference constraints that are deep in a tree of resolving typeclasses. I think it would be great if GHC provided additional information for this circumstance. In a way, what we need is a "stack trace" of what GHC was thinking about when yielding these type errors. A couple of years ago, I wrote an extremely hacky approach to yielding this information through TH. It is quite imperfect, and only works with GHC 7.8.
I've realized that it's highly unlikely that this project will reach the level of polish for it to be very usable in practice: https://github.com/mgsloan/explain-instance

However, rather than allow that work to be wasted, I'd like to bring people's attention to the problem in general, and how we might solve it in GHC. Along with providing more information in type errors, this could also look like having a ":explain" command in ghci.

Let's take Text.Printf as an example. The expression (printf "%d %d" (1 :: Int) (2 :: Int) :: String) has quite a bit of machinery behind it:

printf :: (PrintfType r) => String -> r

class PrintfType t
instance (IsChar c) => PrintfType [c]
instance (a ~ ()) => PrintfType (IO a)
instance (PrintfArg a, PrintfType r) => PrintfType (a -> r)

class PrintfArg a where
instance PrintfArg Int where

With explain-instance, all we have to do is create a module with the following in it:

import ExplainInstance
import Text.Printf

$(explainInstance [t| PrintfType (Int -> Int -> String) |])

Then, upon running the generated main function, we get the following explanation:

instance (PrintfArg a, PrintfType r) => PrintfType (a -> r)
  with a ~ Int
       r ~ (Int -> [Char])

  instance PrintfArg Int

  instance (PrintfArg a, PrintfType r) => PrintfType (a -> r)
    with a ~ Int
         r ~ [Char]

    instance PrintfArg Int

    instance IsChar c => PrintfType ([c])
      with c ~ Char

      instance IsChar Char

This is the recursive tree of instance instantiation! It shows the instance head and the particular types it has been instantiated at, in a made-up "with" clause. Most importantly, it shows how the instance's constraints are also satisfied, giving a tree for an explanation.

The implementation of this is irrelevant, but for the curious: it involves recursively reifying all of the typeclasses, and then generating a whole new set of typeclasses.
These modified versions have the same heads (renamed) as the original typeclasses, but just have one method, which yields a description of the types it has been instantiated with, via Typeable.

Well, that's quite convenient! I think it can really aid in understanding typeclass machinery to be able to get this variety of trace through what GHC is thinking when satisfying a constraint. However, this is just half the problem -- what about type errors?

I played around with a solution to this via UndecidableInstances, where it would create a base-case instance that represents the error case. Let's say I wanted to use (printf :: String -> A -> Int -> Maybe String), where A is a type that is not an instance of PrintfArg. Another issue with this is that the result type (Maybe String) is not allowed by PrintfType. The output of

$(explainInstanceError [t| PrintfType (A -> Int -> Maybe String) |])

is

instance (PrintfArg a, PrintfType r) => PrintfType (a -> r)
  with a ~ A
       r ~ (Int -> Maybe [Char])

  ERROR instance PrintfArg a
    with a ~ A

  instance (PrintfArg a, PrintfType r) => PrintfType (a -> r)
    with a ~ Int
         r ~ Maybe [Char]

    instance PrintfArg Int

    ERROR instance PrintfType t
      with t ~ Maybe [Char]

This explains exactly where the problem is coming from in the typeclass machinery. If you're just looking at the type of printf and see a `PrintfType` constraint, it can be a total mystery why GHC is complaining about some class we may or may not know about:

No instance for (PrintfArg A) arising from a use of ‘printf’

Thanks for reading! I hope we can address this UI concern in the future. I hope I've contributed something by demonstrating the possibility!
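For concreteness, the non-error case traced above is plain Text.Printf usage; a minimal, self-contained sketch that exercises the same instance chain (PrintfType (Int -> Int -> String)) is:

```haskell
import Text.Printf

-- The format string "%d %d" drives resolution through the
-- PrintfType and PrintfArg instances shown in the explanation tree.
rendered :: String
rendered = printf "%d %d" (1 :: Int) (2 :: Int)

main :: IO ()
main = putStrLn rendered
```

Replacing either literal with a value of a type that has no PrintfArg instance reproduces the "No instance for (PrintfArg ...)" error discussed above.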
-Michael From jo at durchholz.org Thu Oct 27 05:17:33 2016 From: jo at durchholz.org (Joachim Durchholz) Date: Thu, 27 Oct 2016 07:17:33 +0200 Subject: [Haskell-cafe] A Procedural-Functional Language (WIP) In-Reply-To: <836817a9-52f6-fdf6-50db-420c12f91efe@cs.otago.ac.nz> References: <50a704b8-14b7-ca49-7093-143dbe9bbd86@durchholz.org> <836817a9-52f6-fdf6-50db-420c12f91efe@cs.otago.ac.nz> Message-ID: Am 27.10.2016 um 01:05 schrieb Richard A. O'Keefe: > This means that in the foreseeable future we are going to need > to learn a new style of programming because the antique style, > as exemplified by say Java, just isn't going to scale. I think you underestimate the adaptability of existing languages. Java has been preparing to move towards immutability&FP. At glacier-like speed, admittedly, but once those massively multicore systems appear, Java will be prepared to move. Haskell can claim to be already there, but wrt. how many issues have been staying unaddressed, it's no better than Java, it's just different issues. IOW this is not a predetermined race. From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Thu Oct 27 07:56:56 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Thu, 27 Oct 2016 08:56:56 +0100 Subject: [Haskell-cafe] Proof of concept of explanations for instance resolution In-Reply-To: References: Message-ID: <20161027075656.GV4593@weber> On Wed, Oct 26, 2016 at 05:53:31PM -0700, Michael Sloan wrote: > When typeclass machinery gets complicated, it can be hard to figure interpret > the meaning behind GHC's messages. In particular "Could not deduce ..." messages > often reference constraints that are deep in a tree of resolving typeclasses. I > think it would be great if GHC provided additional information for this > circumstance. In a way what we need is a "stack trace" of what GHC was thinking > about when yielding these type errors. [...] > Thanks for reading! I hope we can address this UI concern in the future. 
I > hope I've contributed something by demonstrating the possibility! I think a "stack trace" of what GHC was thinking would be a great idea! I brought this idea here: https://mail.haskell.org/pipermail/haskell-cafe/2016-August/124622.html Tom From chrisdone at gmail.com Thu Oct 27 09:43:35 2016 From: chrisdone at gmail.com (Christopher Done) Date: Thu, 27 Oct 2016 10:43:35 +0100 Subject: [Haskell-cafe] Fwd: How to best display type variables with the same name In-Reply-To: References: Message-ID: Hi all, I posted the below to ghc-devs, but they suggested perhaps Haskell-Cafe might be interested in the problem. For convenience, here's the ghc-devs archive of that discussion: https://mail.haskell.org/pipermail/ghc-devs/2016-October/013085.html ---------- Forwarded message ---------- From: Christopher Done Date: 19 October 2016 at 12:45 Subject: How to best display type variables with the same name To: "ghc-devs at haskell.org" We've encountered a problem in Intero which is that when inspecting types of expressions and patterns, sometimes it happens that the type, when pretty printing, yields variables of the same name but which have different provenance. Here's a summary of the issue: https://github.com/commercialhaskell/intero/issues/280#issuecomment-254784904 And a strawman proposal of how it could be solved: https://github.com/commercialhaskell/intero/issues/280#issuecomment-254787927 What do you think? Also, if I were to implement the strawman proposal, is it possible to recover from a `tyvar :: Type` its original quantification/its "forall"? I've had a look through the API briefly and it looks like a _maybe_. Ciao! 
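The name clash being discussed is easy to reproduce in a small, hypothetical example (not taken from the Intero issue): with RankNTypes, two unrelated type variables can both pretty-print as `a`:

```haskell
{-# LANGUAGE RankNTypes #-}

-- The outer `a` and the inner, shadowing `a` are distinct type
-- variables that display identically. A tool reporting the type of
-- `f` inside `outer` prints `a -> a`, where that `a` is the inner,
-- shadowing one -- not the `a` from outer's own signature.
outer :: forall a. a -> (forall a. a -> a) -> a
outer x f = f x

main :: IO ()
main = print (outer (42 :: Int) id)
```

Disambiguating such output, for example by numbering the variables or showing their binding sites, is essentially what the strawman proposal linked above asks for.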
From frantisek at farka.eu Thu Oct 27 10:54:00 2016 From: frantisek at farka.eu (=?utf-8?Q?Franti=C5=A1ek?= Farka) Date: Thu, 27 Oct 2016 11:54:00 +0100 Subject: [Haskell-cafe] CoALP-Ty'16: Call for Participation Message-ID: <20161027105400.GC25396@farka.eu> Call for Participation Workshop on Coalgebra, Horn Clause Logic Programming and Types 28-29 November 2016, Edinburgh, UK https://ff32.host.cs.st-andrews.ac.uk/coalpty16/ Abstract submission: 15 October, 2016 Registration deadline: 5 November, 2016 ==================================================== Objectives and scope ------------------- The workshop marks the end of the EPSRC Grant Coalgebraic Logic Programming for Type Inference, by K. Komendantskaya and J. Power and will consist of two parts: Part 1 - Semantics: Lawvere theories and Coalgebra in Logic and Functional Programming Part 2 - Programming languages: Horn Clause Logic for Type Inference in Functional Languages and Beyond We invite all colleagues working in related areas to present and share their results. We envisage a friendly meeting with many stimulating discussions, and therefore welcome presentations of already published research as well as novel results. Authors of original contributions will be invited to submit their papers to EPTCS post-proceedings. We especially encourage early career researchers to present and participate. Venue ----- The workshop will be held at the International Centre for Mathematical Sciences, in Edinburgh city centre, just 2 minutes walk from the Informatics Forum. Registration ------------ To register please fill in https://goo.gl/forms/KAm83p1bcNAxw0ss2 Although the registration is free it is compulsory. Please register by the 5th of November 2016. 
Programme --------- Monday 28 November 9:10 - 9:20 Registration 9:30 - 9:40 Welcome to CoALP-Ty'16 - Ekaterina Komendantskaya 9:40 - 11:10 Invited talk I - John Power: Logic Programming: Laxness and Saturation 11:10 - 11:30 Coffee break 11:30 - 12:00 Contributed talk - Henning Basold and Ekaterina Komendantskaya: Models of Inductive-Coinductive Logic Programs 12:00 - 13:00 Invited talk II - Steven Ramsay and Luke Ong: Refinement Types and Higher-Order Constrained Horn Clauses 13:00 - 14:00 Lunch 14:00 - 15:00 Invited talk III - Tarmo Uustalu: Comodels and Interaction 15:00 - 15:30 Coffee break 15:30 - 17:00 Contributed talks - František Farka: Proofs by Resolution and Existential Variables - Bashar Igried and Anton Setzer: Defining Trace Semantics for CSP-Agda - Clemens Kupke: Coalgebra and Ontological Rules 18:00 - 20:00 Workshop dinner - Lebanese restaurant Beirut, 24 Nicolson Square Tuesday 29 November 9:30 - 10:30 Invited talk IV - Claudio Russo: Classes for the Masses 10:30 - 11:00 Contributed talk - J. Garrett Morris: Semantical Analysis of Type Classes 11:00 - 11:30 Coffee break 11:30 - 12:30 Invited talk V - Davide Ancona: Abstract Compilation for Type Analysis of Object-Oriented Languages 12:30 - 13:30 Lunch 13:30 - 14:30 Invited talk VI - Ki Yung Ahn: Relational Specification of Type Systems Using Logic Programming 14:30 - 15:15 Discussion Panel - Horn Clause Logic: its Proof Theory, Type Theory and Category Theory — do we have the full picture yet? 
15:15 - 15:45 Coffee break 15:45 - 16:45 Contributed talks - Martin Schmidt: Coalgebraic Logic Programming: Implementation and Optimization - Luca Franceschini, Davide Ancona and Ekaterina Komendantskaya: Structural Resolution for Abstract Compilation of Object-Oriented Languages Important dates --------------- Workshop registration: 5 November, 2016 Workshop: 28–29 November, 2016 Programme committee ------------------- Ki Yung Ahn, Nanyang Technological University, Singapore Davide Ancona, University of Genoa, Italy Filippo Bonchi, CNRS, ENS de Lyon, France Iavor Diatchki, Galois, Inc, USA Peng Fu, Heriot-Watt University, Edinburgh, UK Neil Ghani, University of Strathclyde, UK Patricia Johann, Appalachian State University, USA Ekaterina Komendantskaya, Heriot-Watt University, Edinburgh, UK Clemens Kupke, University of Strathclyde, UK J. Garrett Morris, University of Edinburgh, UK Fredrik Nordvall Forsberg, University of Strathclyde, UK John Power, University of Bath, UK Claudio Russo, Microsoft Research Cambridge, UK Martin Schmidt, DHBW Stuttgart and Osnabrück University , Germany Stephan Schulz, DHBW Stuttgart, Germany Aaron Stump, The University of Iowa, USA Niki Vazou, University of California, San Diego, USA Joe Wells, Heriot-Watt University, Edinburgh, UK Fabio Zanasi, Radboud University of Nijmegen, The Netherlands Workshop chairs -------- Ekaterina Komendantskaya, Heriot-Watt University, UK John Power, University of Bath, UK Publicity chair --------------- František Farka, University of Dundee, UK and University of St Andrews, UK -- František Farka From csaba.hruska at gmail.com Thu Oct 27 13:09:33 2016 From: csaba.hruska at gmail.com (Csaba Hruska) Date: Thu, 27 Oct 2016 15:09:33 +0200 Subject: [Haskell-cafe] get Chalmers' Haskell Interpreter and Compiler source code Message-ID: Hi, I was curious and wanted to check the HBC source code but I only found a broken link (https://www.haskell.org/hbc/hbc-2004-06-29.src.tar.gz) on Haskell Wiki . 
Is it possible to fix the link? Or if somebody has the source code can you > please share it? > > Thanks, > Csaba > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rik at dcs.bbk.ac.uk Thu Oct 27 19:41:29 2016 From: rik at dcs.bbk.ac.uk (Rik Howard) Date: Thu, 27 Oct 2016 20:41:29 +0100 Subject: [Haskell-cafe] Haskell-Cafe Digest, Vol 158, Issue 29 In-Reply-To: References: Message-ID: Richard, thanks for your response and the references, which I'll look into; the Dijkstra sounds particularly relevant. "Procedural-functional" is not novel; as you also mentioned, Haskell, Clean and Mercury would qualify, if you chose to look at it that way, as would other languages. Any novelty in the note would only ever be in the way that the mix is provided. You raise salient points about the sort of challenges that languages will need to confront, although a search has left me still unsure about PGPUs. Can I ask you to say a bit more about programming styles: what Java can't do, what others can do, how that scales?
Regards Rik
> > > > ------------------------------ > > Message: 4 > Date: Wed, 26 Oct 2016 17:53:31 -0700 > From: Michael Sloan > To: haskell-cafe > Subject: [Haskell-cafe] Proof of concept of explanations for instance > resolution > Message-ID: > gmail.com> > Content-Type: text/plain; charset=UTF-8 > > When typeclass machinery gets complicated, it can be hard to interpret > the meaning behind GHC's messages. In particular "Could not deduce ..." > messages > often reference constraints that are deep in a tree of resolving > typeclasses. I > think it would be great if GHC provided additional information for this > circumstance. In a way what we need is a "stack trace" of what GHC was > thinking > about when yielding these type errors. > > A couple years ago, I wrote an extremely hacky approach to yielding this > information through TH. It is quite imperfect, and only works with GHC > 7.8. I've > realized that it's highly unlikely that this project will reach the level > of > polish for it to be very usable in practice: > https://github.com/mgsloan/explain-instance > > However, rather than allow that work to be wasted, I'd like to bring > people's > attention to the problem in general, and how we might solve it in GHC. > Along > with providing more information in type errors, this could also look like > having > a ":explain" command in ghci. > > Let's take Text.Printf as an example.
The expression (printf "%d %d" (1 :: > Int) > (2 :: Int) :: String) has quite a bit of machinery behind it: > > printf :: (PrintfType r) => String -> r > > class PrintfType t > instance (IsChar c) => PrintfType [c] > instance (a ~ ()) => PrintfType (IO a) > instance (PrintfArg a, PrintfType r) => PrintfType (a -> r) > > class PrintfArg a where > instance PrintfArg Int where > > With explain-instance, all we have to do is create a module with the > following > in it: > > import ExplainInstance > import Text.Printf > $(explainInstance [t| PrintfType (Int -> Int -> String) |]) > > Then, upon running the generated main function, we get the following > explanation: > > instance (PrintfArg a, PrintfType r) => PrintfType (a -> r) > with a ~ Int > r ~ (Int -> [Char]) > > instance PrintfArg Int > > instance (PrintfArg a, PrintfType r) => PrintfType (a -> r) > with a ~ Int > r ~ [Char] > > instance PrintfArg Int > > instance IsChar c => PrintfType ([c]) > with c ~ Char > > instance IsChar Char > > This is the recursive tree of instance instantiation! It shows the instance > head, the particular types that it has been instantiated at in a made up > "with" > clause. Most importantly, it shows how the instance's constraints are also > satisfied, giving a tree for an explanation. > > The implementation of this is irrelevant, but for the curious: it involves > recursively reifying all of the typeclasses, and then generating a whole > new set > of typeclasses. These modified versions have the same heads (renamed) as > the > original typeclasses, but just have one method, which yields a description > of > the types it has been instantiated with, via Typeable. > > Well that's quite convenient! I think it can really aid in understanding > typeclass machinery to be able to get this variety of trace through what > GHC is > thinking when satisfying a constraint. However, this is just half the > problem - > what about type errors?
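As a side note, the resolution being traced here is easy to exercise directly; this complete program (nothing beyond base) uses exactly the `PrintfType (Int -> Int -> String)` chain shown in the explanation:

```haskell
import Text.Printf (printf)

-- Resolving PrintfType (Int -> Int -> String) takes the (a -> r)
-- instance twice and then the [c] instance with IsChar Char,
-- matching the explanation tree above.
formatted :: String
formatted = printf "%d %d" (1 :: Int) (2 :: Int)

main :: IO ()
main = putStrLn formatted  -- prints "1 2"
```

Dropping the `:: String` annotation makes the result type (and hence the final instance) ambiguous, which is one reason these constraints are hard to read in error messages.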
I played around with a solution to this via > UndecidableInstances, where it would create a base-case instance that > represents > the error case. > > Let's say I wanted to use (printf :: String -> A -> Int -> Maybe String) > where A > is a type that is not an instance of PrintfArg. Another issue with this is > that > the result type (Maybe String) is not allowed by PrintfType. > > The output of > > $(explainInstanceError [t| PrintfType (A -> Int -> Maybe String) |]) > > is > > instance (PrintfArg a, PrintfType r) => PrintfType (a -> r) > with a ~ A > r ~ (Int -> Maybe [Char]) > > ERROR instance PrintfArg a > with a ~ A > > instance (PrintfArg a, PrintfType r) => PrintfType (a -> r) > with a ~ Int > r ~ Maybe [Char] > > instance PrintfArg Int > > ERROR instance PrintfType t > with t ~ Maybe [Char] > > This explains exactly where the problem is coming from in the typeclass > machinery. If you're just looking at the type of printf, and see a > `PrintfType` > constraint, it can be a total mystery as to why GHC is complaining about > some > class we may or may not know about: > > No instance for (PrintfArg A) arising from a use of ‘printf’ > > Thanks for reading! I hope we can address this UI concern in the future. > I > hope I've contributed something by demonstrating the possibility! > > -Michael > > > ------------------------------ > > Message: 5 > Date: Thu, 27 Oct 2016 07:17:33 +0200 > From: Joachim Durchholz > To: haskell-cafe at haskell.org > Subject: Re: [Haskell-cafe] A Procedural-Functional Language (WIP) > Message-ID: > Content-Type: text/plain; charset=utf-8; format=flowed > > On 27.10.2016 at 01:05, Richard A. O'Keefe wrote: > > This means that in the foreseeable future we are going to need > > to learn a new style of programming because the antique style, > > as exemplified by say Java, just isn't going to scale. > > I think you underestimate the adaptability of existing languages. > > Java has been preparing to move towards immutability&FP.
At glacier-like > speed, admittedly, but once those massively multicore systems appear, > Java will be prepared to move. > > Haskell can claim to be already there, but wrt. how many issues have > been staying unaddressed, it's no better than Java, it's just different > issues. > IOW this is not a predetermined race. > > > ------------------------------ > > Message: 6 > Date: Thu, 27 Oct 2016 08:56:56 +0100 > From: Tom Ellis > To: haskell-cafe at haskell.org > Subject: Re: [Haskell-cafe] Proof of concept of explanations for > instance resolution > Message-ID: <20161027075656.GV4593 at weber> > Content-Type: text/plain; charset=us-ascii > > On Wed, Oct 26, 2016 at 05:53:31PM -0700, Michael Sloan wrote: > > When typeclass machinery gets complicated, it can be hard to figure > interpret > > the meaning behind GHC's messages. In particular "Could not deduce ..." > messages > > often reference constraints that are deep in a tree of resolving > typeclasses. I > > think it would be great if GHC provided additional information for this > > circumstance. In a way what we need is a "stack trace" of what GHC was > thinking > > about when yielding these type errors. > [...] > > Thanks for reading! I hope we can address this UI concern in the > future. I > > hope I've contributed something by demonstrating the possibility! > > I think a "stack trace" of what GHC was thinking would be a great idea! I > brought this idea here: > > https://mail.haskell.org/pipermail/haskell-cafe/2016- > August/124622.html > > Tom > > > ------------------------------ > > Message: 7 > Date: Thu, 27 Oct 2016 10:43:35 +0100 > From: Christopher Done > To: Haskell Cafe > Subject: [Haskell-cafe] Fwd: How to best display type variables with > the same name > Message-ID: > gmail.com> > Content-Type: text/plain; charset=UTF-8 > > Hi all, > > I posted the below to ghc-devs, but they suggested perhaps > Haskell-Cafe might be interested in the problem. 
For convenience, > here's the ghc-devs archive of that discussion: > https://mail.haskell.org/pipermail/ghc-devs/2016-October/013085.html > > ---------- Forwarded message ---------- > From: Christopher Done > Date: 19 October 2016 at 12:45 > Subject: How to best display type variables with the same name > To: "ghc-devs at haskell.org" > > > We've encountered a problem in Intero which is that when inspecting > types of expressions and patterns, sometimes it happens that the type, > when pretty printing, yields variables of the same name but which have > different provenance. > > Here's a summary of the issue: > > https://github.com/commercialhaskell/intero/issues/280#issuecomment- > 254784904 > > And a strawman proposal of how it could be solved: > > https://github.com/commercialhaskell/intero/issues/280#issuecomment- > 254787927 > > What do you think? > > Also, if I were to implement the strawman proposal, is it possible to > recover from a `tyvar :: Type` its original quantification/its > "forall"? I've had a look through the API briefly and it looks like a > _maybe_. > > Ciao! > > > ------------------------------ > > Message: 8 > Date: Thu, 27 Oct 2016 11:54:00 +0100 > From: František Farka > To: haskell at haskell.org, haskell-cafe at haskell.org > Subject: [Haskell-cafe] CoALP-Ty'16: Call for Participation > Message-ID: <20161027105400.GC25396 at farka.eu> > Content-Type: text/plain; charset=utf-8 > > Call for Participation > > Workshop on > Coalgebra, Horn Clause Logic Programming and Types > > 28-29 November 2016, Edinburgh, UK > https://ff32.host.cs.st-andrews.ac.uk/coalpty16/ > > > Abstract submission: 15 October, 2016 > Registration deadline: 5 November, 2016 > > ==================================================== > > Objectives and scope > ------------------- > > The workshop marks the end of the EPSRC Grant Coalgebraic Logic > Programming for > Type Inference, by K. Komendantskaya and J. 
Power and will consist of two > parts: > > Part 1 - Semantics: Lawvere theories and Coalgebra in Logic and > Functional Programming > > Part 2 - Programming languages: Horn Clause Logic for Type > Inference in > Functional Languages and Beyond > > We invite all colleagues working in related areas to present and share > their > results. We envisage a friendly meeting with many stimulating discussions, > and > therefore welcome presentations of already published research as well as > novel > results. Authors of original contributions will be invited to submit their > papers to EPTCS post-proceedings. We especially encourage early career > researchers to present and participate. > > Venue > ----- > > The workshop will be held at the International Centre for Mathematical > Sciences, > in Edinburgh city centre, just 2 minutes walk from the Informatics Forum. > > Registration > ------------ > > To register please fill in https://goo.gl/forms/KAm83p1bcNAxw0ss2 > Although the registration is free it is compulsory. Please register > by the 5th of November 2016. 
> > > Programme > --------- > > Monday 28 November > > 9:10 - 9:20 Registration > 9:30 - 9:40 Welcome to CoALP-Ty'16 > - Ekaterina Komendantskaya > 9:40 - 11:10 Invited talk I > - John Power: Logic Programming: Laxness and Saturation > 11:10 - 11:30 Coffee break > 11:30 - 12:00 Contributed talk > - Henning Basold and Ekaterina Komendantskaya: Models of > Inductive-Coinductive Logic Programs > 12:00 - 13:00 Invited talk II > - Steven Ramsay and Luke Ong: Refinement Types and > Higher-Order > Constrained Horn Clauses > 13:00 - 14:00 Lunch > 14:00 - 15:00 Invited talk III > - Tarmo Uustalu: Comodels and Interaction > 15:00 - 15:30 Coffee break > 15:30 - 17:00 Contributed talks > - František Farka: Proofs by Resolution and Existential > Variables > - Bashar Igried and Anton Setzer: Defining Trace Semantics > for > CSP-Agda > - Clemens Kupke: Coalgebra and Ontological Rules > 18:00 - 20:00 Workshop dinner > - Lebanese restaurant Beirut, 24 Nicolson Square > > > Tuesday 29 November > > 9:30 - 10:30 Invited talk IV > - Claudio Russo: Classes for the Masses > 10:30 - 11:00 Contributed talk > - J. Garrett Morris: Semantical Analysis of Type Classes > 11:00 - 11:30 Coffee break > 11:30 - 12:30 Invited talk V > - Davide Ancona: Abstract Compilation for Type Analysis of > Object-Oriented Languages > 12:30 - 13:30 Lunch > 13:30 - 14:30 Invited talk VI > - Ki Yung Ahn: Relational Specification of Type Systems > Using > Logic Programming > 14:30 - 15:15 Discussion Panel > - Horn Clause Logic: its Proof Theory, Type Theory and > Category > Theory — do we have the full picture yet? 
> 15:15 - 15:45 Coffee break > 15:45 - 16:45 Contributed talks > - Martin Schmidt: Coalgebraic Logic Programming: > Implementation > and Optimization > - Luca Franceschini, Davide Ancona and Ekaterina > Komendantskaya: > Structural Resolution for Abstract Compilation of > Object-Oriented Languages > > > Important dates > --------------- > > Workshop registration: 5 November, 2016 > Workshop: 28–29 November, 2016 > > > Programme committee > ------------------- > > Ki Yung Ahn, Nanyang Technological University, Singapore > Davide Ancona, University of Genoa, Italy > Filippo Bonchi, CNRS, ENS de Lyon, France > Iavor Diatchki, Galois, Inc, USA > Peng Fu, Heriot-Watt University, Edinburgh, UK > Neil Ghani, University of Strathclyde, UK > Patricia Johann, Appalachian State University, USA > Ekaterina Komendantskaya, Heriot-Watt University, Edinburgh, UK > Clemens Kupke, University of Strathclyde, UK > J. Garrett Morris, University of Edinburgh, UK > Fredrik Nordvall Forsberg, University of Strathclyde, UK > John Power, University of Bath, UK > Claudio Russo, Microsoft Research Cambridge, UK > Martin Schmidt, DHBW Stuttgart and Osnabrück University , Germany > Stephan Schulz, DHBW Stuttgart, Germany > Aaron Stump, The University of Iowa, USA > Niki Vazou, University of California, San Diego, USA > Joe Wells, Heriot-Watt University, Edinburgh, UK > Fabio Zanasi, Radboud University of Nijmegen, The Netherlands > > > Workshop chairs > -------- > Ekaterina Komendantskaya, Heriot-Watt University, UK > John Power, University of Bath, UK > > > Publicity chair > --------------- > František Farka, University of Dundee, UK and University of St Andrews, UK > > -- > > František Farka > > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > ------------------------------ > > End of Haskell-Cafe 
Digest, Vol 158, Issue 29 > ********************************************* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ok at cs.otago.ac.nz Fri Oct 28 00:38:47 2016 From: ok at cs.otago.ac.nz (Richard A. O'Keefe) Date: Fri, 28 Oct 2016 13:38:47 +1300 Subject: [Haskell-cafe] Haskell-Cafe Digest, Vol 158, Issue 29 In-Reply-To: References: Message-ID: <36384d09-9d23-265c-5f4e-e379cb6a0bb7@cs.otago.ac.nz> On 28/10/16 8:41 AM, Rik Howard wrote: > Any novelty in the note would only ever be in the way that the mix is > provided. You raise salient points about the sort of challenges that > languages will need to confront although a search has left me still > unsure about PGPUs. Can I ask you to say a bit more about programming > styles: what Java can't do, what others can do, how that scales? The fundamental issue is that Java is very much an imperative language (although books on concurrent programming in Java tend to strongly recommend immutable data structures whenever practical, because they are safer to share). The basic computational model of (even concurrent) imperative languages is the RAM: there is a set of threads living in a single address space where all memory is equally and as easily accessible to all threads. Already that's not true. One of the machines sitting on my desk is a Parallela: 2 ARM cores, 16 RISC cores, there's a single address space shared by the RISC cores but each of them "owns" a chunk of it and access is not uniform. Getting information between the ARM cores and the RISC cores is not trivial. Indeed, one programming model for the Parallela is OpenCL 1.1, although as they note, "Creating an API for architectures not even considered during the creation of a standard is challenging. This can be seen in the case of Epiphany, which possesses an architecture very different from a GPU, and which supports functionality not yet supported by a GPU. OpenCL as an API for Epiphany is good, but not perfect."
The thing is that the Epiphany chip is more *like* a GPU than it is like anything say Java might want to run on. For that matter, there is the IBM "Cell" processor, basically a Power core and a bunch of RISCish cores, not entirely unlike the Epiphany. As the Wikipedia page on the Cell notes, "Cell is widely regarded as a challenging environment for software development". Again, Java wants a (1) large (2) flat (3) shared address space, and that's *not* what Cell delivers. The memory space available to each "SPE" in a Cell is effectively what would have been L1 cache on a more conventional machine, and transfers between that and main memory are non-trivial. So Cell memory is (1) small (2) heterogeneous and (3) partitioned. The Science Data Processor for the Square Kilometre Array is still being designed. As far as I know, they haven't committed to a CPU architecture yet, and they probably want to leave that pretty late. Cell might be a candidate, but I suspect they'll not want to spend much of their software development budget on a "challenging" architecture. Hmm. Scaling. Here's the issue. It looks as though the future of scaling is *lots* of processors, running *slower* than typical desktops, with things turned down or off as much as possible, so you won't be able to pull the Parallela/Epiphany trick of always being able to access another chip's local memory. Any programming model that relies on large flat shared address spaces is out; message passing that copies stuff is going to be much easier to manage than passing a pointer to memory that might be powered off when you need it; anything that creates tight coupling between the execution orders of separate processors is going to be a nightmare. We're also looking at more things moving into special-purpose hardware, in order to reduce power costs. It would be nice to be able to do this without a complete rewrite... 
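For what it's worth, the message-passing style argued for above is already the path of least resistance in Haskell's base concurrency API; a minimal sketch (the worker and the numbers are invented for illustration) in which threads share nothing but immutable values sent over a channel:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (newChan, readChan, writeChan)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

main :: IO ()
main = do
  jobs <- newChan
  done <- newEmptyMVar
  -- The worker never touches the producer's state; it only receives
  -- immutable Ints over the channel and reports one result back.
  _ <- forkIO $ do
    xs <- mapM (const (readChan jobs)) [1 .. 3 :: Int]
    putMVar done (sum xs)
  mapM_ (writeChan jobs) [10, 20, 30 :: Int]
  takeMVar done >>= print  -- prints 60
```

Because the values are immutable there is no locking discipline for the receiver to get wrong, and the same structure maps naturally onto distributed settings where a send really does copy.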
Coarray Fortran (in the current standard) is an attempt to deal with the kinds of machines I'm talking about. Whether it's a good attempt I couldn't say, I'm still trying to get my head around it. (More precisely, I think I understand what it's about, but I haven't a clue about how to *use* the feature effectively.) There are people at Rice who think it could be better. Reverting to the subject of declarative/procedural, I recently came across Lee Naish's "Pawns" language. Still very much a prototype, and he is interested in the semantics, not the syntax. https://github.com/lee-naish/Pawns http://people.eng.unimelb.edu.au/lee/papers/pawns/ From ok at cs.otago.ac.nz Fri Oct 28 00:49:32 2016 From: ok at cs.otago.ac.nz (Richard A. O'Keefe) Date: Fri, 28 Oct 2016 13:49:32 +1300 Subject: [Haskell-cafe] A Procedural-Functional Language (WIP) In-Reply-To: References: <50a704b8-14b7-ca49-7093-143dbe9bbd86@durchholz.org> <836817a9-52f6-fdf6-50db-420c12f91efe@cs.otago.ac.nz> Message-ID: <18468bb0-79fe-6cfc-43a2-9a9e2aa70803@cs.otago.ac.nz> On 27/10/16 6:17 PM, Joachim Durchholz wrote: > Am 27.10.2016 um 01:05 schrieb Richard A. O'Keefe: >> This means that in the foreseeable future we are going to need >> to learn a new style of programming because the antique style, >> as exemplified by say Java, just isn't going to scale. > > I think you underestimate the adaptability of existing languages. Well, I don't think so. > > Java has been preparing to move towards immutability&FP. At glacier-like > speed, admittedly, but once those massively multicore systems appear, > Java will be prepared to move. Nobody ever said Java (or any other language) can't ADD things. The problem is that Java can't REMOVE the things that get in the way without ceasing to be Java. 
It's just like the way you can ADD things (complex arithmetic in C99, threads in C11) to C, but you can't REMOVE the things that make C horribly dangerous without it ceasing to be C (and thereby ceasing to be useful in its admitted niche). The fundamental operation in Java is the assignment statement. It is fundamental to the Java Memory Model that when optimising memory references the compiler is explicitly allowed to pretend that threading doesn't exist. If you fix those issues, you don't have Java any more. > Haskell can claim to be already there, but wrt. how many issues have > been staying unaddressed, it's no better than Java, it's just different > issues. > IOW this is not a predetermined race. Nobody ever said it was. To be honest, I don't think ANY existing language will survive unscathed. I really wasn't talking about a race, simply making the point that we need new ideas, not just a rehash of the old ones. A very simple point: the more cores are running at once, the sooner your program will run into trouble, if it runs into trouble at all. And the more cores are running at once, the nastier it gets for a human being trying to debug the code. So we're looking for a language that can give us strong guarantees that certain kinds of mistakes either CAN'T happen because the language cannot express them or WON'T happen because it can verify that your particular program doesn't do those bad things. From jo at durchholz.org Fri Oct 28 05:11:19 2016 From: jo at durchholz.org (Joachim Durchholz) Date: Fri, 28 Oct 2016 07:11:19 +0200 Subject: [Haskell-cafe] A Procedural-Functional Language (WIP) In-Reply-To: <18468bb0-79fe-6cfc-43a2-9a9e2aa70803@cs.otago.ac.nz> References: <50a704b8-14b7-ca49-7093-143dbe9bbd86@durchholz.org> <836817a9-52f6-fdf6-50db-420c12f91efe@cs.otago.ac.nz> <18468bb0-79fe-6cfc-43a2-9a9e2aa70803@cs.otago.ac.nz> Message-ID: <25dfbaad-dfdc-72d1-7cd2-13fe1678f2a0@durchholz.org> On 28.10.2016 at 02:49, Richard A.
O'Keefe: > > Nobody ever said Java (or any other language) can't ADD things. > The problem is that Java can't REMOVE the things that get in the > way without ceasing to be Java. Sure. (Minor nitpick: Languages can change their nature by adding things, too.) > It's just like the way you can ADD things (complex arithmetic in C99, > threads in C11) to C, but you can't REMOVE the things that make C > horribly dangerous without it ceasing to be C (and thereby ceasing to > be useful in its admitted niche). Sure, but still, it's a lot more grey area than you say - the dangerous things in C++ are still there but the language became much less dangerous because more modern versions come with other constructs so you are not forced to use the horribly dangerous stuff anymore. > The fundamental operation in Java is the assignment statement. > It is fundamental to the Java Memory Model that when optimising > memory references the compiler is explicitly allowed to pretend > that threading doesn't exist. > > If you fix those issues, you don't have Java any more. Value types would fix those issues without making it non-Java. There have been multiple projects to get them into the language, so the knowledge and interest is there, multicore is just not prevalent enough to make Oracle recognize their relevance and put their inclusion high on the to-do list for the next language version. Aliasing cannot be fixed in C++ because its constness annotations are too weakly enforced to be useful to an optimizer. In Java, this could be pretty different because you can't reinterpret_cast things unless you copy them to a byte buffer first, so the compiler does have all the guarantees. >> Haskell can claim to be already there, but wrt. how many issues have >> been staying unaddressed, it's no better than Java, it's just different >> issues. >> IOW this is not a predetermined race. > > Nobody ever said it was. A certain smugness in a previous post implied something in that direction.
At least there's the idea that Haskell is in a better position than most languages to adapt to that situation; I am sceptical, not because Haskell is a bad language (I'd LOVE to code in Haskell) but because it is missing some key elements to make it production-ready for general use, so it's not even going to enter the race. (Some people are happy with that situation, which I think is pretty selfish.) > To be honest, I don't think ANY existing > language will survive unscathed. I really wasn't talking about a > race, simply making the point that we need new ideas, not just a > rehash of the old ones. New ideas and testbeds to see which of them hold up in practice. > A very simple point: the more cores are running at once, the > sooner your program will run into trouble, if it runs into trouble > at all. And the more cores are running at once, the nastier it > gets for a human being trying to debug the code. Actually imperative languages are slowly coming to grips with that. E.g. updatable data structures have generally fallen out of favor for interthread communication, which has removed 90% of race conditions. The rest is more on the level of specification problems. However, I am not aware of Haskell helping with multithreading once IO comes into play - does it? > So we're looking > for a language that can give us strong guarantees that certain > kinds of mistakes either CAN'T happen because the language cannot > express them or WON'T happen because it can verify that your > particular program doesn't do those bad things. I do have some ideas, but not even a proof of concept so it's much too early to talk much about that.
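One concrete answer to the question above about Haskell and multithreading in IO is STM, which keeps shared mutable state inside composable atomic transactions even though the surrounding code runs in IO. A minimal sketch (requires the stm package, which ships with GHC):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Concurrent.STM (atomically, modifyTVar', newTVarIO, readTVarIO)

main :: IO ()
main = do
  counter <- newTVarIO (0 :: Int)
  dones   <- mapM (const newEmptyMVar) [1, 2 :: Int]
  -- Two threads each make 1000 atomic increments; no update is lost
  -- and no explicit lock is taken.
  mapM_ (\done -> forkIO $ do
            mapM_ (\_ -> atomically (modifyTVar' counter (+ 1)))
                  [1 .. 1000 :: Int]
            putMVar done ())
        dones
  mapM_ takeMVar dones
  readTVarIO counter >>= print  -- prints 2000
```

The increments from both threads interleave safely, so the final count is deterministically 2000; larger transactions compose the same way under `atomically` without lock-ordering concerns.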
From csaba.hruska at gmail.com Fri Oct 28 07:32:33 2016 From: csaba.hruska at gmail.com (Csaba Hruska) Date: Fri, 28 Oct 2016 09:32:33 +0200 Subject: [Haskell-cafe] get Chalmers' Haskell Interpreter and Compiler source code In-Reply-To: References: Message-ID: As an extra does anyone have the modified/extended HBC source with Urban Boquist's GRIN backend? On Thu, Oct 27, 2016 at 5:20 PM, Csaba Hruska wrote: > However I've found it in another archive: > https://archive.org/details/haskell-b-compiler > > On Thu, Oct 27, 2016 at 3:09 PM, Csaba Hruska > wrote: > >> Hi, >> >> I was curious and wanted to check the HBC source code but I only found a >> broken link (https://www.haskell.org/hbc/hbc-2004-06-29.src.tar.gz) on Haskell >> Wiki >> >> . >> >> Is it possible to fix the link? Or if somebody has the source code can >> you please share it? >> >> Thanks, >> Csaba >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Fri Oct 28 08:38:57 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Fri, 28 Oct 2016 09:38:57 +0100 Subject: [Haskell-cafe] Haddock, wrapping contexts? Message-ID: <20161028083857.GC4593@weber> Some of my type signatures have long contexts and Haddock doesn't wrap them, pushing the more interesting information off the right hand side, e.g.: https://hackage.haskell.org/package/opaleye-0.5.1.1/docs/Opaleye-Join.html Is there any way I can convince Haddock to wrap these contexts? 
Ideally I want to see something like

    leftJoin
      :: (Default Unpackspec columnsA columnsA,
          Default Unpackspec columnsB columnsB,
          Default NullMaker columnsB nullableColumnsB)
      => Query columnsA
      -> Query columnsB
      -> ((columnsA, columnsB) -> Column PGBool)
      -> Query (columnsA, nullableColumnsB)

Thanks, Tom From heraldhoi at gmail.com Fri Oct 28 08:53:53 2016 From: heraldhoi at gmail.com (Geraldus) Date: Fri, 28 Oct 2016 08:53:53 +0000 Subject: [Haskell-cafe] Haddock, wrapping contexts? In-Reply-To: <20161028083857.GC4593@weber> References: <20161028083857.GC4593@weber> Message-ID: I see the `leftJoin` signature wrapped, but not `leftJoinExplicit`, for example. I suppose if you document the arguments, Haddock will wrap those signatures too. [image: Снимок экрана 2016-10-28 в 13.53.25.png] Fri, 28 Oct 2016 at 13:39, Tom Ellis < tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk>: > Some of my type signatures have long contexts and Haddock doesn't wrap > them, > pushing the more interesting information off the right hand side, e.g.: > > > https://hackage.haskell.org/package/opaleye-0.5.1.1/docs/Opaleye-Join.html > > Is there any way I can convince Haddock to wrap these contexts? Ideally I > want to see something like > > leftJoin > :: (Default Unpackspec columnsA columnsA, > Default Unpackspec columnsB columnsB, > Default NullMaker columnsB nullableColumnsB) > => Query columnsA > -> Query columnsB > -> ((columnsA, columnsB) -> Column PGBool) > -> Query (columnsA, nullableColumnsB) > > Thanks, > > Tom > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: Снимок экрана 2016-10-28 в 13.53.25.png Type: image/png Size: 250165 bytes Desc: not available URL: From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Fri Oct 28 09:03:09 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Fri, 28 Oct 2016 10:03:09 +0100 Subject: [Haskell-cafe] Haddock, wrapping contexts? In-Reply-To: References: <20161028083857.GC4593@weber> Message-ID: <20161028090309.GF4593@weber> On Fri, Oct 28, 2016 at 08:53:53AM +0000, Geraldus wrote: > I see `leftJoin` signature wrapped, but not `leftJoinExplicit` for > example. I suppose if you will document arguments Haddock will wrap that > signatures too. No, I want to wrap the *context* of leftJoin. Currently all three elements of the context tuple are on one line: (Default Unpackspec columnsL columnsL, Default Unpackspec columnsR columnsR, Default NullMaker columnsR nullableColumnsR) I want to see (Default Unpackspec columnsL columnsL, Default Unpackspec columnsR columnsR, Default NullMaker columnsR nullableColumnsR) Tom From heraldhoi at gmail.com Fri Oct 28 09:08:27 2016 From: heraldhoi at gmail.com (Geraldus) Date: Fri, 28 Oct 2016 09:08:27 +0000 Subject: [Haskell-cafe] Haddock, wrapping contexts? In-Reply-To: <20161028090309.GF4593@weber> References: <20161028083857.GC4593@weber> <20161028090309.GF4593@weber> Message-ID: Ah, sorry, got it. пт, 28 окт. 2016 г. в 14:03, Tom Ellis < tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk>: > On Fri, Oct 28, 2016 at 08:53:53AM +0000, Geraldus wrote: > > I see `leftJoin` signature wrapped, but not `leftJoinExplicit` for > > example. I suppose if you will document arguments Haddock will wrap that > > signatures too. > > No, I want to wrap the *context* of leftJoin. 
Currently all three > elements of > the context tuple are on one line: > > (Default Unpackspec columnsL columnsL, Default Unpackspec columnsR > columnsR, Default NullMaker columnsR nullableColumnsR) > > I want to see > > (Default Unpackspec columnsL columnsL, > Default Unpackspec columnsR columnsR, > Default NullMaker columnsR nullableColumnsR) > > Tom > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From monkleyon at googlemail.com Fri Oct 28 17:31:54 2016 From: monkleyon at googlemail.com (MarLinn) Date: Fri, 28 Oct 2016 19:31:54 +0200 Subject: [Haskell-cafe] Haddock, wrapping contexts? In-Reply-To: <20161028083857.GC4593@weber> References: <20161028083857.GC4593@weber> Message-ID: <9378721c-f453-17ad-1456-41041469db57@gmail.com> > Is there any way I can convince Haddock to wrap these contexts? Ideally I > want to see something like > > leftJoin > :: (Default Unpackspec columnsA columnsA, > Default Unpackspec columnsB columnsB, > Default NullMaker columnsB nullableColumnsB) > => [..] This is a bit silly and doesn't directly address the problem, but maybe you could re-package some of the Constraints into a type synonym? (I think you would need some extension like -XConstraintKinds for that) Like this: type DefaultUnpack cols = Default Unpackspec cols cols type DefaultNull cols nCols = Default NullMaker cols nCols leftJoin :: (DefaultUnpack columnsA, DefaultUnpack columnsB, DefaultNull columnsB nullableColumnsB) => [..] Cheers, MarLinn From simon.jakobi at googlemail.com Fri Oct 28 17:41:17 2016 From: simon.jakobi at googlemail.com (Simon Jakobi) Date: Fri, 28 Oct 2016 19:41:17 +0200 Subject: [Haskell-cafe] Haddock, wrapping contexts? 
In-Reply-To: <20161028083857.GC4593@weber> References: <20161028083857.GC4593@weber> Message-ID: I have opened an issue with the same request a while ago: https://github.com/haskell/haddock/issues/472 2016-10-28 10:38 GMT+02:00 Tom Ellis < tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk>: > Some of my type signatures have long contexts and Haddock doesn't wrap > them, > pushing the more interesting information off the right hand side, e.g.: > > https://hackage.haskell.org/package/opaleye-0.5.1.1/docs/ > Opaleye-Join.html > > Is there any way I can convince Haddock to wrap these contexts? Ideally I > want to see something like > > leftJoin > :: (Default Unpackspec columnsA columnsA, > Default Unpackspec columnsB columnsB, > Default NullMaker columnsB nullableColumnsB) > => Query columnsA > -> Query columnsB > -> ((columnsA, columnsB) -> Column PGBool) > -> Query (columnsA, nullableColumnsB) > > Thanks, > > Tom > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Fri Oct 28 21:55:42 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Fri, 28 Oct 2016 22:55:42 +0100 Subject: [Haskell-cafe] Haddock, wrapping contexts? In-Reply-To: References: <20161028083857.GC4593@weber> Message-ID: <20161028215542.GH4593@weber> Thanks! That's really helpful. 
On Fri, Oct 28, 2016 at 07:41:17PM +0200, Simon Jakobi via Haskell-Cafe wrote: > I have opened an issue with the same request a while ago: > https://github.com/haskell/haddock/issues/472 > > 2016-10-28 10:38 GMT+02:00 Tom Ellis < > tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk>: > > > Some of my type signatures have long contexts and Haddock doesn't wrap > > them, > > pushing the more interesting information off the right hand side, e.g.: > > > > https://hackage.haskell.org/package/opaleye-0.5.1.1/docs/ > > Opaleye-Join.html > > > > Is there any way I can convince Haddock to wrap these contexts? Ideally I > > want to see something like > > > > leftJoin > > :: (Default Unpackspec columnsA columnsA, > > Default Unpackspec columnsB columnsB, > > Default NullMaker columnsB nullableColumnsB) > > => Query columnsA > > -> Query columnsB > > -> ((columnsA, columnsB) -> Column PGBool) > > -> Query (columnsA, nullableColumnsB) > > > > Thanks, > > > > Tom From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Fri Oct 28 21:56:17 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Fri, 28 Oct 2016 22:56:17 +0100 Subject: [Haskell-cafe] Haddock, wrapping contexts? In-Reply-To: <9378721c-f453-17ad-1456-41041469db57@gmail.com> References: <20161028083857.GC4593@weber> <9378721c-f453-17ad-1456-41041469db57@gmail.com> Message-ID: <20161028215617.GI4593@weber> On Fri, Oct 28, 2016 at 07:31:54PM +0200, MarLinn via Haskell-Cafe wrote: > >Is there any way I can convince Haddock to wrap these contexts? Ideally I > >want to see something like > > > > leftJoin > > :: (Default Unpackspec columnsA columnsA, > > Default Unpackspec columnsB columnsB, > > Default NullMaker columnsB nullableColumnsB) > > => [..] > > This is a bit silly and doesn't directly address the problem, but > maybe you could re-package some of the Constraints into a type > synonym? 
(I think you would need some extension like > -XConstraintKinds for that) > > Like this: > > type DefaultUnpack cols = Default Unpackspec cols cols > type DefaultNull cols nCols = Default NullMaker cols nCols > > leftJoin :: (DefaultUnpack columnsA, DefaultUnpack columnsB, DefaultNull columnsB nullableColumnsB) > => [..] Thanks MarLinn, Your suggestion is a good one but I'd rather see if the tooling can be fixed. Tom From clintonmead at gmail.com Sat Oct 29 00:54:20 2016 From: clintonmead at gmail.com (Clinton Mead) Date: Sat, 29 Oct 2016 11:54:20 +1100 Subject: [Haskell-cafe] Why doesn't GHC derive these types Message-ID: Consider the following program: {-# LANGUAGE TypeFamilyDependencies #-} data D x type family F t = s | s -> t type instance F (D t) = D (F t) f :: F s -> () f _ = () g :: D (F t) -> () g x = f x main = return () The problem seems to be the call from "g" to "f". We're calling "f" with an argument of type "D (F t)". "f" then has to determine what "s" is in it's signature. We know: 1. "F s ~ D (F t)" (from function call) 2. "D (F t) ~ F (D t)" (from the right hand side of the injective type definition) Therefore we should be able to derive: 3. "F s ~ F (D t)" (type equality is transitive) 4. "s ~ D t" (as F is injective) I suspect the part we're missing in GHC is step 4. I recall reading this somewhere but I can't find where now. However, the paper about injective types says that this style of inference, namely "F a ~ F b => a ~ b" should occur. I quote ( https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/injective-type-families-acm.pdf section 5.1 p125): So, faced with the constraint F α ∼ F β, the inference engine does not in > general unify α := β; so the constraint F α ∼ F β is not solved, and hence > f (g 3) will be rejected. But if we knew that F was injective, we can unify > α := β without guessing. 
> Improvement (a term due to Mark Jones (Jones 1995, 2000)) is a process > that adds extra "derived" equality constraints that may make some extra > unifications apparent, thus allowing inference to proceed further without > having to make guesses. In the case of an injective F, improvement adds α ∼ > β, which the constraint solver can solve by unification. In general, > improvement of wanted constraint is extremely simple: > Definition 11 (Wanted improvement). Given the wanted constraint F σ ∼ F τ > , add the derived wanted constraint σn ∼ τn for each n-injective argument > of F. > Why is this OK? Because if it is possible to prove the original constraint > F σ ∼ F τ , then (by Definition 1) we will also have a proof of σn ∼ τn. So > adding σn ∼ τn as a new wanted constraint does not constrain the solution > space. Why is it beneficial? Because, as we have seen, it may expose > additional guess-free unification opportunities that that solver can > exploit. Am I correct in my assessment of what is happening here with GHC? Is there anyway to get it to compile this program, perhaps with an extension? -------------- next part -------------- An HTML attachment was scrubbed... URL: From clintonmead at gmail.com Sat Oct 29 01:26:08 2016 From: clintonmead at gmail.com (Clinton Mead) Date: Sat, 29 Oct 2016 12:26:08 +1100 Subject: [Haskell-cafe] Why doesn't GHC derive these types In-Reply-To: References: Message-ID: Also is there a type-checker plugin that helps with this? If not, would it be possible to write one, or was there some intentional reason why this inference was not included? On Sat, Oct 29, 2016 at 11:54 AM, Clinton Mead wrote: > Consider the following program: > > {-# LANGUAGE TypeFamilyDependencies #-} > > data D x > > type family F t = s | s -> t > type instance F (D t) = D (F t) > > f :: F s -> () > f _ = () > > g :: D (F t) -> () > g x = f x > > main = return () > > > The problem seems to be the call from "g" to "f". 
We're calling "f" with > an argument of type "D (F t)". "f" then has to determine what "s" is in > it's signature. We know: > > 1. "F s ~ D (F t)" (from function call) > 2. "D (F t) ~ F (D t)" (from the right hand side of the injective type > definition) > > Therefore we should be able to derive: > > 3. "F s ~ F (D t)" (type equality is transitive) > 4. "s ~ D t" (as F is injective) > > I suspect the part we're missing in GHC is step 4. I recall reading this > somewhere but I can't find where now. > > However, the paper about injective types says that this style of > inference, namely "F a ~ F b => a ~ b" should occur. I quote ( > https://www.microsoft.com/en-us/research/wp-content/ > uploads/2016/07/injective-type-families-acm.pdf section 5.1 p125): > > So, faced with the constraint F α ∼ F β, the inference engine does not in >> general unify α := β; so the constraint F α ∼ F β is not solved, and hence >> f (g 3) will be rejected. But if we knew that F was injective, we can unify >> α := β without guessing. > > > >> Improvement (a term due to Mark Jones (Jones 1995, 2000)) is a process >> that adds extra "derived" equality constraints that may make some extra >> unifications apparent, thus allowing inference to proceed further without >> having to make guesses. In the case of an injective F, improvement adds α ∼ >> β, which the constraint solver can solve by unification. In general, >> improvement of wanted constraint is extremely simple: > > > >> Definition 11 (Wanted improvement). Given the wanted constraint F σ ∼ F τ >> , add the derived wanted constraint σn ∼ τn for each n-injective argument >> of F. > > > >> Why is this OK? Because if it is possible to prove the original >> constraint F σ ∼ F τ , then (by Definition 1) we will also have a proof of >> σn ∼ τn. So adding σn ∼ τn as a new wanted constraint does not constrain >> the solution space. Why is it beneficial? 
Because, as we have seen, it may >> expose additional guess-free unification opportunities that that solver can >> exploit. > > > Am I correct in my assessment of what is happening here with GHC? Is there > anyway to get it to compile this program, perhaps with an extension? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From clintonmead at gmail.com Sat Oct 29 02:30:59 2016 From: clintonmead at gmail.com (Clinton Mead) Date: Sat, 29 Oct 2016 13:30:59 +1100 Subject: [Haskell-cafe] Why doesn't GHC derive these types In-Reply-To: References: Message-ID: Sorry for talking to myself, but I think the answer to my question is here: https://ghc.haskell.org/trac/ghc/ticket/10833 On Sat, Oct 29, 2016 at 12:26 PM, Clinton Mead wrote: > Also is there a type-checker plugin that helps with this? If not, would it > be possible to write one, or was there some intentional reason why this > inference was not included? > > On Sat, Oct 29, 2016 at 11:54 AM, Clinton Mead > wrote: > >> Consider the following program: >> >> {-# LANGUAGE TypeFamilyDependencies #-} >> >> data D x >> >> type family F t = s | s -> t >> type instance F (D t) = D (F t) >> >> f :: F s -> () >> f _ = () >> >> g :: D (F t) -> () >> g x = f x >> >> main = return () >> >> >> The problem seems to be the call from "g" to "f". We're calling "f" with >> an argument of type "D (F t)". "f" then has to determine what "s" is in >> it's signature. We know: >> >> 1. "F s ~ D (F t)" (from function call) >> 2. "D (F t) ~ F (D t)" (from the right hand side of the injective type >> definition) >> >> Therefore we should be able to derive: >> >> 3. "F s ~ F (D t)" (type equality is transitive) >> 4. "s ~ D t" (as F is injective) >> >> I suspect the part we're missing in GHC is step 4. I recall reading this >> somewhere but I can't find where now. >> >> However, the paper about injective types says that this style of >> inference, namely "F a ~ F b => a ~ b" should occur. 
I quote ( >> https://www.microsoft.com/en-us/research/wp-content/uploads >> /2016/07/injective-type-families-acm.pdf section 5.1 p125): >> >> So, faced with the constraint F α ∼ F β, the inference engine does not in >>> general unify α := β; so the constraint F α ∼ F β is not solved, and hence >>> f (g 3) will be rejected. But if we knew that F was injective, we can unify >>> α := β without guessing. >> >> >> >>> Improvement (a term due to Mark Jones (Jones 1995, 2000)) is a process >>> that adds extra "derived" equality constraints that may make some extra >>> unifications apparent, thus allowing inference to proceed further without >>> having to make guesses. In the case of an injective F, improvement adds α ∼ >>> β, which the constraint solver can solve by unification. In general, >>> improvement of wanted constraint is extremely simple: >> >> >> >>> Definition 11 (Wanted improvement). Given the wanted constraint F σ ∼ F >>> τ , add the derived wanted constraint σn ∼ τn for each n-injective argument >>> of F. >> >> >> >>> Why is this OK? Because if it is possible to prove the original >>> constraint F σ ∼ F τ , then (by Definition 1) we will also have a proof of >>> σn ∼ τn. So adding σn ∼ τn as a new wanted constraint does not constrain >>> the solution space. Why is it beneficial? Because, as we have seen, it may >>> expose additional guess-free unification opportunities that that solver can >>> exploit. >> >> >> Am I correct in my assessment of what is happening here with GHC? Is >> there anyway to get it to compile this program, perhaps with an extension? >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carette at mcmaster.ca Sat Oct 29 13:31:54 2016 From: carette at mcmaster.ca (Jacques Carette) Date: Sat, 29 Oct 2016 09:31:54 -0400 Subject: [Haskell-cafe] What instances can be derived by GHC ? 
Message-ID: <0388d068-1a60-abb4-51b5-441df33d9f35@mcmaster.ca>

I am trying to find a list of classes which can be derived 'out of the box' with GHC 8.0, i.e. without installing extra packages like derive [1]. There is quite a lot of information on how 'deriving' (inline and standalone) works [2] on the GHC wiki, but sadly no list. Yes, it does say that any class could be made derivable, but that's not useful, as it doesn't say which ones have been. The page describing DerivingStrategies [3] does have a list of 'stock classes' - is this indeed the full list? From the 8.0.1 documentation on Deriving [4], one can extract a list too.

For reference, as far as I can tell, the answer to my question *appears to be* (using extensions as necessary):
- Bounded, Enum, Eq, Ix, Ord, Read, Show, Functor, Foldable, Traversable, Generic, Generic1, Data, Lift.

I do understand that GeneralizedNewtypeDeriving muddies the water. Let's ignore 'newtype' for this purpose, and concentrate only on 'data'.

If my guess is correct, would it make sense to put this information somewhere easy to find, instead of having it be buried so that it needs to be 'dug out'?

Jacques

[1] http://hackage.haskell.org/package/derive
[2] https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/GenericDeriving
[3] https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/DerivingStrategies
[4] http://downloads.haskell.org/~ghc/8.0.1/docs/html/users_guide/glasgow_exts.html#extensions-to-the-deriving-mechanism

From rae at cs.brynmawr.edu Sat Oct 29 21:36:25 2016
From: rae at cs.brynmawr.edu (Richard Eisenberg)
Date: Sat, 29 Oct 2016 17:36:25 -0400
Subject: [Haskell-cafe] What instances can be derived by GHC ?
In-Reply-To: <0388d068-1a60-abb4-51b5-441df33d9f35@mcmaster.ca>
References: <0388d068-1a60-abb4-51b5-441df33d9f35@mcmaster.ca>
Message-ID: <315C1922-0671-4776-B067-B95CF6F9444A@cs.brynmawr.edu>
(There is also Typeable, which can be deprecatedly listed in a `deriving` clause.) I agree that this should be more discoverable. Care to file a documentation ticket so that we put this in the manual? Thanks! Richard > On Oct 29, 2016, at 9:31 AM, Jacques Carette wrote: > > I am trying to find a list of classes which can be derived 'out of the box' with GHC 8.0, i.e. without installing extra packages like derive [1]. There is quite a lot of information on how 'deriving' (inline and standalone) works [2] on the GHC wiki, but sadly no list. Yes, it does say that any class could be made derivable, but that's not useful, as it doesn't which ones have been. The page describing DerivingStrategies does have a list of 'stock classes' - is this indeed the full list? From the 8.0.1 documentation on Deriving [4], one can extract a list too. > > For reference, as far as I can tell, the answer to my question *appears to be* (using extensions as necessary): > - Bounded, Enum, Eq, Ix, Ord, Read, Show, Functor, Foldable, Traversable, Generic, Generic1, Data, Lift. > > I do understand that GeneralizedNewtypeDeriving muddies the water. Let's ignore 'newtype' for this purpose, and only concentrate on 'data'. > > If my guess is correct, would it make sense to put this information somewhere easy to find, instead of having be buried, so that it needs to be 'dug out' ? > > Jacques > > [1] http://hackage.haskell.org/package/derive > [2] https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/GenericDeriving > [3] https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/DerivingStrategies > [4] http://downloads.haskell.org/~ghc/8.0.1/docs/html/users_guide/glasgow_exts.html#extensions-to-the-deriving-mechanism > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. 
From rae at cs.brynmawr.edu Sat Oct 29 21:58:43 2016 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Sat, 29 Oct 2016 17:58:43 -0400 Subject: [Haskell-cafe] Why doesn't GHC derive these types In-Reply-To: References: Message-ID: <119897B9-0EAD-41E7-AEF3-BEFAFB09132B@cs.brynmawr.edu> Your original example looks like a new bug to me. #10833 is about givens (that is, a context that would be specified in a function type signature), and I don't think that's at issue here. I agree with you that your program should be accepted. Might you post a ticket? Thanks, Richard > On Oct 28, 2016, at 10:30 PM, Clinton Mead wrote: > > Sorry for talking to myself, but I think the answer to my question is here: > > https://ghc.haskell.org/trac/ghc/ticket/10833 > > On Sat, Oct 29, 2016 at 12:26 PM, Clinton Mead > wrote: > Also is there a type-checker plugin that helps with this? If not, would it be possible to write one, or was there some intentional reason why this inference was not included? > > On Sat, Oct 29, 2016 at 11:54 AM, Clinton Mead > wrote: > Consider the following program: > > {-# LANGUAGE TypeFamilyDependencies #-} > > data D x > > type family F t = s | s -> t > type instance F (D t) = D (F t) > > f :: F s -> () > f _ = () > > g :: D (F t) -> () > g x = f x > > main = return () > > The problem seems to be the call from "g" to "f". We're calling "f" with an argument of type "D (F t)". "f" then has to determine what "s" is in it's signature. We know: > > 1. "F s ~ D (F t)" (from function call) > 2. "D (F t) ~ F (D t)" (from the right hand side of the injective type definition) > > Therefore we should be able to derive: > > 3. "F s ~ F (D t)" (type equality is transitive) > 4. "s ~ D t" (as F is injective) > > I suspect the part we're missing in GHC is step 4. I recall reading this somewhere but I can't find where now. > > However, the paper about injective types says that this style of inference, namely "F a ~ F b => a ~ b" should occur. 
I quote (https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/injective-type-families-acm.pdf section 5.1 p125): > > So, faced with the constraint F α ∼ F β, the inference engine does not in general unify α := β; so the constraint F α ∼ F β is not solved, and hence f (g 3) will be rejected. But if we knew that F was injective, we can unify α := β without guessing. > > Improvement (a term due to Mark Jones (Jones 1995, 2000)) is a process that adds extra "derived" equality constraints that may make some extra unifications apparent, thus allowing inference to proceed further without having to make guesses. In the case of an injective F, improvement adds α ∼ β, which the constraint solver can solve by unification. In general, improvement of wanted constraint is extremely simple: > > Definition 11 (Wanted improvement). Given the wanted constraint F σ ∼ F τ , add the derived wanted constraint σn ∼ τn for each n-injective argument of F. > > Why is this OK? Because if it is possible to prove the original constraint F σ ∼ F τ , then (by Definition 1) we will also have a proof of σn ∼ τn. So adding σn ∼ τn as a new wanted constraint does not constrain the solution space. Why is it beneficial? Because, as we have seen, it may expose additional guess-free unification opportunities that that solver can exploit. > > Am I correct in my assessment of what is happening here with GHC? Is there anyway to get it to compile this program, perhaps with an extension? > > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From ch.howard at zoho.com Sun Oct 30 05:47:16 2016
From: ch.howard at zoho.com (Christopher Howard)
Date: Sat, 29 Oct 2016 21:47:16 -0800
Subject: [Haskell-cafe] Escape Time Library or Free Software Program?
Message-ID: <5582433e-25ac-39f1-5ec9-d6b3cd288048@zoho.com>

Hi list. Is there a library that separates out escape-time image generation, so that you could plug in whatever formula you wanted for Xnew and Ynew (instead of just a few predefined complex-number formulas like julia)? I've come across a ton of libraries and programs that generate Mandelbrot sets, but I wanted to try autogenerating images based on random polynomials, as Pickover describes in one of his papers. But I know escape-time image generation must be far too well developed by now for me to be building this from scratch.

--
https://qlfiles.net
My PGP public key ID is 0x340EA95A (pgp.mit.edu).

From zkessin at gmail.com Sun Oct 30 08:59:11 2016
From: zkessin at gmail.com (Zachary Kessin)
Date: Sun, 30 Oct 2016 10:59:11 +0200
Subject: [Haskell-cafe] GHCJS Best practices
Message-ID:

We are thinking about building our frontend with GHCJS, and I am trying to figure out what the best way to do it is. The problem is that when I googled variations of "How to architect a GHCJS app" etc. I really didn't get anything, so a few questions:

1) Is anyone building mid-to-large-sized apps in GHCJS?
2) How do you architect them?
3) What libraries are there?

Zach

--
Zach Kessin
SquareTarget
Twitter: @zkessin Skype: zachkessin

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zkessin at gmail.com Sun Oct 30 09:01:18 2016
From: zkessin at gmail.com (Zachary Kessin)
Date: Sun, 30 Oct 2016 11:01:18 +0200
Subject: [Haskell-cafe] GHCJS Bad first impression
Message-ID:

I am starting a project that might use GHCJS, and it seems to me that if someone's first impression of it is typing `stack install` and having it take several hours to build everything, that is not a very welcoming way to get people to use GHCJS. Why not precompiled binaries? Also, while it has been compiling I have seen a lot of warnings.

Zach

--
Zach Kessin
SquareTarget
Twitter: @zkessin Skype: zachkessin

-------------- next part --------------
An HTML attachment was scrubbed...
You raise salient points about the sort of challenges that >> languages will need to confront although a search has left me still >> unsure about PGPUs. Can I ask you to say a bit more about programming >> styles: what Java can't do, what others can do, how that scales? >> > > The fundamental issue is that Java is very much an imperative language > (although books on concurrent programming in Java tend to strongly > recommending immutable data structures whenever practical, because they > are safer to share). > > The basic computational model of (even concurrent) imperative languages > is the RAM: there is a set of threads living in a single address space > where all memory is equally and as easily accessible to all threads. > > Already that's not true. One of the machines sitting on my desk is a > Parallela: 2 ARM cores, 16 RISC cores, there's a single address space > shared by the RISC cores but each of them "owns" a chunk of it and > access is not uniform. Getting information between the ARM cores and > the RISC cores is not trivial. Indeed, one programming model for the > Parallela is OpenCL 1.1, although as they note, > "Creating an API for architectures not even considered during the creation > of a standard is challenging. This can be seen in the case of Epiphany, > which possesses an architecture very different from a GPU, and which > supports functionality not yet supported by a GPU. OpenCL as an API for > Epiphany is good, but not perfect." The thing is that the > Epiphany chip is more *like* a GPU than it is like anything say Java > might want to run on. > > For that matter, there is the IBM "Cell" processor, basically a Power > core and a bunch of RISCish cores, not entirely unlike the Epiphany. > As the Wikipedia page on the Cell notes, "Cell is widely regarded as a > challenging environment for software development". > > Again, Java wants a (1) large (2) flat (3) shared address space, and > that's *not* what Cell delivers. 
The memory space available to each > "SPE" in a Cell is effectively what would have been L1 cache on a more > conventional machine, and transfers between that and main memory are > non-trivial. So Cell memory is (1) small (2) heterogeneous and (3) > partitioned. > > The Science Data Processor for the Square Kilometre Array is still > being designed. As far as I know, they haven't committed to a CPU > architecture yet, and they probably want to leave that pretty late. > Cell might be a candidate, but I suspect they'll not want to spend > much of their software development budget on a "challenging" > architecture. > > Hmm. Scaling. > > Here's the issue. It looks as though the future of scaling is > *lots* of processors, running *slower* than typical desktops, > with things turned down or off as much as possible, so you won't > be able to pull the Parallela/Epiphany trick of always being able > to access another chip's local memory. Any programming model > that relies on large flat shared address spaces is out; message > passing that copies stuff is going to be much easier to manage > than passing a pointer to memory that might be powered off when > you need it; anything that creates tight coupling between the > execution orders of separate processors is going to be a nightmare. > > We're also looking at more things moving into special-purpose > hardware, in order to reduce power costs. It would be nice to be > able to do this without a complete rewrite... > > Coarray Fortran (in the current standard) is an attempt to deal with > the kinds of machines I'm talking about. Whether it's a good attempt > I couldn't say, I'm still trying to get my head around it. (More > precisely, I think I understand what it's about, but I haven't a > clue about how to *use* the feature effectively.) There are people > at Rice who think it could be better. > > Reverting to the subject of declarative/procedural, I recently came > across Lee Naish's "Pawns" language. 
Still very much a prototype, > and he is interested in the semantics, not the syntax. > https://github.com/lee-naish/Pawns > http://people.eng.unimelb.edu.au/lee/papers/pawns/ > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon at joyful.com Sun Oct 30 17:05:00 2016 From: simon at joyful.com (Simon Michael) Date: Sun, 30 Oct 2016 10:05:00 -0700 Subject: [Haskell-cafe] [ANN] hledger 1.0 Message-ID: <1FC6D40F-3D81-4BE1-B38A-5BC1A8CA597C@joyful.com> Attention, attention hledger-folk! Once again, Happy Hallowe'en. After almost ten years of steady development, and one year since our last major release, I am very pleased to announce.. _ _ _ _ ___ _ | |__ | | ___ __| | __ _ ___ _ __ / | / _ \ | | | '_ \| |/ _ \/ _` |/ _` |/ _ \ '__| | || | | | | | | | | | | __/ (_| | (_| | __/ | | || |_| | |_| |_| |_|_|\___|\__,_|\__, |\___|_| |_(_)___/ (_) |___/ hledger's 1.0 release! It's about time! hledger (http://hledger.org) is a cross-platform program for tracking money, time, or any other commodity using double-entry accounting and a simple plain text file format. Inspired by Ledger CLI, hledger provides command-line, curses and web interfaces, and aims to be a reliable, practical tool for daily use. Notable changes since 0.27: - the hledger.org website is simpler, clearer, and more mobile-friendly - docs have been reorganized, with more focussed manuals available in multiple versions, formats and as built-in help - we support the latest GHC (8 and 7.10), stackage snapshots, and libs. (GHC 7.8 and 7.6 support are not currently supported, maintainers welcome.) 
- hledger has migrated from parsec to megaparsec and from String to Text; parsers have been simplified, memory usage is ~30% less on large files, and speed is slightly improved all around
- --pivot (group by arbitrary tag instead of account) and --anon (obfuscate account names) are now supported
- hledger-ui has acquired many new features making it more useful (file editing, filtering, historical/period modes, quick period browsing..)
- hledger-web is more robust and more mobile-friendly
- hledger-api, a simple web API server, has been added
- a new "timedot" file format allows retroactive/approximate time logging
- the project continues to grow. A call for help was sent out last month, and contributor activity is increasing
- a new website, http://plaintextaccounting.org, has been created as a portal and knowledge base for hledger, Ledger, beancount and related tools and practices.

Full release notes: http://hledger.org/release-notes#hledger-1.0

How to install: (Get stack, e.g. from http://haskell-lang.org/get-started)

$ stack install --resolver=nightly hledger [hledger-ui] [hledger-web] [hledger-api]
$ ~/.local/bin/hledger --version

or see http://hledger.org/download for more install options, including cabal, OS packages and Windows binaries.

Contributors to this release: Simon Michael, Dominik Süß, Thomas R. Koll, Moritz Kiefer, jungle-boogie, Sergei Trofimovich, Malte Brandy, Sam Doshi, Mitchell Rosen, Hans-Peter Deifel, Brian Scott, and Andrew Jones.

How to get and give help: I hope you enjoy these tools and that they help you achieve your goals. The hledger project is by now too large for one person to do it justice, so it's great to see our contributor community growing. If you like hledger, your support and participation are welcome! Our IRC channel is #hledger on Freenode, and you can find out more at http://hledger.org .

Best, -Simon
-------------- next part -------------- An HTML attachment was scrubbed...
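For a first taste of the plain text format mentioned above, a minimal session might look like the sketch below. The journal entries are made up for illustration, and the final command assumes hledger was installed as described; if it isn't on the PATH yet, the script just says so:

```shell
# Write a made-up two-posting transaction in hledger's journal format.
# The amount on the last posting is omitted; hledger balances it automatically.
cat > sample.journal <<'EOF'
2016/10/30 pumpkin shopping
    expenses:food           $12
    assets:cash
EOF

# Print a balance report if hledger is available (e.g. after the
# stack install shown above); otherwise note that it isn't installed.
if command -v hledger >/dev/null 2>&1; then
  hledger -f sample.journal balance
else
  echo "hledger not installed yet; see the install instructions above"
fi
```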
URL: From rik at dcs.bbk.ac.uk Sun Oct 30 17:25:50 2016 From: rik at dcs.bbk.ac.uk (Rik Howard) Date: Sun, 30 Oct 2016 17:25:50 +0000 Subject: [Haskell-cafe] Haskell-Cafe Digest, Vol 158, Issue 29 In-Reply-To: References: <36384d09-9d23-265c-5f4e-e379cb6a0bb7@cs.otago.ac.nz> Message-ID: All, thank you for the feedback and the bandwidth. It has been invaluable and is appreciated. Regards Rik On 30 October 2016 at 16:44, Rik Howard wrote: > thanks for the reply. Conceptually I like the idea of a single address > space; it can then be a matter of configuration as to whether what you're > addressing is another local process, processor or something more remote. > Some assumptions about what can be expected from local resources need to be > dropped but I believe that it works in other situations. Your point about > not wanting to have to rewrite when the underlying platform evolves seems > relevant. Perhaps that suggests that a language, while needing to be aware > of its environment, oughtn't to shape itself entirely for that > environment. While we're on the subject of rewrites, that is the fate of > the WIP. I was wrong. > > > On 28 October 2016 at 01:38, Richard A. O'Keefe wrote: > >> >> >> On 28/10/16 8:41 AM, Rik Howard wrote: >> >>> Any novelty in the note would only ever be in the way that the mix is >>> provided. You raise salient points about the sort of challenges that >>> languages will need to confront although a search has left me still >>> unsure about GPGPUs. Can I ask you to say a bit more about programming >>> styles: what Java can't do, what others can do, how that scales? >>> >> >> The fundamental issue is that Java is very much an imperative language >> (although books on concurrent programming in Java tend to strongly >> recommend immutable data structures whenever practical, because they >> are safer to share).
>> >> The basic computational model of (even concurrent) imperative languages >> is the RAM: there is a set of threads living in a single address space >> where all memory is equally and as easily accessible to all threads. >> >> Already that's not true. One of the machines sitting on my desk is a >> Parallela: 2 ARM cores, 16 RISC cores, there's a single address space >> shared by the RISC cores but each of them "owns" a chunk of it and >> access is not uniform. Getting information between the ARM cores and >> the RISC cores is not trivial. Indeed, one programming model for the >> Parallela is OpenCL 1.1, although as they note, >> "Creating an API for architectures not even considered during the >> creation of a standard is challenging. This can be seen in the case of >> Epiphany, which possesses an architecture very different from a GPU, and >> which supports functionality not yet supported by a GPU. OpenCL as an API >> for Epiphany is good, but not perfect." The thing is that the >> Epiphany chip is more *like* a GPU than it is like anything say Java >> might want to run on. >> >> For that matter, there is the IBM "Cell" processor, basically a Power >> core and a bunch of RISCish cores, not entirely unlike the Epiphany. >> As the Wikipedia page on the Cell notes, "Cell is widely regarded as a >> challenging environment for software development". >> >> Again, Java wants a (1) large (2) flat (3) shared address space, and >> that's *not* what Cell delivers. The memory space available to each >> "SPE" in a Cell is effectively what would have been L1 cache on a more >> conventional machine, and transfers between that and main memory are >> non-trivial. So Cell memory is (1) small (2) heterogeneous and (3) >> partitioned. >> >> The Science Data Processor for the Square Kilometre Array is still >> being designed. As far as I know, they haven't committed to a CPU >> architecture yet, and they probably want to leave that pretty late. 
>> Cell might be a candidate, but I suspect they'll not want to spend >> much of their software development budget on a "challenging" >> architecture. >> >> Hmm. Scaling. >> >> Here's the issue. It looks as though the future of scaling is >> *lots* of processors, running *slower* than typical desktops, >> with things turned down or off as much as possible, so you won't >> be able to pull the Parallela/Epiphany trick of always being able >> to access another chip's local memory. Any programming model >> that relies on large flat shared address spaces is out; message >> passing that copies stuff is going to be much easier to manage >> than passing a pointer to memory that might be powered off when >> you need it; anything that creates tight coupling between the >> execution orders of separate processors is going to be a nightmare. >> >> We're also looking at more things moving into special-purpose >> hardware, in order to reduce power costs. It would be nice to be >> able to do this without a complete rewrite... >> >> Coarray Fortran (in the current standard) is an attempt to deal with >> the kinds of machines I'm talking about. Whether it's a good attempt >> I couldn't say, I'm still trying to get my head around it. (More >> precisely, I think I understand what it's about, but I haven't a >> clue about how to *use* the feature effectively.) There are people >> at Rice who think it could be better. >> >> Reverting to the subject of declarative/procedural, I recently came >> across Lee Naish's "Pawns" language. Still very much a prototype, >> and he is interested in the semantics, not the syntax. >> https://github.com/lee-naish/Pawns >> http://people.eng.unimelb.edu.au/lee/papers/pawns/ >> >> >> >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From djohnson.m at gmail.com Sun Oct 30 18:42:56 2016 From: djohnson.m at gmail.com (David Johnson) Date: Sun, 30 Oct 2016 13:42:56 -0500 Subject: [Haskell-cafe] GHCJS Bad first impression In-Reply-To: References: Message-ID: Zachary, I highly recommend using the nix package manager for GHCJS projects. Nix is very easy to install and uninstall (rm -rf /nix). To install: curl https://nixos.org/nix/install | sh To be put into a shell where ghcjs exists along with a few dependencies, run: nix-shell -p "haskell.packages.ghcjs.ghcWithPackages (pkgs: with pkgs; [ghcjs-base ghcjs-dom])" From here the only real framework that exists is reflex-dom. There is also the reflex-platform project, which uses the nix package manager (https://github.com/reflex-frp/reflex-platform). - David On Sun, Oct 30, 2016 at 4:01 AM, Zachary Kessin wrote: > I am starting a project that might use ghcjs and it seems to me that if > someone's first impression of it is typing stack install and having it take > several hours to build everything, that is not a very welcoming way to get > people to use ghcjs. Why not precompiled binaries? > > Also, while it has been compiling I have seen a lot of warnings. > > Zach > > -- > Zach Kessin > SquareTarget > Twitter: @zkessin > Skype: zachkessin > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. > -- Cell: 1.630.740.8204 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From wuzzeb at gmail.com Mon Oct 31 01:31:11 2016 From: wuzzeb at gmail.com (John Lenz) Date: Sun, 30 Oct 2016 20:31:11 -0500 Subject: [Haskell-cafe] GHCJS Best practices In-Reply-To: References: Message-ID: I recently wrote up some of my thoughts on using react-flux with GHCJS here: http://blog.wuzzeb.org/full-stack-web-haskell/client.html On Sun, Oct 30, 2016 at 3:59 AM, Zachary Kessin wrote: > We are thinking about building our frontend with GHCJS and I am trying to > figure out what is the best way to do it. The problem is that when I googled > variations of "How to architect a GHCJS app" etc. I really didn't get > anything, > > so a few questions: > 1) Is anyone building mid-large sized apps in GHCJS? > 2) How do you architect them? > 3) What libraries are there? > > Zach > > -- > Zach Kessin > SquareTarget > Twitter: @zkessin > Skype: zachkessin > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ok at cs.otago.ac.nz Mon Oct 31 04:07:15 2016 From: ok at cs.otago.ac.nz (Richard A. O'Keefe) Date: Mon, 31 Oct 2016 17:07:15 +1300 Subject: [Haskell-cafe] Haskell-Cafe Digest, Vol 158, Issue 29 In-Reply-To: References: <36384d09-9d23-265c-5f4e-e379cb6a0bb7@cs.otago.ac.nz> Message-ID: <706f4620-d604-f87e-dffc-b939317b76cd@cs.otago.ac.nz> On 31/10/16 5:44 AM, Rik Howard wrote: > thanks for the reply. Conceptually I like the idea of a single address > space; it can then be a matter of configuration as to whether what > you're addressing is another local process, processor or something more > remote. The world doesn't care what you or I happen to like. I completely agree in *liking* a single address space.
But it's not true to what is *there*, and if you program for that model, you're going to get terrible performance. I've just been attending a 1-day introduction to our national HPC system. There are two clusters. One has about 3,300 cores and the other over 6,000. One is POWER+AIX, the other Intel+Linux. One has Xeon Phis (amongst other things), the other does not. Hint: neither of them has a single address space, and while we know about software distributed memory (indeed, one of the people here has published innovative research in that area), it is *not* like a single address space and is not notably easy to use. It's possible to "tame" a single address space. When you start to learn Ada, you *think* you're dealing with a single address space language, until you learn about partitioning programs for distributed execution. For that matter, Occam has the same property (which is one of the reasons why Occam didn't have pointers, so that it would be *logically* a matter of indifference whether two concurrent processors were on the same chip or not). But even when communication is disguised as memory accessing, it's still communication, it still *costs* like communication, and if you want high performance, you had better *measure* it as communication. One of the presenters was working with a million lines of Fortran, almost all of it written by other people. How do we make that safe? From iricanaycan at gmail.com Mon Oct 31 11:59:16 2016 From: iricanaycan at gmail.com (Aycan İrican) Date: Mon, 31 Oct 2016 14:59:16 +0300 Subject: [Haskell-cafe] GHCJS Bad first impression In-Reply-To: References: Message-ID: <612AF2B8-4175-4244-95C7-15D096BC7628@gmail.com> > On 30 Oct 2016, at 21:42, David Johnson wrote: > > Zachary, > > I highly recommend using the nix package manager for GHCJS projects. > > Nix is very easy to install and uninstall (rm -rf /nix).
> > To install: curl https://nixos.org/nix/install > | sh > > > To be put into a shell where ghcjs exists along with a few dependencies, run, > nix-shell -p "haskell.packages.ghcjs.ghcWithPackages (pkgs: with pkgs; [ghcjs-base ghcjs-dom])” > Unfortunately it is failing with: ``` building path(s) ‘/nix/store/bcdkyjdrvx1jmqzkh031h1zz0x3sd4q6-entropy-0.3.7’ setupCompilerEnvironmentPhase Build with /nix/store/nxnnsdk5njk6913z017h0hf6v5f3waah-ghcjs-0.2.0. unpacking sources unpacking source archive /nix/store/86ny46814x0qw0srcgywwgzsz1myhpnn-entropy-0.3.7.tar.gz source root is entropy-0.3.7 setting SOURCE_DATE_EPOCH to timestamp 1434401149 of file entropy-0.3.7/System/EntropyXen.hs patching sources compileBuildDriverPhase setupCompileFlags: -package-db=/tmp/nix-build-entropy-0.3.7.drv-2/package.conf.d -j1 [1 of 1] Compiling Main ( Setup.hs, /tmp/nix-build-entropy-0.3.7.drv-2/Main.o ) Linking Setup ... configuring configureFlags: --verbose --prefix=/nix/store/bcdkyjdrvx1jmqzkh031h1zz0x3sd4q6-entropy-0.3.7 --libdir=$prefix/lib/$compiler --libsubdir=$pkgid --with-gcc=gcc --package-db=/tmp/nix-build-entropy-0.3.7.drv-2/package.conf.d --ghc-option=-opt l=-Wl,-rpath=/nix/store/bcdkyjdrvx1jmqzkh031h1zz0x3sd4q6-entropy-0.3.7/lib/ghcjs-0.2.0/entropy-0.3.7 --enable-split-objs --disable-library-profiling --disable-executable-profiling --enable-shared --enable-library-vanilla --enable-executab le-dynamic --disable-tests --with-hsc2hs=/nix/store/mcfx2csfnjsqk83h64jw0w1a4df48rwp-ghc-7.10.3/bin/hsc2hs --ghcjs Configuring entropy-0.3.7... 
Flags chosen: halvm=False Dependency base >=4.3 && <5: using base-4.8.0.0 Dependency bytestring -any: using bytestring-0.10.6.0 Dependency unix -any: using unix-2.7.1.0 Using Cabal-1.22.5.0 compiled by ghc-7.10 Using compiler: ghcjs-0.2.0 Using install prefix: /nix/store/bcdkyjdrvx1jmqzkh031h1zz0x3sd4q6-entropy-0.3.7 Binaries installed in: /nix/store/bcdkyjdrvx1jmqzkh031h1zz0x3sd4q6-entropy-0.3.7/bin Libraries installed in: /nix/store/bcdkyjdrvx1jmqzkh031h1zz0x3sd4q6-entropy-0.3.7/lib/ghcjs-0.2.0/entropy-0.3.7 Private binaries installed in: /nix/store/bcdkyjdrvx1jmqzkh031h1zz0x3sd4q6-entropy-0.3.7/libexec Data files installed in: /nix/store/bcdkyjdrvx1jmqzkh031h1zz0x3sd4q6-entropy-0.3.7/share/x86_64-linux-ghcjs-0.2.0-ghc7_10_3/entropy-0.3.7 Documentation installed in: /nix/store/bcdkyjdrvx1jmqzkh031h1zz0x3sd4q6-entropy-0.3.7/share/doc/x86_64-linux-ghcjs-0.2.0-ghc7_10_3/entropy-0.3.7 Configuration files installed in: /nix/store/bcdkyjdrvx1jmqzkh031h1zz0x3sd4q6-entropy-0.3.7/etc No alex found Using ar found on system at: /nix/store/d61gfhj50bfrrlvp4jzdxmsap3izsvyc-binutils-2.27/bin/ar No c2hs found No cpphs found Using gcc version 5.4.0 given by user at: /nix/store/wckiwf1m333akbm3d7pyrj57g3i39367-gcc-wrapper-5.4.0/bin/gcc No ghc found No ghc-pkg found Using ghcjs version 0.2.0 found on system at: /nix/store/nxnnsdk5njk6913z017h0hf6v5f3waah-ghcjs-0.2.0/bin/ghcjs Using ghcjs-pkg version 7.10.3 found on system at: /nix/store/nxnnsdk5njk6913z017h0hf6v5f3waah-ghcjs-0.2.0/bin/ghcjs-pkg No greencard found Using haddock version 2.16.1 found on system at: /nix/store/nxnnsdk5njk6913z017h0hf6v5f3waah-ghcjs-0.2.0/bin/haddock-ghcjs No happy found Using haskell-suite found on system at: haskell-suite-dummy-location Using haskell-suite-pkg found on system at: haskell-suite-pkg-dummy-location No hmake found No hpc found Using hsc2hs version 0.67 given by user at: /nix/store/mcfx2csfnjsqk83h64jw0w1a4df48rwp-ghc-7.10.3/bin/hsc2hs No hscolour found No jhc found Using ld found on 
system at: /nix/store/wckiwf1m333akbm3d7pyrj57g3i39367-gcc-wrapper-5.4.0/bin/ld No lhc found No lhc-pkg found No pkg-config found Using strip version 2.27 found on system at: /nix/store/d61gfhj50bfrrlvp4jzdxmsap3izsvyc-binutils-2.27/bin/strip Using tar found on system at: /nix/store/p6jsf52izfpgb758xvdcw19byj644iak-gnutar-1.29/bin/tar No uhc found building Setup: Could not determine C compiler builder for ‘/nix/store/8ig44jskrggpnsqmiwjr3y7kz8aib9jj-entropy-0.3.7.drv’ failed with exit code 1 cannot build derivation ‘/nix/store/658g4wwinrpiy4qc23r9rzf0abab6q13-ghcjs-0.2.0.drv’: 1 dependencies couldn't be built error: build of ‘/nix/store/658g4wwinrpiy4qc23r9rzf0abab6q13-ghcjs-0.2.0.drv’ failed /run/current-system/sw/bin/nix-shell: failed to build all dependencies ``` I also tried it with 16.09 which also failed. — aycan From alan.zimm at gmail.com Mon Oct 31 19:39:27 2016 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Mon, 31 Oct 2016 21:39:27 +0200 Subject: [Haskell-cafe] PSA: gcc-6.2.0 breaks linking in ghc 7.10.3/8.0.1 Message-ID: On my Debian testing machine, after gcc updated to gcc (Debian 6.2.0-9) 6.2.0 20161019 I started getting linker errors complaining about -fPIC Herbert Valerio Riedel pointed out on #hackage that the settings file has to be updated to cope with this. So, the files /opt/ghc/7.10.3/lib/ghc-7.10.3/settings /opt/ghc/8.0.1/lib/ghc-8.0.1/settings need to be updated to have the following values set ("C compiler flags", "-fno-PIE -fno-stack-protector"), ("C compiler link flags", "-no-pie"), ("ld flags", "-no-pie"), The precise location of these files may differ on your machine. Alan -------------- next part -------------- An HTML attachment was scrubbed... 
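Since editing those settings files by hand is fiddly, the change Alan describes can also be scripted. The sketch below runs against a mock settings file so it can be tried safely; the mock contents and the SETTINGS path are illustrative, not the real file — point SETTINGS at your actual settings file (e.g. one of the paths above) on your own machine, and inspect the .bak backup afterwards:

```shell
# Demonstrate the settings edit on a mock file; replace SETTINGS with the
# real path (e.g. /opt/ghc/8.0.1/lib/ghc-8.0.1/settings) on your machine.
SETTINGS=mock-settings
cat > "$SETTINGS" <<'EOF'
[("C compiler flags", "-fno-stack-protector"),
 ("C compiler link flags", ""),
 ("ld flags", "")]
EOF

# Rewrite the three fields in place; sed keeps a .bak backup alongside.
sed -i.bak \
  -e 's/("C compiler flags", "[^"]*")/("C compiler flags", "-fno-PIE -fno-stack-protector")/' \
  -e 's/("C compiler link flags", "[^"]*")/("C compiler link flags", "-no-pie")/' \
  -e 's/("ld flags", "[^"]*")/("ld flags", "-no-pie")/' \
  "$SETTINGS"

# Show the patched result.
cat "$SETTINGS"
```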
URL: From slyich at gmail.com Mon Oct 31 20:09:10 2016 From: slyich at gmail.com (Sergei Trofimovich) Date: Mon, 31 Oct 2016 20:09:10 +0000 Subject: [Haskell-cafe] PSA: gcc-6.2.0 breaks linking in ghc 7.10.3/8.0.1 In-Reply-To: References: Message-ID: <20161031200910.25baf4c5@sf> On Mon, 31 Oct 2016 21:39:27 +0200 "Alan & Kim Zimmerman" wrote: > On my Debian testing machine, after gcc updated to > > gcc (Debian 6.2.0-9) 6.2.0 20161019 > > I started getting linker errors complaining about -fPIC > > Herbert Valerio Riedel pointed out on #hackage that the settings file has > to be updated to cope with this. > > So, the files > > /opt/ghc/7.10.3/lib/ghc-7.10.3/settings > /opt/ghc/8.0.1/lib/ghc-8.0.1/settings > > need to be updated to have the following values set > > ("C compiler flags", "-fno-PIE -fno-stack-protector"), > ("C compiler link flags", "-no-pie"), > ("ld flags", "-no-pie"), > > The precise location of these files may differ on your machine. > > Alan The bug is https://ghc.haskell.org/trac/ghc/ticket/11834 Note that it is not any gcc-6.2.0, but one built with --enable-default-pie. -- Sergei -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 163 bytes Desc: OpenPGP digital signature URL: From jo at durchholz.org Mon Oct 31 20:54:54 2016 From: jo at durchholz.org (Joachim Durchholz) Date: Mon, 31 Oct 2016 21:54:54 +0100 Subject: [Haskell-cafe] Haskell-Cafe Digest, Vol 158, Issue 29 In-Reply-To: <706f4620-d604-f87e-dffc-b939317b76cd@cs.otago.ac.nz> References: <36384d09-9d23-265c-5f4e-e379cb6a0bb7@cs.otago.ac.nz> <706f4620-d604-f87e-dffc-b939317b76cd@cs.otago.ac.nz> Message-ID: <8f85862d-d14a-342b-b489-ad394afe92cb@durchholz.org> On 31.10.2016 at 05:07, Richard A.
O'Keefe: > But even when communication is disguised as memory accessing, > it's still communication, it still *costs* like communication, > and if you want high performance, you had better *measure* it > as communication. And you need to control memory coherence, i.e. you need to define what data goes together with what processes. In an ideal world, the compiler would be smart enough to do that for you. I have been reading fantasies that FPLs with their immutable data structures are better suited for this kind of automation; has compiler research progressed enough to make that a realistic option? Without that, you'd code explicit multithreading, which means that communication does not look like memory access at all.