From blamario at ciktel.net Sat Oct 1 01:09:38 2016 From: blamario at ciktel.net (=?UTF-8?Q?Mario_Bla=c5=beevi=c4=87?=) Date: Fri, 30 Sep 2016 21:09:38 -0400 Subject: Proposal: add Monoid1 and Semigroup1 classes In-Reply-To: References: Message-ID: <4375b60a-530a-d5b1-44b6-596e5ae4536b@ciktel.net> On 2016-09-30 07:25 PM, David Feuer wrote: > > I've been playing around with the idea of writing Haskell 2010 type > classes for finite sequences and non-empty sequences, somewhat similar > to Michael Snoyman's Sequence class in mono-traversable. These are > naturally based on Monoid1 and Semigroup1, which I think belong in base. > If the proposal is to add these directly to base, I'm against it. New classes should first be released in a regular package, and only moved to base once they prove useful. > class Semigroup1 f where > (<<>>) :: f a -> f a -> f a > class Semigroup1 f => Monoid1 f where > mempty1 :: f a > > Then I can write > > class (Monoid1 t, Traversable t) => Sequence t where > singleton :: a -> t a > -- and other less-critical methods > > class (Semigroup1 t, Traversable1 t) => NESequence where > singleton1 :: a -> t a > -- etc. > From david.feuer at gmail.com Sat Oct 1 01:49:48 2016 From: david.feuer at gmail.com (David Feuer) Date: Fri, 30 Sep 2016 21:49:48 -0400 Subject: Proposal: add Monoid1 and Semigroup1 classes In-Reply-To: <4375b60a-530a-d5b1-44b6-596e5ae4536b@ciktel.net> References: <4375b60a-530a-d5b1-44b6-596e5ae4536b@ciktel.net> Message-ID: It seems to me that Data.Functor.Classes is the natural place for these, but I guess I could stick them somewhere else. On Sep 30, 2016 9:09 PM, "Mario Blažević" wrote: > On 2016-09-30 07:25 PM, David Feuer wrote: > >> >> I've been playing around with the idea of writing Haskell 2010 type >> classes for finite sequences and non-empty sequences, somewhat similar >> to Michael Snoyman's Sequence class in mono-traversable. These are >> naturally based on Monoid1 and Semigroup1, which I think belong in base. 
>> >> > If the proposal is to add these directly to base, I'm against it. New > classes should first be released in a regular package, and only moved to > base once they prove useful. > > > class Semigroup1 f where >> (<<>>) :: f a -> f a -> f a >> class Semigroup1 f => Monoid1 f where >> mempty1 :: f a >> >> Then I can write >> >> class (Monoid1 t, Traversable t) => Sequence t where >> singleton :: a -> t a >> -- and other less-critical methods >> >> class (Semigroup1 t, Traversable1 t) => NESequence where >> singleton1 :: a -> t a >> -- etc. >> >> > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekmett at gmail.com Sat Oct 1 08:07:51 2016 From: ekmett at gmail.com (Edward Kmett) Date: Sat, 1 Oct 2016 04:07:51 -0400 Subject: Proposal: add Monoid1 and Semigroup1 classes In-Reply-To: References: Message-ID: I'm somewhat weakly against these, simply because they haven't seen broad adoption in the wild in any of the attempts to introduce them elsewhere, and they don't quite fit the naming convention of the other Foo1 classes in Data.Functor.Classes Eq1 f says more or less that Eq a => Eq (f a). Semigroup1 in your proposal makes a stronger claim. Semigroup1 f is saying forall a. (f a) is a semigroup parametrically. Both of these constructions could be useful, but they ARE different constructions. If folks had actually been using, say, the Plus and Alt classes from semigroupoids or the like more or less at all pretty much anywhere, I could maybe argue towards bringing them up towards base, but I've seen almost zero adoption of the ideas over multiple years -- and these represent yet _another_ point in the design space where we talk about semigroupal and monoidal structures where f is a Functor instead. 
=/ Many points in the design space, and little demonstrated will for adoption seems to steer me to think that the community isn't ready to pick one and enshrine it some place central yet. Overall, -1. -Edward On Fri, Sep 30, 2016 at 7:25 PM, David Feuer wrote: > I've been playing around with the idea of writing Haskell 2010 type > classes for finite sequences and non-empty sequences, somewhat similar to > Michael Snoyman's Sequence class in mono-traversable. These are naturally > based on Monoid1 and Semigroup1, which I think belong in base. > > class Semigroup1 f where > (<<>>) :: f a -> f a -> f a > class Semigroup1 f => Monoid1 f where > mempty1 :: f a > > Then I can write > > class (Monoid1 t, Traversable t) => Sequence t where > singleton :: a -> t a > -- and other less-critical methods > > class (Semigroup1 t, Traversable1 t) => NESequence where > singleton1 :: a -> t a > -- etc. > > I can, of course, just write my own, but I don't think I'm the only one > using such. > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From blamario at ciktel.net Sat Oct 1 15:08:06 2016 From: blamario at ciktel.net (Mario Blažević) Date: Sat, 1 Oct 2016 11:08:06 -0400 Subject: Proposal: add Monoid1 and Semigroup1 classes In-Reply-To: References: Message-ID: On 2016-10-01 04:07 AM, Edward Kmett wrote: > I'm somewhat weakly against these, simply because they haven't seen > broad adoption in the wild in any of the attempts to introduce them > elsewhere, and they don't quite fit the naming convention of the other > Foo1 classes in Data.Functor.Classes > > Eq1 f says more or less that Eq a => Eq (f a). > > Semigroup1 in your proposal makes a stronger claim. Semigroup1 f is > saying forall a. (f a) is a semigroup parametrically. 
Both of these > constructions could be useful, but they ARE different constructions. The standard fully parametric classes like Functor and Monad have no suffix at all. It makes sense to reserve the suffix "1" for non-parametric lifting classes. Can you suggest a different naming scheme for parametric classes of a higher order? I'm also guilty of abusing the suffix "1", at least provisionally, but these are different beasts yet again: -- | Equivalent of 'Functor' for rank 2 data types class Functor1 g where fmap1 :: (forall a. p a -> q a) -> g p -> g q https://github.com/blamario/grampa/blob/master/Text/Grampa/Classes.hs What would be a proper suffix here? I guess Functor2 would make sense, for a rank-2 type? > > If folks had actually been using, say, the Plus and Alt classes from > semigroupoids or the like more or less at all pretty much anywhere, I > could maybe argue towards bringing them up towards base, but I've seen > almost zero adoption of the ideas over multiple years -- and these > represent yet _another_ point in the design space where we talk about > semigroupal and monoidal structures where f is a Functor instead. =/ > > Many points in the design space, and little demonstrated will for > adoption seems to steers me to think that the community isn't ready to > pick one and enshrine it some place central yet. > > Overall, -1. > > -Edward > > On Fri, Sep 30, 2016 at 7:25 PM, David Feuer > wrote: > > I've been playing around with the idea of writing Haskell 2010 > type classes for finite sequences and non-empty sequences, > somewhat similar to Michael Snoyman's Sequence class in > mono-traversable. These are naturally based on Monoid1 and > Semigroup1, which I think belong in base. 
> > class Semigroup1 f where > (<<>>) :: f a -> f a -> f a > class Semigroup1 f => Monoid1 f where > mempty1 :: f a > > Then I can write > > class (Monoid1 t, Traversable t) => Sequence t where > singleton :: a -> t a > -- and other less-critical methods > > class (Semigroup1 t, Traversable1 t) => NESequence where > singleton1 :: a -> t a > -- etc. > > I can, of course, just write my own, but I don't think I'm the > only one using such. > > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > > > > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries From ekmett at gmail.com Sat Oct 1 21:26:12 2016 From: ekmett at gmail.com (Edward Kmett) Date: Sat, 1 Oct 2016 17:26:12 -0400 Subject: Proposal: add Monoid1 and Semigroup1 classes In-Reply-To: References: Message-ID: Re 2 for rank-2, there is already precedent for using 2 for lifting over two arguments, so semantic confusion sadly remains: E.g. Eq2 p means Eq a, Eq b => Eq (p a b) or Eq2 p means Eq a => Eq1 (p a) -Edward On Sat, Oct 1, 2016 at 11:08 AM, Mario Blažević wrote: > On 2016-10-01 04:07 AM, Edward Kmett wrote: > >> I'm somewhat weakly against these, simply because they haven't seen broad >> adoption in the wild in any of the attempts to introduce them elsewhere, >> and they don't quite fit the naming convention of the other Foo1 classes in >> Data.Functor.Classes >> >> Eq1 f says more or less that Eq a => Eq (f a). >> >> Semigroup1 in your proposal makes a stronger claim. Semgiroup1 f is >> saying forall a. (f a) is a semigroup parametrically. Both of these >> constructions could be useful, but they ARE different constructions. >> > > The standard fully parametric classes like Functor and Monad have no > suffix at all. 
It makes sense to reserve the suffix "1" for non-parametric > lifting classes. Can you suggest a different naming scheme for parametric > classes of a higher order? > > I'm also guilty of abusing the suffix "1", at least provisionally, but > these are different beasts yet again: > > -- | Equivalent of 'Functor' for rank 2 data types > class Functor1 g where > fmap1 :: (forall a. p a -> q a) -> g p -> g q > > https://github.com/blamario/grampa/blob/master/Text/Grampa/Classes.hs > > What would be a proper suffix here? I guess Functor2 would make sense, > for a rank-2 type? > > > >> If folks had actually been using, say, the Plus and Alt classes from >> semigroupoids or the like more or less at all pretty much anywhere, I could >> maybe argue towards bringing them up towards base, but I've seen almost >> zero adoption of the ideas over multiple years -- and these represent yet >> _another_ point in the design space where we talk about semigroupal and >> monoidal structures where f is a Functor instead. =/ >> >> Many points in the design space, and little demonstrated will for >> adoption seems to steers me to think that the community isn't ready to pick >> one and enshrine it some place central yet. >> >> Overall, -1. >> >> -Edward >> >> On Fri, Sep 30, 2016 at 7:25 PM, David Feuer > > wrote: >> >> I've been playing around with the idea of writing Haskell 2010 >> type classes for finite sequences and non-empty sequences, >> somewhat similar to Michael Snoyman's Sequence class in >> mono-traversable. These are naturally based on Monoid1 and >> Semigroup1, which I think belong in base. >> >> class Semigroup1 f where >> (<<>>) :: f a -> f a -> f a >> class Semigroup1 f => Monoid1 f where >> mempty1 :: f a >> >> Then I can write >> >> class (Monoid1 t, Traversable t) => Sequence t where >> singleton :: a -> t a >> -- and other less-critical methods >> >> class (Semigroup1 t, Traversable1 t) => NESequence where >> singleton1 :: a -> t a >> -- etc. 
>> >> I can, of course, just write my own, but I don't think I'm the >> only one using such. >> >> >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> >> >> >> >> >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnw at newartisans.com Sat Oct 1 23:38:44 2016 From: johnw at newartisans.com (John Wiegley) Date: Sat, 01 Oct 2016 16:38:44 -0700 Subject: Proposal: add Monoid1 and Semigroup1 classes In-Reply-To: <4375b60a-530a-d5b1-44b6-596e5ae4536b@ciktel.net> ("Mario \=\?utf-8\?B\?Qmxhxb5ldmnEhyIncw\=\=\?\= message of "Fri, 30 Sep 2016 21:09:38 -0400") References: <4375b60a-530a-d5b1-44b6-596e5ae4536b@ciktel.net> Message-ID: >>>>> "MB" == Mario Blažević writes: MB> If the proposal is to add these directly to base, I'm against it. New MB> classes should first be released in a regular package, and only moved to MB> base once they prove useful. I'd like to second this. I like the ideas, and would like to see them develop; but not in base as the starting place. -- John Wiegley GPG fingerprint = 4710 CF98 AF9B 327B B80F http://newartisans.com 60E1 46C4 BD1A 7AC1 4BA2 From winterkoninkje at gmail.com Sun Oct 2 02:07:16 2016 From: winterkoninkje at gmail.com (wren romano) Date: Sat, 1 Oct 2016 19:07:16 -0700 Subject: Numeric read seems too strict In-Reply-To: References: Message-ID: On Mon, Sep 12, 2016 at 11:03 AM, David Feuer wrote: > By the way, I believe we should be able to read numbers more efficiently by > parsing them directly instead of lexing first. 
We have to deal with > parentheses, white space, and signs uniformly for all number types. Then > specialized foldl'-style code *should* be able to parse integral and > fractional numbers faster than any lex-first scheme. I follow the part about parentheses and negations, but I'm not sure I get the rest of what you mean. E.g., I'm not sure how any parser could be faster than what bytestring-lexing does for Fractional and Integral types (ignoring the unoptimized hex and octal functions). What am I missing? -- Live well, ~wren From wren at community.haskell.org Sun Oct 2 02:15:45 2016 From: wren at community.haskell.org (wren romano) Date: Sat, 1 Oct 2016 19:15:45 -0700 Subject: Generalise type of deleteBy In-Reply-To: <1473639103.6084.3.camel@joachim-breitner.de> References: <1473639103.6084.3.camel@joachim-breitner.de> Message-ID: On Sun, Sep 11, 2016 at 5:11 PM, Joachim Breitner wrote: > Hi, > > Am Sonntag, den 11.09.2016, 11:25 +0100 schrieb Matthew Pickering: >> deleteBy :: (a -> b -> Bool) -> a -> [b] -> [b] > > -1 from me. This makes this different from the usual fooBy pattern, and > the fact this this is possible points to some code smell, namely the > lack of a > > (a -> Bool) -> [a] -> [a] > > function. I agree. I'd much rather see the (a->Bool)->[a]->[a] function as the proper generalization of delete. As far as bikeshedding goes, something like "deleteFirst" would make it clearer how it differs from filter as well as avoiding issues with the fooBy naming convention (though I see there's a deleteFirstsBy which probably ruins our chances of using this name). 
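[Editor's note: for concreteness, the `(a -> Bool) -> [a] -> [a]` function discussed above might look like the sketch below. The name `deleteFirst` is the hypothetical one from the thread, not an existing `Data.List` export.]

```haskell
-- Hypothetical generalization of Data.List.delete: unlike filter,
-- drop only the FIRST element satisfying the predicate.
deleteFirst :: (a -> Bool) -> [a] -> [a]
deleteFirst _ []     = []
deleteFirst p (x:xs)
  | p x       = xs
  | otherwise = x : deleteFirst p xs
```

The existing `deleteBy eq x` would then be recoverable as `deleteFirst (eq x)` without changing its type.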
-- Live well, ~wren From david.feuer at gmail.com Sun Oct 2 03:34:34 2016 From: david.feuer at gmail.com (David Feuer) Date: Sat, 1 Oct 2016 23:34:34 -0400 Subject: Numeric read seems too strict In-Reply-To: References: Message-ID: Instead of scanning first (in lexing) to find the end of the number and then scanning the string again to calculate the number, start to calculate once the first digit appears. On Oct 1, 2016 10:07 PM, "wren romano" wrote: > On Mon, Sep 12, 2016 at 11:03 AM, David Feuer > wrote: > > By the way, I believe we should be able to read numbers more efficiently > by > > parsing them directly instead of lexing first. We have to deal with > > parentheses, white space, and signs uniformly for all number types. Then > > specialized foldl'-style code *should* be able to parse integral and > > fractional numbers faster than any lex-first scheme. > > I follow the part about parentheses and negations, but I'm not sure I > get the rest of what you mean. E.g., I'm not sure how any parser could > be faster than what bytestring-lexing does for Fractional and Integral > types (ignoring the unoptimized hex and octal functions). What am I > missing? > > -- > Live well, > ~wren > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivan.miljenovic at gmail.com Sun Oct 2 04:07:39 2016 From: ivan.miljenovic at gmail.com (Ivan Lazar Miljenovic) Date: Sun, 2 Oct 2016 15:07:39 +1100 Subject: Numeric read seems too strict In-Reply-To: References: Message-ID: On 2 October 2016 at 14:34, David Feuer wrote: > Instead of scanning first (in lexing) to find the end of the number and then > scanning the string again to calculate the number, start to calculate once > the first digit appears. As in multiply the current sum by 10 before adding each new digit? 
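[Editor's note: the accumulate-as-you-scan idea the two messages describe can be sketched as a standalone function. This is an illustration only — unsigned decimal digits, Int accumulator, no overflow handling — not the code proposed for base.]

```haskell
{-# LANGUAGE BangPatterns #-}
import Data.Char (isDigit, ord)

-- Single-pass number reader: multiply the accumulator by 10 and add
-- each digit as it is seen, so there is no separate lexing pass, and a
-- non-digit first character fails without forcing the rest of the string.
readsNat :: String -> [(Int, String)]
readsNat (c:cs)
  | isDigit c = [go (ord c - ord '0') cs]
  where
    go !acc (d:ds) | isDigit d = go (10 * acc + (ord d - ord '0')) ds
    go !acc rest               = (acc, rest)
readsNat _ = []
```

Note that `readsNat ('a' : undefined)` returns `[]` without touching the rest of the string, which is the lazy-failure behaviour discussed elsewhere in the thread.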
> > > On Oct 1, 2016 10:07 PM, "wren romano" wrote: >> >> On Mon, Sep 12, 2016 at 11:03 AM, David Feuer >> wrote: >> > By the way, I believe we should be able to read numbers more efficiently >> > by >> > parsing them directly instead of lexing first. We have to deal with >> > parentheses, white space, and signs uniformly for all number types. Then >> > specialized foldl'-style code *should* be able to parse integral and >> > fractional numbers faster than any lex-first scheme. >> >> I follow the part about parentheses and negations, but I'm not sure I >> get the rest of what you mean. E.g., I'm not sure how any parser could >> be faster than what bytestring-lexing does for Fractional and Integral >> types (ignoring the unoptimized hex and octal functions). What am I >> missing? >> >> -- >> Live well, >> ~wren > > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -- Ivan Lazar Miljenovic Ivan.Miljenovic at gmail.com http://IvanMiljenovic.wordpress.com From david.feuer at gmail.com Sun Oct 2 04:26:10 2016 From: david.feuer at gmail.com (David Feuer) Date: Sun, 2 Oct 2016 00:26:10 -0400 Subject: Numeric read seems too strict In-Reply-To: References: Message-ID: Yeah, that. With a paren count and an accumulator and for fractional numbers some care around the decimal point or slash, we can look at each digit just once. Fast/lazy failure would be a pleasant side effect of running a numbers-only process from top to bottom. Yes, Read is supposed to read things that look like Haskell expressions, but it's really not a Haskell parser and pretending it is only hurts. On Oct 2, 2016 12:07 AM, "Ivan Lazar Miljenovic" wrote: > On 2 October 2016 at 14:34, David Feuer wrote: > > Instead of scanning first (in lexing) to find the end of the number and > then > > scanning the string again to calculate the number, start to calculate > once > > the first digit appears. 
> > As in multiply the current sum by 10 before adding each new digit? > > > > > > > On Oct 1, 2016 10:07 PM, "wren romano" wrote: > >> > >> On Mon, Sep 12, 2016 at 11:03 AM, David Feuer > >> wrote: > >> > By the way, I believe we should be able to read numbers more > efficiently > >> > by > >> > parsing them directly instead of lexing first. We have to deal with > >> > parentheses, white space, and signs uniformly for all number types. > Then > >> > specialized foldl'-style code *should* be able to parse integral and > >> > fractional numbers faster than any lex-first scheme. > >> > >> I follow the part about parentheses and negations, but I'm not sure I > >> get the rest of what you mean. E.g., I'm not sure how any parser could > >> be faster than what bytestring-lexing does for Fractional and Integral > >> types (ignoring the unoptimized hex and octal functions). What am I > >> missing? > >> > >> -- > >> Live well, > >> ~wren > > > > > > _______________________________________________ > > Libraries mailing list > > Libraries at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > > > > > -- > Ivan Lazar Miljenovic > Ivan.Miljenovic at gmail.com > http://IvanMiljenovic.wordpress.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lemming at henning-thielemann.de Sun Oct 2 11:55:53 2016 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Sun, 2 Oct 2016 13:55:53 +0200 (CEST) Subject: looking for haskell-llvm list maintainer Erik de Castro Lopo Message-ID: A bit off-topic: I tried to send an e-mail to haskell-llvm at projects.haskell.org It was refused, although an e-mail to the list was accepted a month ago. I tried to reach the list maintainer at: haskell-llvm-owner at projects.haskell.org Erik de Castro Lopo Erik de Castro Lopo No success. Any idea how to contact Erik or what is broken at haskell-llvm at projects.haskell.org? 
From carter.schonwald at gmail.com Sun Oct 2 12:09:25 2016 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sun, 2 Oct 2016 08:09:25 -0400 Subject: looking for haskell-llvm list maintainer Erik de Castro Lopo In-Reply-To: References: Message-ID: Maybe he was at icfp and has been catching up on rest and work in the intervening time. Wait :) On Sunday, October 2, 2016, Henning Thielemann < lemming at henning-thielemann.de> wrote: > > A bit off-topic: > > I tried to send an e-mail to > haskell-llvm at projects.haskell.org > > It was refused, although an e-mail to the list was accepted a month ago. > > I tried to reach the list maintainer at: > haskell-llvm-owner at projects.haskell.org > Erik de Castro Lopo > Erik de Castro Lopo > > No success. Any idea how to contact Erik or what is broken at > haskell-llvm at projects.haskell.org? > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Sun Oct 2 12:10:53 2016 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sun, 2 Oct 2016 08:10:53 -0400 Subject: Numeric read seems too strict In-Reply-To: References: Message-ID: Do we have benchmarks for your proposed change? Does it handle hex and binary formats too ? On Sunday, October 2, 2016, David Feuer wrote: > Yeah, that. With a paren count and an accumulator and for fractional > numbers some care around the decimal point or slash, we can look at each > digit just once. Fast/lazy failure would be a pleasant side effect of > running a numbers-only process from top to bottom. Yes, Read is supposed to > read things that look like Haskell expressions, but it's really not a > Haskell parser and pretending it is only hurts. 
> > On Oct 2, 2016 12:07 AM, "Ivan Lazar Miljenovic" < > ivan.miljenovic at gmail.com > > wrote: > >> On 2 October 2016 at 14:34, David Feuer > > wrote: >> > Instead of scanning first (in lexing) to find the end of the number and >> then >> > scanning the string again to calculate the number, start to calculate >> once >> > the first digit appears. >> >> As in multiply the current sum by 10 before adding each new digit? >> >> > >> > >> > On Oct 1, 2016 10:07 PM, "wren romano" > > wrote: >> >> >> >> On Mon, Sep 12, 2016 at 11:03 AM, David Feuer > > >> >> wrote: >> >> > By the way, I believe we should be able to read numbers more >> efficiently >> >> > by >> >> > parsing them directly instead of lexing first. We have to deal with >> >> > parentheses, white space, and signs uniformly for all number types. >> Then >> >> > specialized foldl'-style code *should* be able to parse integral and >> >> > fractional numbers faster than any lex-first scheme. >> >> >> >> I follow the part about parentheses and negations, but I'm not sure I >> >> get the rest of what you mean. E.g., I'm not sure how any parser could >> >> be faster than what bytestring-lexing does for Fractional and Integral >> >> types (ignoring the unoptimized hex and octal functions). What am I >> >> missing? >> >> >> >> -- >> >> Live well, >> >> ~wren >> > >> > >> > _______________________________________________ >> > Libraries mailing list >> > Libraries at haskell.org >> >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > >> >> >> >> -- >> Ivan Lazar Miljenovic >> Ivan.Miljenovic at gmail.com >> >> http://IvanMiljenovic.wordpress.com >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lemming at henning-thielemann.de Sun Oct 2 12:11:10 2016 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Sun, 2 Oct 2016 14:11:10 +0200 (CEST) Subject: looking for haskell-llvm list maintainer Erik de Castro Lopo In-Reply-To: References: Message-ID: On Sun, 2 Oct 2016, Carter Schonwald wrote: > Maybe he was at icfp and has been catching up on rest and work in the intervening time. Wait :) All mails came back as "timed out" or "refused". It is not that I just did not wait long enough. From mle+hs at mega-nerd.com Mon Oct 3 08:31:42 2016 From: mle+hs at mega-nerd.com (Erik de Castro Lopo) Date: Mon, 3 Oct 2016 19:31:42 +1100 Subject: looking for haskell-llvm list maintainer Erik de Castro Lopo In-Reply-To: References: Message-ID: <20161003193142.d7d337656974134fb4ca240f@mega-nerd.com> > On Sunday, October 2, 2016, Henning Thielemann < wrote: > > > > > A bit off-topic: > > > > I tried to send an e-mail to > > haskell-llvm at projects.haskell.org > > > > It was refused, although an e-mail to the list was accepted a month ago. > > > > I tried to reach the list maintainer at: > > haskell-llvm-owner at projects.haskell.org > > Erik de Castro Lopo > > Erik de Castro Lopo > > > > No success. Any idea how to contact Erik or what is broken at > > haskell-llvm at projects.haskell.org? Carter Schonwald wrote: > Maybe he was at icfp and has been catching up on rest and work in the intervening time. Wait :) As Carter suggests, I was indeed at ICFP and then had a week's holiday in Japan. In addition, my mail server crashed about a week into a two-week trip and I had no way to restart or fix it until I got back home today (which explains timeouts to my personal domain). I have also tried emailing and that does indeed seem broken. 
Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ From david.feuer at gmail.com Wed Oct 5 23:02:01 2016 From: david.feuer at gmail.com (David Feuer) Date: Wed, 5 Oct 2016 19:02:01 -0400 Subject: Read for integral types: proposed semantic change Message-ID: I have undertaken[*] to improve the Read instances for a number of types in base. My primary goal is to make things run faster, and my secondary goal is to make things fail faster. The essence of my approach is to abandon the two-stage lex/parse approach in favor of a single-phase parse-only one. The most natural way to do this makes some parsers more lenient. With GHC's current implementation, given readsInt :: String -> [(Int, String)] readsInt = reads we get readsInt "12e" = [(12, "e")] readsInt "12e-" = [(12,"e-")] readsInt "12e-3" = [] readsInt ('a' : undefined) = undefined This is because the Read instance for Int calls a general lexer to produce tokens it then interprets. For "12e-3", it reads a fractional token and rejects this as an Int. For 'a' : undefined, it attempts to find the undefined end of the token before coming to the obvious conclusion that it's not a number. For reasons I explain in the ticket, this classical two-phase model is inappropriate for Read--we get all its disadvantages and none of its advantages. The natural improvement makes reading Int around seven times as fast, but it changes the semantics a bit: readsInt "12e" = [(12, "e")] --same readsInt "12e-" = [(12,"e-")] --same readsInt "12e-3" = [(12,"e-3")] --more lenient readsInt ('a' : undefined) = [] --lazier As I understand it, GHC's current semantics are different from those of the Haskell 98 reference implementation, and mine come closer to the standard. That said, this would be a breaking change, so the CLC's input would be very helpful. 
The alternative would be to bend over backwards to approximate the current semantics by looking past the end of an Int to see if it could look fractional. I don't much care for the non-monotone nature of the current semantics, so I don't think we should go to such lengths to preserve them. [*] https://ghc.haskell.org/trac/ghc/ticket/12665 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekmett at gmail.com Thu Oct 6 11:25:48 2016 From: ekmett at gmail.com (Edward Kmett) Date: Thu, 6 Oct 2016 07:25:48 -0400 Subject: space leak in base or mtl In-Reply-To: References: Message-ID: At the least transformers should probably provide the manual overrides for <* and *> for all of the monad transformer data types. That should fix these cases. -Edward On Thu, Oct 6, 2016 at 5:08 AM, Zoran Bosnjak wrote: > Dear base and mtl maintainers, > I would like to report a memory leak problem (not sure which haskell > component) when using "forever" in combination with "readerT" or "stateT". > Simple test program to reproduce the problem: > --- > import Control.Concurrent > import Control.Monad > import Control.Monad.Trans > import Control.Monad.Trans.Reader > import Control.Monad.Trans.State > > main :: IO () > main = do > -- no leak when using loop1 instead of loop2 > --let loop1 = (liftIO $ threadDelay 1) >> loop1 > let loop2 = forever (liftIO $ threadDelay 1) > > _ <- runStateT (runReaderT loop2 'a') 'b' > return () > --- > > I have asked on haskell-cafe, but the analysis is above my haskell > knowledge: > https://mail.haskell.org/pipermail/haskell-cafe/2016-October/125176.html > https://mail.haskell.org/pipermail/haskell-cafe/2016-October/125177.html > https://mail.haskell.org/pipermail/haskell-cafe/2016-October/125178.html > > regards, > Zoran > > -------------- next part -------------- An HTML attachment was scrubbed... 
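[Editor's note: Edward's suggested fix — defining (*>) manually instead of inheriting the Applicative default — can be sketched with a local stand-in for ReaderT. The clone type below is illustrative only, used to avoid clashing with the instance transformers already exports; the real change would go into transformers itself.]

```haskell
-- Local stand-in for ReaderT, purely for illustration.
newtype ReaderX r m a = ReaderX { runReaderX :: r -> m a }

instance Functor m => Functor (ReaderX r m) where
  fmap f (ReaderX g) = ReaderX (fmap f . g)

instance Applicative m => Applicative (ReaderX r m) where
  pure = ReaderX . const . pure
  ReaderX f <*> ReaderX a = ReaderX (\r -> f r <*> a r)
  -- The manual override: delegate (*>) straight to the underlying
  -- Applicative, rather than using the default (id <$ a) <*> b,
  -- which can cause loops built with 'forever' to retain memory.
  ReaderX a *> ReaderX b = ReaderX (\r -> a r *> b r)
```

An analogous definition would be wanted for (<*), and for the other transformers (StateT, WriterT, and so on).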
URL: From lemming at henning-thielemann.de Sat Oct 8 08:17:07 2016 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Sat, 8 Oct 2016 10:17:07 +0200 (CEST) Subject: Read for integral types: proposed semantic change In-Reply-To: References: Message-ID: On Wed, 5 Oct 2016, David Feuer wrote: > readsInt "12e" = [(12, "e")] --same > readsInt "12e-" = [(12,"e-")] --same > readsInt "12e-3" = [12,"e-3"] --more lenient > readsInt ('a' : undefined) = [] --lazier Sounds reasonable. I do not think that I ever used these partial parses intentionally. From winterkoninkje at gmail.com Sun Oct 9 04:56:07 2016 From: winterkoninkje at gmail.com (wren romano) Date: Sat, 8 Oct 2016 21:56:07 -0700 Subject: Numeric read seems too strict In-Reply-To: References: Message-ID: On Sat, Oct 1, 2016 at 8:34 PM, David Feuer wrote: > Instead of scanning first (in lexing) to find the end of the number and then > scanning the string again to calculate the number, start to calculate once > the first digit appears. Ah, yes. bytestring-lexing does that (among numerous other things). It does save a second pass over the characters, but I'm not sure what proportion of the total slowdown of typical parser combinators is actually due to the second pass, as opposed to other problems with the typical "how hard can it be" lexers/parsers people knock out. Given the multitude of other problems (e.g., using Integer or other expensive types throughout the computation, not forcing things often enough to prevent thunks and stack depth, etc), I'm not sure it's legit to call it a "parser vs lexer" issue. -- Live well, ~wren From winterkoninkje at gmail.com Sun Oct 9 05:08:46 2016 From: winterkoninkje at gmail.com (wren romano) Date: Sat, 8 Oct 2016 22:08:46 -0700 Subject: Read for integral types: proposed semantic change In-Reply-To: References: Message-ID: I totally agree that the desired semantics should be: > readsInt "12e-3" = [12,"e-3"] -- yes, there is an int at the beginning of the string. 
> readsInt ('a' : ...) = [] -- yes, we can tell there isn't an int at the beginning of the string. In terms of algorithms for parsing things efficiently as well as failing fast, I highly recommend looking at what bytestring-lexing does. Though the implementations there read in ByteStrings, there's nothing about the algorithms that depends on that representation. Getting efficient (and correct!) parsing for Fractional types is quite a lot more complicated than it looks on the surface. -- Live well, ~wren From david.feuer at gmail.com Sun Oct 9 05:12:05 2016 From: david.feuer at gmail.com (David Feuer) Date: Sun, 9 Oct 2016 01:12:05 -0400 Subject: Numeric read seems too strict In-Reply-To: References: Message-ID: The second pass can be a huge problem for a failure. For example, when parsing a long string, we first look for an open parenthesis. To do so, we lex the next token. Supposing that's the string itself, we conclude that it is a string and not an open parenthesis, so we *throw it away and start over*. I hope to get the paren issue fixed for good at Hac Phi. Aside from that rather prominent one, I don't know how much the double scanning hurts, but it certainly can't *help* in most cases. In a more typical parsing situation, the parser would consume a stream of tokens instead of a list of characters. We can't do that here. On Oct 9, 2016 12:56 AM, "wren romano" wrote: > > On Sat, Oct 1, 2016 at 8:34 PM, David Feuer wrote: > > Instead of scanning first (in lexing) to find the end of the number and then > > scanning the string again to calculate the number, start to calculate once > > the first digit appears. > > Ah, yes. bytestring-lexing does that (among numerous other things). It > does save a second pass over the characters, but I'm not sure what > proportion of the total slowdown of typical parser combinators is > actually due to the second pass, as opposed to other problems with the > typical "how hard can it be" lexers/parsers people knock out. 
Given > the multitude of other problems (e.g., using Integer or other > expensive types throughout the computation, not forcing things often > enough to prevent thunks and stack depth, etc), I'm not sure it's > legit to call it a "parser vs lexer" issue. > > -- > Live well, > ~wren > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Sun Oct 9 05:15:17 2016 From: david.feuer at gmail.com (David Feuer) Date: Sun, 9 Oct 2016 01:15:17 -0400 Subject: Read for integral types: proposed semantic change In-Reply-To: References: Message-ID: I will surely take a look. Bertram Felgenhauer has come up with a nice improvement to the Integer building algorithm; I don't know if you have something better. Fractional stuff indeed looks very tricky; the current code, however, seems unlikely to be taking the right approach. On Oct 9, 2016 1:08 AM, "wren romano" wrote: > I totally agree that the desired semantics should be: > > > readsInt "12e-3" = [12,"e-3"] -- yes, there is an int at the beginning > of the string. > > readsInt ('a' : ...) = [] -- yes, we can tell there isn't an int at the > beginning of the string. > > > In terms of algorithms for parsing things efficiently as well as > failing fast, I highly recommend looking at what bytestring-lexing > does. Though the implementations there read in ByteStrings, there's > nothing about the algorithms that depends on that representation. > Getting efficient (and correct!) parsing for Fractional types is quite > a lot more complicated than it looks on the surface. 
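A minimal sketch of an integer reader with the semantics discussed in this thread — committing as soon as a digit appears, and failing without touching the rest of the input when the first character is not a digit — might look like this (a hypothetical illustration, not base's actual Text.Read code; it handles only unsigned decimal digits):

```haskell
import Data.Char (digitToInt, isDigit)

-- Hypothetical single-pass reader: no separate lexing pass, and the
-- catch-all equation returns [] without forcing the tail of the input.
readsIntSketch :: String -> [(Int, String)]
readsIntSketch (c : cs)
  | isDigit c = [go (digitToInt c) cs]
  where
    go n (d : ds) | isDigit d = go (n * 10 + digitToInt d) ds
    go n rest                 = (n, rest)
readsIntSketch _ = []
```

With this, `readsIntSketch "12e-3"` yields `[(12,"e-3")]`, and `readsIntSketch ('a' : undefined)` yields `[]` without forcing the undefined tail.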
> > -- > Live well, > ~wren > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From twanvl at gmail.com Sat Oct 15 00:54:46 2016 From: twanvl at gmail.com (Twan van Laarhoven) Date: Sat, 15 Oct 2016 02:54:46 +0200 Subject: Primitive package on hackage out of date? Message-ID: <6662a598-cf45-8bc5-6851-09c1d8518683@gmail.com> Hi libraries list, It seems that the `primitive` package on Hackage is out of date. The version on github has more functions such as `sizeofArray`, as well as support for SmallArrays. Could the new version be uploaded to Hackage? Or is there a reason why it can't be? The version number is the same for both though (0.6.1.0), so the cabal file should probably be updated first. Twan From carter.schonwald at gmail.com Sat Oct 15 16:39:21 2016 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 15 Oct 2016 12:39:21 -0400 Subject: Primitive package on hackage out of date? In-Reply-To: <6662a598-cf45-8bc5-6851-09c1d8518683@gmail.com> References: <6662a598-cf45-8bc5-6851-09c1d8518683@gmail.com> Message-ID: I think Dan and I will try to sync up at hacphi and evaluate what's needed before a new primitive release. Just because a feature is in master doesn't mean it's ready for a release :) On Friday, October 14, 2016, Twan van Laarhoven wrote: > Hi libraries list, > > > It seems that the `primitive` package on Hackage is out of date. The > version on github has more functions such as `sizeofArray`, as well as > support for SmallArrays. > > Could the new version be uploaded to Hackage? Or is there a reason why it > can't be? > > The version number is the same for both though (0.6.1.0), so the cabal > file should probably be updated first. 
> > > Twan > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dan.doel at gmail.com Sun Oct 16 05:21:02 2016 From: dan.doel at gmail.com (Dan Doel) Date: Sun, 16 Oct 2016 01:21:02 -0400 Subject: Primitive package on hackage out of date? In-Reply-To: <6662a598-cf45-8bc5-6851-09c1d8518683@gmail.com> References: <6662a598-cf45-8bc5-6851-09c1d8518683@gmail.com> Message-ID: I can make a release soon, I think. But I'd like to get the ArrayArray# wrapper I've been working on in first. I had also been working on wrapping MVars, but I guess that can wait. -- Dan On Fri, Oct 14, 2016 at 8:54 PM, Twan van Laarhoven wrote: > Hi libraries list, > > > It seems that the `primitive` package on Hackage is out of date. The > version on github has more functions such as `sizeofArray`, as well as > support for SmallArrays. > > Could the new version be uploaded to Hackage? Or is there a reason why it > can't be? > > The version number is the same for both though (0.6.1.0), so the cabal > file should probably be updated first. > > > Twan > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Sun Oct 16 13:19:41 2016 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sun, 16 Oct 2016 09:19:41 -0400 Subject: Primitive package on hackage out of date? In-Reply-To: References: <6662a598-cf45-8bc5-6851-09c1d8518683@gmail.com> Message-ID: I actually would really like the mvar stuff myself. 
Some library stuff I've been writing would benefit from mvars in primitive. If you're at hacphi, I'm happy to help with that. On Sunday, October 16, 2016, Dan Doel wrote: > I can make a release soon, I think. But I'd like to get the ArrayArray# > wrapper I've been working on in first. I had also been working on wrapping > MVars, but I guess that can wait. > > -- Dan > > On Fri, Oct 14, 2016 at 8:54 PM, Twan van Laarhoven > wrote: > >> Hi libraries list, >> >> >> It seems that the `primitive` package on Hackage is out of date. The >> version on github has more functions such as `sizeofArray`, as well as >> support for SmallArrays. >> >> Could the new version be uploaded to Hackage? Or is there a reason why it >> can't be? >> >> The version number is the same for both though (0.6.1.0), so the cabal >> file should probably be updated first. >> >> >> Twan >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew.thaddeus at gmail.com Sun Oct 16 14:21:44 2016 From: andrew.thaddeus at gmail.com (Andrew Martin) Date: Sun, 16 Oct 2016 10:21:44 -0400 Subject: Adding Fixed Point Data Types to base Message-ID: I would like to propose exporting Fix, Free, and Cofree from a new module in base (maybe something like Data.Functor.Recursion; the name is unimportant). The definitions of Free and Cofree would be as they are in the `free` package, in Control.Monad.Free and Control.Comonad.Cofree. The `free` package would then reexport these types in their respective modules rather than redefining them. All of the other utility functions from these packages would not be moved to `base`; they would remain in `free`. 
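For reference, the shapes of the two `free`-package types under discussion are roughly as follows (strictness details, fixity declarations, and instances elided):

```haskell
-- Free alternates pure results with layers of the functor f;
-- Cofree puts an annotation at every node of an f-branching tree.
data Free f a
  = Pure a
  | Free (f (Free f a))

data Cofree f a
  = a :< f (Cofree f a)
```

For example, with `f = Maybe`, `Free (Just (Pure 3))` is one layer of effect around a result, and `1 :< Just (2 :< Nothing)` is a two-element annotated chain.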
Fix would be defined as: newtype Fix f = Fix { unFix :: f (Fix f) } The advantage this offers is that Free and Cofree would be able to enjoy a greater number of typeclass instances provided by libraries across the ecosystem. As it stands, adding the somewhat heavy `free` dependency is not a good choice for libraries like `aeson`, `mustache`, and `hashable`. In the case of Fix, the ecosystem currently lacks a canonical library that provides it (recursion-schemes and data-fix both offer the same definition though, and various tutorials all define it the same way). It could benefit from the new instances as well. The disadvantage is the usual disadvantage: Adding any data type to base is a future commitment to that data type, and compile times for base go up a little, and there is work to be done to move it into base to begin with. I would gladly help with any of the work that needs to be done to make this happen. I believe that Fix and Free (and Cofree to a lesser extent) have proved themselves over years of use in the ecosystem. I would appreciate any feedback or thoughts that others have on this topic. Thanks. -- -Andrew Thaddeus Martin From mike at barrucadu.co.uk Sun Oct 16 14:30:39 2016 From: mike at barrucadu.co.uk (Michael Walker) Date: Sun, 16 Oct 2016 15:30:39 +0100 Subject: Adding Fixed Point Data Types to base In-Reply-To: References: Message-ID: > All of the other utilities functions from these packages would not be moved to `base`; they would > remain in `free` Importing the types without the functions seems like people would still end up depending on `free` to get those (or copy/paste the definitions). I can understand not wanting to pull the `MonadFree` typeclass and related functions into `base`, but there are a lot of functions in `free` which don't depend on that but are still useful. 
-- Michael Walker (http://www.barrucadu.co.uk) From andrew.thaddeus at gmail.com Sun Oct 16 15:37:38 2016 From: andrew.thaddeus at gmail.com (Andrew Martin) Date: Sun, 16 Oct 2016 11:37:38 -0400 Subject: Adding Fixed Point Data Types to base In-Reply-To: References: Message-ID: I am not opposed to bringing over the helper functions as well. Those are very useful, and I almost always need to tear down `Free` with `iter` or `iterA`. I was trying to keep the scope of the changes down to just the data types for several reasons. - The data types are the only types that library maintainers actually need in order to provide instances. - Some of the helper functions (like _Pure and _Free) have a Choice constraint and require a `profunctors` dependency. As you mention, MonadFree is probably not a good candidate for `base` either. So, to not break people's stuff, we would have to leave `Control.Monad.Free` in `free` (which would basically reexport stuff and provide _Pure and _Free), and we would have to add a module to `base`, something like `Data.Fixed.Free`, which provides definitions for all the functions. Usually, when modules are moved into `base`, it seems like people just move the whole module, but that couldn't be done here. - iterA vs iterM is a relic from the pre-AMP world. It may bother some people to put things from a bygone age into `base`. - Since there is no canonical library that provides `Fix`, we have to figure out which utilities go in there (cata,ana,para,etc.) Personally, I would be happy to see some of those utility functions in base. But, it makes the path forward a little less clear, and it makes it more likely that this proposal will stall. I would love to hear more thoughts on how to make that happen as long as they have some detail to them, detail that address at least the issues that I brought up. I'll end by reiterating that in my mind, the most important thing is getting the data types into base. Anything else is icing on the cake. 
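For concreteness, the `iter` helper mentioned above is tiny. Restated against a local copy of `Free` so the sketch stands alone (the real definitions live in the `free` package), it is roughly:

```haskell
-- Local restatement of Free so this sketch is self-contained.
data Free f a = Pure a | Free (f (Free f a))

-- iter tears down a Free structure one layer of f at a time,
-- much as in the `free` package (modulo details).
iter :: Functor f => (f a -> a) -> Free f a -> a
iter _   (Pure a) = a
iter phi (Free m) = phi (iter phi <$> m)
```

For example, `iter sum (Free [Pure 1, Free [Pure 2, Pure 3]])` collapses a list-branching tree of Ints to `6`.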
On 10/16/16, Michael Walker wrote: >> All of the other utilities functions from these packages would not be >> moved to `base`; they would >> remain in `free` > > Importing the types without the functions seems like people would > still end up depending on `free` to get those (or copy/paste the > definitions). I can understand not wanting to pull the `MonadFree` > typeclass and related functions into `base`, but there are a lot of > functions in `free` which don't depend on that but are still useful. > > -- > Michael Walker (http://www.barrucadu.co.uk) > -- -Andrew Thaddeus Martin From ruben.astud at gmail.com Sun Oct 16 18:49:07 2016 From: ruben.astud at gmail.com (Ruben Astudillo) Date: Sun, 16 Oct 2016 15:49:07 -0300 Subject: Adding Fixed Point Data Types to base In-Reply-To: References: Message-ID: <9a4cc0cb-afd6-2033-3872-e2ee2165a836@gmail.com> On 16/10/16 11:21, Andrew Martin wrote: > The advantage this offers is that Free and Cofree would be able to > enjoy a greater number of typeclass instances provided libraries > across the ecosystem. As it stands, adding the somewhat heavy `free` > dependency is not a good choice for libraries like `aeson`, > `mustache`, and `hashable`. In the case of Fix, the ecosystem > currently lacks a canonical library that provides it > (recursion-schemes and data-fix both offer the same definition though, > and various tutorials all define it the same way). It could benefit > from the new instances as well. This is a problem I see a lot, having to pull a "heavy" dependency to implement an instance and avoid orphans. I don't favor making a monolith in `base` for this; it goes against modularity and is a maintainer risk for base. Can cabal flags + CPP be abused to implement the instances on demand if the constraints are met? That would solve this problem at the cost of having to remember package-flags. 
-- Ruben Astudillo From andrew.thaddeus at gmail.com Sun Oct 16 19:27:12 2016 From: andrew.thaddeus at gmail.com (Andrew Martin) Date: Sun, 16 Oct 2016 15:27:12 -0400 Subject: Adding Fixed Point Data Types to base In-Reply-To: <9a4cc0cb-afd6-2033-3872-e2ee2165a836@gmail.com> References: <9a4cc0cb-afd6-2033-3872-e2ee2165a836@gmail.com> Message-ID: Thanks for weighing in. It is possible to use cabal flags + CPP to work around this, although I don't like doing that for the reasons you would expect. I'm also not typically comfortable opening up an issue to ask a package maintainer to add a flag to enable conditional dependencies for instances. I believe that there is precedent for pulling data types that offer a good abstraction into base (ie, moving Data.Functor.* from transformers to base, moving Bifunctor into base), but I don't know exactly what threshold needs to be cleared to make a compelling argument for doing this. -Andrew Martin On Sun, Oct 16, 2016 at 2:49 PM, Ruben Astudillo wrote: > On 16/10/16 11:21, Andrew Martin wrote: > > The advantage this offers is that Free and Cofree would be able to > > enjoy a greater number of typeclass instances provided libraries > > across the ecosystem. As it stands, adding the somewhat heavy `free` > > dependency is not a good choice for libraries like `aeson`, > > `mustache`, and `hashable`. In the case of Fix, the ecosystem > > currently lacks a canonical library that provides it > > (recursion-schemes and data-fix both offer the same definition though, > > and various tutorials all define it the same way). It could benefit > > from the new instances as well. > > This is a problem I see a lot, having to pull a "heavy" dependency to > implement an instance and avoid orphans. I don't favor making a monolith > in `base` for this, it goes against modularity and is a maintainer risk > for base. Can cabal flags + CPP be abused to implement the instances on > demand if the constrains are meet?. 
That would solve this problem at the > cost of having to remember package-flags. > > -- Ruben Astudillo > -- -Andrew Thaddeus Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From lemming at henning-thielemann.de Sun Oct 16 19:34:24 2016 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Sun, 16 Oct 2016 21:34:24 +0200 (CEST) Subject: Adding Fixed Point Data Types to base In-Reply-To: References: <9a4cc0cb-afd6-2033-3872-e2ee2165a836@gmail.com> Message-ID: On Sun, 16 Oct 2016, Andrew Martin wrote: > Thanks for weighing in. It is possible to use cabal flags + CPP to work > around this, although I don't like doing that for the reasons you would > expect. I'm also not typically comfortable opening up an issue to ask a > package maintainer to add a flag to enable conditional dependencies for > instances. Conditionally compiling instances into a package is not an option. If another package imports a certain version of your package it can expect the availability of all instances. There is no other way for a package to assert certain instances other than specifying the package version. From andrew.thaddeus at gmail.com Sun Oct 16 20:10:37 2016 From: andrew.thaddeus at gmail.com (Andrew Martin) Date: Sun, 16 Oct 2016 16:10:37 -0400 Subject: Adding Fixed Point Data Types to base In-Reply-To: References: <9a4cc0cb-afd6-2033-3872-e2ee2165a836@gmail.com> Message-ID: Like I said, I don't like doing things like this for the reasons that you would expect. I don't like that it breaks the PVP, makes running your test suite more difficult, and makes your haddocks not include information that might be available. But to get back to the original proposal, what are your thoughts on providing Free, Cofree, and Fix in base? I gather from your previous comment that you would prefer anything to conditional instances, but do you find that these recursive data types are commonly used enough to merit inclusion in base? 
On Sun, Oct 16, 2016 at 3:34 PM, Henning Thielemann < lemming at henning-thielemann.de> wrote: > > On Sun, 16 Oct 2016, Andrew Martin wrote: > > Thanks for weighing in. It is possible to use cabal flags + CPP to work >> around this, although I don't like doing that for the reasons you would >> expect. I'm also not typically comfortable opening up an issue to ask a >> package maintainer to add a flag to enable conditional dependencies for >> instances. >> > > Conditionally compiling instances into a package is not an option. If > another package imports a certain version of your package it can expect the > availability of all instances. There is no other way for a package to > assert certain instances other than specifying the package version. > -- -Andrew Thaddeus Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From lemming at henning-thielemann.de Sun Oct 16 20:13:37 2016 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Sun, 16 Oct 2016 22:13:37 +0200 (CEST) Subject: Adding Fixed Point Data Types to base In-Reply-To: References: <9a4cc0cb-afd6-2033-3872-e2ee2165a836@gmail.com> Message-ID: On Sun, 16 Oct 2016, Andrew Martin wrote: > Like I said, I don't like doing things like this for the reasons that you would expect. I don't like that it > breaks the PVP, makes running your test suite more difficult, and makes your haddocks not include information > that might be available. But to get back to the original proposal, what are your thoughts on providing Free, > Cofree, and Fix in base? I gather from your previous comment that you would prefer anything to conditional > instances, but do you find that these recursive data types are commonly used enough to merit inclusion in base? I haven't used them so far. 
From johnw at newartisans.com Mon Oct 17 18:31:53 2016 From: johnw at newartisans.com (John Wiegley) Date: Mon, 17 Oct 2016 11:31:53 -0700 Subject: Adding Fixed Point Data Types to base In-Reply-To: (Andrew Martin's message of "Sun, 16 Oct 2016 10:21:44 -0400") References: Message-ID: >>>>> "AM" == Andrew Martin writes: AM> I would gladly help with any of the work that needs to be done to make AM> this happen. I believe that Fix and Free (and Cofree to a lesser extent) AM> have proved themselves over years of use in the ecosystem. I would AM> appreciate any feedback or thoughts that others have on this topic. What advantage is there to having them in base, rather than living in the 'free' package as they do now? -- John Wiegley GPG fingerprint = 4710 CF98 AF9B 327B B80F http://newartisans.com 60E1 46C4 BD1A 7AC1 4BA2 From andrew.thaddeus at gmail.com Tue Oct 18 15:34:58 2016 From: andrew.thaddeus at gmail.com (Andrew Martin) Date: Tue, 18 Oct 2016 11:34:58 -0400 Subject: Adding Fixed Point Data Types to base In-Reply-To: References: Message-ID: The advantages I outlined were: The advantage this offers is that Free and Cofree would be able to enjoy a greater number of typeclass instances provided libraries across the ecosystem. As it stands, adding the somewhat heavy `free` dependency is not a good choice for libraries like `aeson`, `mustache`, and `hashable`. In the case of Fix, the ecosystem currently lacks a canonical library that provides it (recursion-schemes and data-fix both offer the same definition though, and various tutorials all define it the same way). It could benefit from the new instances as well. On Mon, Oct 17, 2016 at 2:31 PM, John Wiegley wrote: > >>>>> "AM" == Andrew Martin writes: > > AM> I would gladly help with any of the work that needs to be done to make > AM> this happen. I believe that Fix and Free (and Cofree to a lesser > extent) > AM> have proved themselves over years of use in the ecosystem. 
I would > AM> appreciate any feedback or thoughts that others have on this topic. > > What advantage is there to having them in base, rather than living in the > 'free' package as they do now? > > -- > John Wiegley GPG fingerprint = 4710 CF98 AF9B 327B B80F > http://newartisans.com 60E1 46C4 BD1A 7AC1 4BA2 > -- -Andrew Thaddeus Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From iavor.diatchki at gmail.com Thu Oct 20 16:02:30 2016 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Thu, 20 Oct 2016 09:02:30 -0700 Subject: Adding Fixed Point Data Types to base In-Reply-To: References: Message-ID: I am against extending `base`, as the functionality is already available outside it---a smaller `base` is easier to maintain, and gives us more room to evolve and change things. If the `free` package is considered "too heavy" of a dependency, then perhaps that package should be split into multiple smaller packages that provide the required functionality. -Iavor On Tue, Oct 18, 2016 at 8:34 AM, Andrew Martin wrote: > The advantages I outlined were: > > The advantage this offers is that Free and Cofree would be able to > enjoy a greater number of typeclass instances provided libraries > across the ecosystem. As it stands, adding the somewhat heavy `free` > dependency is not a good choice for libraries like `aeson`, > `mustache`, and `hashable`. In the case of Fix, the ecosystem > currently lacks a canonical library that provides it > (recursion-schemes and data-fix both offer the same definition though, > and various tutorials all define it the same way). It could benefit > from the new instances as well. > > > > On Mon, Oct 17, 2016 at 2:31 PM, John Wiegley > wrote: > >> >>>>> "AM" == Andrew Martin writes: >> >> AM> I would gladly help with any of the work that needs to be done to make >> AM> this happen. 
I believe that Fix and Free (and Cofree to a lesser >> extent) >> AM> have proved themselves over years of use in the ecosystem. I would >> AM> appreciate any feedback or thoughts that others have on this topic. >> >> What advantage is there to having them in base, rather than living in the >> 'free' package as they do now? >> >> -- >> John Wiegley GPG fingerprint = 4710 CF98 AF9B 327B B80F >> http://newartisans.com 60E1 46C4 BD1A 7AC1 4BA2 >> > > > > -- > -Andrew Thaddeus Martin > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From davean at xkcd.com Thu Oct 20 16:16:51 2016 From: davean at xkcd.com (davean) Date: Thu, 20 Oct 2016 12:16:51 -0400 Subject: Adding Fixed Point Data Types to base In-Reply-To: References: Message-ID: As a user of both mustache and aeson, I would be very happy for them to add a dependency on free. It seems like the ideal solution to me. On Thu, Oct 20, 2016 at 12:02 PM, Iavor Diatchki wrote: > I am against extending `base`, as the functionality is already available > outside it---a smaller `base` is easier to maintain, and gives us more room > to evolve and change things. If the `free` package is considered "too > heavy" of a dependency, then perhaps that package should be split into > multiple smaller packages that provide the required functionality. > > -Iavor > > On Tue, Oct 18, 2016 at 8:34 AM, Andrew Martin > wrote: > >> The advantages I outlined were: >> >> The advantage this offers is that Free and Cofree would be able to >> enjoy a greater number of typeclass instances provided libraries >> across the ecosystem. As it stands, adding the somewhat heavy `free` >> dependency is not a good choice for libraries like `aeson`, >> `mustache`, and `hashable`. 
In the case of Fix, the ecosystem >> currently lacks a canonical library that provides it >> (recursion-schemes and data-fix both offer the same definition though, >> and various tutorials all define it the same way). It could benefit >> from the new instances as well. >> >> >> >> On Mon, Oct 17, 2016 at 2:31 PM, John Wiegley >> wrote: >> >>> >>>>> "AM" == Andrew Martin writes: >>> >>> AM> I would gladly help with any of the work that needs to be done to >>> make >>> AM> this happen. I believe that Fix and Free (and Cofree to a lesser >>> extent) >>> AM> have proved themselves over years of use in the ecosystem. I would >>> AM> appreciate any feedback or thoughts that others have on this topic. >>> >>> What advantage is there to having them in base, rather than living in the >>> 'free' package as they do now? >>> >>> -- >>> John Wiegley GPG fingerprint = 4710 CF98 AF9B 327B B80F >>> http://newartisans.com 60E1 46C4 BD1A 7AC1 4BA2 >>> >> >> >> >> -- >> -Andrew Thaddeus Martin >> >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> >> > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew.thaddeus at gmail.com Thu Oct 20 16:21:02 2016 From: andrew.thaddeus at gmail.com (Andrew Martin) Date: Thu, 20 Oct 2016 12:21:02 -0400 Subject: Adding Fixed Point Data Types to base In-Reply-To: References: Message-ID: Is there some kind of general rule for when base should absorb things? In the last several years, it's pulled in Data.Functor.Identity, Data.Functor.Compose, Eq1, Ord1, Bifunctor. Is there any set of criteria these had to meet to be pulled in? 
Sent from my iPhone > On Oct 20, 2016, at 12:02 PM, Iavor Diatchki wrote: > > I am against extending `base`, as the functionality is already available outside it---a smaller `base` is easier to maintain, and gives us more room to evolve and change things. If the `free` package is considered "too heavy" of a dependency, then perhaps that package should be split into multiple smaller packages that provide the required functionality. > > -Iavor > >> On Tue, Oct 18, 2016 at 8:34 AM, Andrew Martin wrote: >> The advantages I outlined were: >> The advantage this offers is that Free and Cofree would be able to >> enjoy a greater number of typeclass instances provided libraries >> across the ecosystem. As it stands, adding the somewhat heavy `free` >> dependency is not a good choice for libraries like `aeson`, >> `mustache`, and `hashable`. In the case of Fix, the ecosystem >> currently lacks a canonical library that provides it >> (recursion-schemes and data-fix both offer the same definition though, >> and various tutorials all define it the same way). It could benefit >> from the new instances as well. >> >> >>> On Mon, Oct 17, 2016 at 2:31 PM, John Wiegley wrote: >>> >>>>> "AM" == Andrew Martin writes: >>> >>> AM> I would gladly help with any of the work that needs to be done to make >>> AM> this happen. I believe that Fix and Free (and Cofree to a lesser extent) >>> AM> have proved themselves over years of use in the ecosystem. I would >>> AM> appreciate any feedback or thoughts that others have on this topic. >>> >>> What advantage is there to having them in base, rather than living in the >>> 'free' package as they do now? 
>>> >>> -- >>> John Wiegley GPG fingerprint = 4710 CF98 AF9B 327B B80F >>> http://newartisans.com 60E1 46C4 BD1A 7AC1 4BA2 >> >> >> >> -- >> -Andrew Thaddeus Martin >> >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnw at newartisans.com Thu Oct 20 17:41:12 2016 From: johnw at newartisans.com (John Wiegley) Date: Thu, 20 Oct 2016 10:41:12 -0700 Subject: Adding Fixed Point Data Types to base In-Reply-To: (Andrew Martin's message of "Thu, 20 Oct 2016 12:21:02 -0400") References: Message-ID: >>>>> "AM" == Andrew Martin writes: AM> Is there some kind of general rule for when base should absorb things. In AM> the last several years, it's pulled in Data.Functor.Identity, AM> Data.Functor.Compose, Eq1, Ord1, Bifunctor. Is there any kind set of AM> criteria these had to meet to be pulled in? A principle I've inferred is that it goes into base if (a) it's foundational, and (b) there's really just one way to express the principle, rather than multiple ways with inherent trade-offs between them. For example, Free is known to have significant costs, which are ameliorated (though made worse in the other direction) by its tagless encoding. Since the two representations are isomorphic, it becomes strange for base to canonize one over the other; but with Data.Functor.Identity, there is no such contention. -- John Wiegley GPG fingerprint = 4710 CF98 AF9B 327B B80F http://newartisans.com 60E1 46C4 BD1A 7AC1 4BA2 From evan at evanrutledgeborden.dreamhosters.com Thu Oct 20 19:12:32 2016 From: evan at evanrutledgeborden.dreamhosters.com (evan@evan-borden.com) Date: Thu, 20 Oct 2016 15:12:32 -0400 Subject: Generic Data.List.partition Message-ID: I've been curious in the past why Data.List.partition has not found a more generic implementation. 
I always assumed this was because of performance. I've given the performance hypothesis a test and it seems that a generic implementation outperforms the list implementation. I'm not terribly sure why this is the case, but I also haven't dumped core. Implementation and bench: https://github.com/eborden/partition Should this function be made generic? This implementation is surely leveraging the perf characteristics of the Monoid instance of List. Other types could have terrible performance. - Evan -------------- next part -------------- An HTML attachment was scrubbed... URL: From lemming at henning-thielemann.de Thu Oct 20 19:18:08 2016 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Thu, 20 Oct 2016 21:18:08 +0200 (CEST) Subject: Generic Data.List.partition In-Reply-To: References: Message-ID: On Thu, 20 Oct 2016, evan at evan-borden.com wrote: > I've given the performance hypothesis a test and it seems that a generic implementation outperforms the list > implementation. I'm not terribly sure why this is the case, but I also haven't dumped core. > > Implementation and bench: https://github.com/eborden/partition I do not propose to add this to 'base', but if you are after a generic implementation why is the input container type the same as the output type (both 't')? Why not using Alternative.<|> instead of Monoid.<> ? From evan at evanrutledgeborden.dreamhosters.com Thu Oct 20 19:41:00 2016 From: evan at evanrutledgeborden.dreamhosters.com (evan@evan-borden.com) Date: Thu, 20 Oct 2016 15:41:00 -0400 Subject: Generic Data.List.partition In-Reply-To: References: Message-ID: I'm not sure I propose adding it to base either :) Relaxing `t` as the output is certainly correct. I'm not sure if there is a correct choice between Alternative or Monoid. That would be an indicator that a generic function of this form is arbitrarily opinionated. 
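One plausible shape for the generic function under discussion — not necessarily the code in the linked repository — is a single foldMap into a pair of monoids, where each element is injected on one side of the pair:

```haskell
-- The pair's Monoid instance appends the two sides independently,
-- so one Foldable pass produces both halves of the partition.
partitionGeneric
  :: (Foldable t, Applicative f, Monoid (f a))
  => (a -> Bool) -> t a -> (f a, f a)
partitionGeneric p = foldMap pick
  where
    pick x
      | p x       = (pure x, mempty)
      | otherwise = (mempty, pure x)
```

Instantiated at lists, `partitionGeneric even [1..5]` gives `([2,4],[1,3,5])`; for other choices of `f`, the cost is governed entirely by that type's Monoid instance.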
On Thu, Oct 20, 2016 at 3:18 PM, Henning Thielemann < lemming at henning-thielemann.de> wrote: > > On Thu, 20 Oct 2016, evan at evan-borden.com wrote: > > I've given the performance hypothesis a test and it seems that a generic >> implementation outperforms the list >> implementation. I'm not terribly sure why this is the case, but I also >> haven't dumped core. >> >> Implementation and bench: https://github.com/eborden/partition >> > > I do not propose to add this to 'base', but if you are after a generic > implementation why is the input container type the same as the output type > (both 't')? Why not using Alternative.<|> instead of Monoid.<> ? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spam at scientician.net Thu Oct 20 20:53:58 2016 From: spam at scientician.net (Bardur Arantsson) Date: Thu, 20 Oct 2016 22:53:58 +0200 Subject: Adding Fixed Point Data Types to base In-Reply-To: References: Message-ID: On 2016-10-20 19:41, John Wiegley wrote: >>>>>> "AM" == Andrew Martin writes: > > AM> Is there some kind of general rule for when base should absorb things. In > AM> the last several years, it's pulled in Data.Functor.Identity, > AM> Data.Functor.Compose, Eq1, Ord1, Bifunctor. Is there any kind set of > AM> criteria these had to meet to be pulled in? > > A principle I've inferred is that it goes into base if (a) it's foundational, > and (b) there's really just one way to express the principle, rather than > multiple ways with inherent trade-offs between them. > > For example, Free is known to have significant costs, which are ameliorated > (though made worse in the other direction) by its tagless encoding. Since the > two representations are isomorphic, it becomes strange for base to canonize > one over the other; but with Data.Functor.Identity, there is no such > contention. > Wasn't this thread about only pulling in Fix (from the 'free' package), or am I missing something? 
From johnw at newartisans.com Thu Oct 20 23:26:44 2016
From: johnw at newartisans.com (John Wiegley)
Date: Thu, 20 Oct 2016 16:26:44 -0700
Subject: Adding Fixed Point Data Types to base
In-Reply-To:  (Bardur Arantsson's message of "Thu, 20 Oct 2016 22:53:58 +0200")
References: 
Message-ID: 

>>>>> "BA" == Bardur Arantsson writes:

BA> Wasn't this thread about only pulling in Fix (from the 'free' package),
BA> or am I missing something?

Right; just Fix is OK with me. I would like for that to be in base.

-- 
John Wiegley                  GPG fingerprint = 4710 CF98 AF9B 327B B80F
http://newartisans.com                          60E1 46C4 BD1A 7AC1 4BA2

From david.feuer at gmail.com Thu Oct 20 23:56:01 2016
From: david.feuer at gmail.com (David Feuer)
Date: Thu, 20 Oct 2016 19:56:01 -0400
Subject: Generic Data.List.partition
In-Reply-To: 
References: 
Message-ID: 

Wild guessing: this seems fairly likely to relate somehow to the potential
space leak in certain "optimized" compilations of partition (where the GC
can't see and reduce selector application). Your Foldable version may
benefit in some cases by not being inlined and therefore not getting the
problematic optimization. I don't really know for sure. What is clear is
that the GC trick isn't as reliable as one might hope, given its
performance.

On Oct 20, 2016 3:41 PM, "evan at evan-borden.com" <
evan at evanrutledgeborden.dreamhosters.com> wrote:

> I'm not sure I propose adding it to base either :)
>
> Relaxing `t` as the output is certainly correct. I'm not sure if there is
> a correct choice between Alternative or Monoid. That would be an indicator
> that a generic function of this form is arbitrarily opinionated.
>
> On Thu, Oct 20, 2016 at 3:18 PM, Henning Thielemann <
> lemming at henning-thielemann.de> wrote:
>
>> On Thu, 20 Oct 2016, evan at evan-borden.com wrote:
>>
>>> I've given the performance hypothesis a test and it seems that a generic
>>> implementation outperforms the list implementation.
>>> I'm not terribly sure why this is the case, but I also haven't dumped
>>> core.
>>>
>>> Implementation and bench: https://github.com/eborden/partition
>>
>> I do not propose to add this to 'base', but if you are after a generic
>> implementation, why is the input container type the same as the output
>> type (both 't')? Why not use Alternative.<|> instead of Monoid.<>?

From oleg.grenrus at iki.fi Tue Oct 25 15:18:06 2016
From: oleg.grenrus at iki.fi (Oleg Grenrus)
Date: Tue, 25 Oct 2016 18:18:06 +0300
Subject: Few changes to deepseq
Message-ID: <589A07C3-6C50-4E4B-8502-FB709DA79809@iki.fi>

About two weeks ago I made three small PRs to `deepseq` [1,2,3] to resolve
three long-standing issues [4,5,6].

As deepseq is a core package, I bring these issues and pull requests to
the attention of libraries@, so that the pull requests can be merged or
declined, and the issues resolved one way or another.

- The `rwhnf` helper [1] is straightforward; I hope that it will be
  accepted as is.

- The deeply strict liftM, aka <$!!> [2], is discussed a bit in the
  issue [5]. I think this is quite straightforward as well.

  Side note: it's not obvious what this operator does, and neither the
  $!, <$!> nor $!! documentation explains them properly. IMHO, improving
  the documentation of these operators should be done as a separate
  chore. There is something on the wiki [7], but that description is
  quite sparse as well, and I'm not competent enough to do it. Is there
  some good resource?

- The NFData1 and NFData2 classes. The PR [3] proposes the definition

      class NFData1 f where
        liftRnf :: (a -> ()) -> f a -> ()

  There is also some discussion about whether `rnf1 :: NFData a => f a -> ()`
  would be enough. I'm not repeating it here; `liftRnf` is just more
  powerful.
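To make the three proposals above concrete, here is a rough sketch of what
they could look like. The signatures follow the proposal; the bodies and
the `[]` instance are illustrative guesses, not necessarily what deepseq
ultimately merged.

```haskell
import Control.DeepSeq (NFData, rnf)

-- Proposal [1]: reduce to weak head normal form only.
rwhnf :: a -> ()
rwhnf a = a `seq` ()

-- Proposal [2]: deeply strict fmap/liftM. Force the result to normal
-- form before returning it in the monad.
(<$!!>) :: (Monad m, NFData b) => (a -> b) -> m a -> m b
f <$!!> m = m >>= \a -> let b = f a in rnf b `seq` return b

-- Proposal [3]: lifted NFData. Given a way to force an 'a', force an
-- entire 'f a'. A list instance for illustration:
class NFData1 f where
  liftRnf :: (a -> ()) -> f a -> ()

instance NFData1 [] where
  liftRnf _ []       = ()
  liftRnf r (x : xs) = r x `seq` liftRnf r xs
```

Note how `liftRnf rwhnf` gives a spine-plus-WHNF strategy for lists, which
is exactly the kind of composition the `liftRnf` formulation enables and
a plain `rnf1` would not.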
I'll add Generics support if the addition is approved.

For the record, I think that a poly-kinded `NFData` is not a good idea
yet.

As this is 3-proposals-in-1, the discussion period is 3 weeks, ending
Nov 14.

Cheers,
Oleg Grenrus, aka phadej

- [1]: rwhnf: https://github.com/haskell/deepseq/pull/22
- [2]: deeply strict liftM: https://github.com/haskell/deepseq/pull/23
- [3]: NFData1/2: https://github.com/haskell/deepseq/pull/21
- [4]: rwhnf: https://github.com/haskell/deepseq/issues/3
- [5]: deeply strict liftM: https://github.com/haskell/deepseq/issues/13
- [6]: NFData1 helper: https://github.com/haskell/deepseq/issues/8
- [7]: https://wiki.haskell.org/Performance/Strictness#Evaluating_expressions_strictly

From oleg.grenrus at iki.fi Tue Oct 25 15:28:14 2016
From: oleg.grenrus at iki.fi (Oleg Grenrus)
Date: Tue, 25 Oct 2016 18:28:14 +0300
Subject: Typelevel Symbol concatenation
Message-ID: <49E8CBC6-138B-4B5D-9420-8815C2F3C96D@iki.fi>

There is an issue #12162, "concatenation of type level symbols missing"
[1].

I made a working patch, but we need to figure out the details. As the
patch introduces a new, non-obvious name to the base library, I'm
starting a thread on libraries@ to gauge the community's opinion.

I'm proposing to add `type family (n :: Symbol) <> (m :: Symbol)` to
`GHC.TypeLits`. The current implementation uses (<>); other options are
(++), (+++) or `AppendSymbol`.

- (<>) resembles the Semigroup operation, as (+) resembles the Num
  operation
- (++) is a list-appending operation; IMHO it's a bad choice
- (+++) is used by the ghc-typelits-symbols plugin [2]
- `AppendSymbol` is sensible too, if libraries want to define their own
  versions of a type-level (<>) (e.g. poly-kinded)

Discussion period: 2 weeks.

Cheers,
Oleg Grenrus.
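For readers following the archive: a concatenation family did eventually
land in `GHC.TypeLits` under the name `AppendSymbol` (GHC 8.4 / base 4.11
and later). A minimal usage sketch, assuming a GHC of at least that
vintage:

```haskell
{-# LANGUAGE DataKinds #-}

import Data.Proxy (Proxy (..))
import GHC.TypeLits (AppendSymbol, symbolVal)

-- The concatenation happens entirely at the type level; symbolVal then
-- reflects the resulting Symbol back to a runtime String.
greeting :: String
greeting = symbolVal (Proxy :: Proxy (AppendSymbol "Hello, " "world"))
```

Here `greeting` evaluates to `"Hello, world"`.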
- [1] https://ghc.haskell.org/trac/ghc/ticket/12162
- [2] https://github.com/konn/ghc-typelits-symbols/blob/cd812f4cfc2e6816a18283a6a0e9bb2d9ea2905e/src/GHC/TypeLits/Symbols/Internal.hs#L6-L8

From iavor.diatchki at gmail.com Tue Oct 25 23:15:48 2016
From: iavor.diatchki at gmail.com (Iavor Diatchki)
Date: Tue, 25 Oct 2016 16:15:48 -0700
Subject: Typelevel Symbol concatenation
In-Reply-To: <49E8CBC6-138B-4B5D-9420-8815C2F3C96D@iki.fi>
References: <49E8CBC6-138B-4B5D-9420-8815C2F3C96D@iki.fi>
Message-ID: 

Hello Oleg,

Nicely done! I wrote the `GHC.TypeLits` module, and the original plan was
that it should just provide the basics needed by the compiler, while other
libraries would define a nicer user-facing interface. So, with this in
mind, `AppendSymbol` makes a lot of sense to me.

However, the original plan didn't quite happen, and everyone seems to be
using `GHC.TypeLits` directly, so maybe picking a shorter name is a good
idea. To me, personally, `(<>)` and `(++)` look the nicest. I agree with
you that `(++)` might suggest that the arguments are type-level lists,
which they aren't. OTOH, `(<>)` looks a lot like `(:<>:)`, which is also
defined in `GHC.TypeLits` and is used for horizontal concatenation of
error messages.

Cheers,
-Iavor

On Tue, Oct 25, 2016 at 8:28 AM, Oleg Grenrus wrote:

> There is an issue #12162, "concatenation of type level symbols missing"
> [1].
>
> I made a working patch, but we need to figure out the details. As the
> patch introduces a new, non-obvious name to the base library, I'm
> starting a thread on libraries@ to gauge the community's opinion.
>
> I'm proposing to add `type family (n :: Symbol) <> (m :: Symbol)` to
> `GHC.TypeLits`. The current implementation uses (<>).
> Other options are (++), (+++) or `AppendSymbol`.
>
> - (<>) resembles the Semigroup operation, as (+) resembles the Num
>   operation
> - (++) is a list-appending operation; IMHO it's a bad choice
> - (+++) is used by the ghc-typelits-symbols plugin [2]
> - `AppendSymbol` is sensible too, if libraries want to define their own
>   versions of a type-level (<>) (e.g. poly-kinded)
>
> Discussion period: 2 weeks.
>
> Cheers,
> Oleg Grenrus.
>
> - [1] https://ghc.haskell.org/trac/ghc/ticket/12162
> - [2] https://github.com/konn/ghc-typelits-symbols/blob/cd812f4cfc2e6816a18283a6a0e9bb2d9ea2905e/src/GHC/TypeLits/Symbols/Internal.hs#L6-L8

From ryan.gl.scott at gmail.com Thu Oct 27 14:02:13 2016
From: ryan.gl.scott at gmail.com (Ryan Scott)
Date: Thu, 27 Oct 2016 10:02:13 -0400
Subject: Few changes to deepseq
Message-ID: 

All three of those proposals seem useful, broadly applicable, and sensibly
named, so +1 from me.

Ryan S.