From blamario at ciktel.net Sat Oct 1 01:09:38 2016
From: blamario at ciktel.net (Mario Blažević)
Date: Fri, 30 Sep 2016 21:09:38 -0400
Subject: Proposal: add Monoid1 and Semigroup1 classes
In-Reply-To:
References:
Message-ID: <4375b60a-530a-d5b1-44b6-596e5ae4536b@ciktel.net>
On 2016-09-30 07:25 PM, David Feuer wrote:
>
> I've been playing around with the idea of writing Haskell 2010 type
> classes for finite sequences and non-empty sequences, somewhat similar
> to Michael Snoyman's Sequence class in mono-traversable. These are
> naturally based on Monoid1 and Semigroup1, which I think belong in base.
>
If the proposal is to add these directly to base, I'm against it.
New classes should first be released in a regular package, and only
moved to base once they prove useful.
> class Semigroup1 f where
>   (<<>>) :: f a -> f a -> f a
> class Semigroup1 f => Monoid1 f where
>   mempty1 :: f a
>
> Then I can write
>
> class (Monoid1 t, Traversable t) => Sequence t where
>   singleton :: a -> t a
>   -- and other less-critical methods
>
> class (Semigroup1 t, Traversable1 t) => NESequence t where
>   singleton1 :: a -> t a
>   -- etc.
>
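For concreteness, instances of the proposed classes might look like this. The class definitions are copied from the proposal; the instances are illustrative, not part of it:

```haskell
import Data.List.NonEmpty (NonEmpty (..))

-- The classes exactly as proposed:
class Semigroup1 f where
  (<<>>) :: f a -> f a -> f a

class Semigroup1 f => Monoid1 f where
  mempty1 :: f a

-- Illustrative instances (not part of the proposal):
instance Semigroup1 [] where
  (<<>>) = (++)

instance Monoid1 [] where
  mempty1 = []

-- NonEmpty has no empty value, so it gets Semigroup1 only:
instance Semigroup1 NonEmpty where
  (x :| xs) <<>> (y :| ys) = x :| (xs ++ y : ys)
```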
From david.feuer at gmail.com Sat Oct 1 01:49:48 2016
From: david.feuer at gmail.com (David Feuer)
Date: Fri, 30 Sep 2016 21:49:48 -0400
Subject: Proposal: add Monoid1 and Semigroup1 classes
In-Reply-To: <4375b60a-530a-d5b1-44b6-596e5ae4536b@ciktel.net>
References:
<4375b60a-530a-d5b1-44b6-596e5ae4536b@ciktel.net>
Message-ID:
It seems to me that Data.Functor.Classes is the natural place for these,
but I guess I could stick them somewhere else.
On Sep 30, 2016 9:09 PM, "Mario Blažević" wrote:
> On 2016-09-30 07:25 PM, David Feuer wrote:
>
>>
>> I've been playing around with the idea of writing Haskell 2010 type
>> classes for finite sequences and non-empty sequences, somewhat similar
>> to Michael Snoyman's Sequence class in mono-traversable. These are
>> naturally based on Monoid1 and Semigroup1, which I think belong in base.
>>
>>
> If the proposal is to add these directly to base, I'm against it. New
> classes should first be released in a regular package, and only moved to
> base once they prove useful.
>
>
>> class Semigroup1 f where
>>   (<<>>) :: f a -> f a -> f a
>> class Semigroup1 f => Monoid1 f where
>>   mempty1 :: f a
>>
>> Then I can write
>>
>> class (Monoid1 t, Traversable t) => Sequence t where
>>   singleton :: a -> t a
>>   -- and other less-critical methods
>>
>> class (Semigroup1 t, Traversable1 t) => NESequence t where
>>   singleton1 :: a -> t a
>>   -- etc.
>>
>>
> _______________________________________________
> Libraries mailing list
> Libraries at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ekmett at gmail.com Sat Oct 1 08:07:51 2016
From: ekmett at gmail.com (Edward Kmett)
Date: Sat, 1 Oct 2016 04:07:51 -0400
Subject: Proposal: add Monoid1 and Semigroup1 classes
In-Reply-To:
References:
Message-ID:
I'm somewhat weakly against these, simply because they haven't seen broad
adoption in the wild in any of the attempts to introduce them elsewhere,
and they don't quite fit the naming convention of the other Foo1 classes in
Data.Functor.Classes.
Eq1 f says more or less that Eq a => Eq (f a).
Semigroup1 in your proposal makes a stronger claim. Semigroup1 f is saying
forall a. (f a) is a semigroup parametrically. Both of these constructions
could be useful, but they ARE different constructions.
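The distinction can be sketched as two different class shapes. The name LiftedSemigroup and its method are hypothetical, chosen only for the contrast; base's Data.Functor.Classes classes use lifted methods in this style (e.g. liftEq):

```haskell
-- Lifting style (the Data.Functor.Classes convention, under a
-- hypothetical name): f a is a semigroup *given* a combining
-- function for the elements.
class LiftedSemigroup f where
  liftedAppend :: (a -> a -> a) -> f a -> f a -> f a

-- Parametric style (the proposal): f a is a semigroup for *every*
-- element type, with no way to touch the elements.
class Semigroup1 f where
  (<<>>) :: f a -> f a -> f a

-- Maybe illustrates the difference. It lifts a semigroup,
-- combining payloads when both are present:
instance LiftedSemigroup Maybe where
  liftedAppend f (Just x) (Just y) = Just (f x y)
  liftedAppend _ (Just x) Nothing  = Just x
  liftedAppend _ Nothing  my       = my

-- A parametric instance cannot combine payloads, so the best it
-- can do is keep the leftmost Just:
instance Semigroup1 Maybe where
  Just x  <<>> _ = Just x
  Nothing <<>> y = y
```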
If folks had actually been using, say, the Plus and Alt classes from
semigroupoids or the like more or less at all pretty much anywhere, I could
maybe argue towards bringing them up towards base, but I've seen almost
zero adoption of the ideas over multiple years -- and these represent yet
_another_ point in the design space where we talk about semigroupal and
monoidal structures where f is a Functor instead. =/
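For reference, the classes mentioned have roughly this shape in the semigroupoids package (a simplified sketch of Data.Functor.Alt and Data.Functor.Plus, eliding default methods; the list instances are illustrative):

```haskell
-- Simplified sketch of Alt and Plus from semigroupoids: the same
-- associative operation and identity as Semigroup1/Monoid1, but
-- hung off a Functor superclass.
class Functor f => Alt f where
  (<!>) :: f a -> f a -> f a   -- associative

class Alt f => Plus f where
  zero :: f a                  -- identity for (<!>)

instance Alt [] where
  (<!>) = (++)

instance Plus [] where
  zero = []
```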
Many points in the design space, and little demonstrated will for adoption,
steer me to think that the community isn't ready to pick one and enshrine
it someplace central yet.
Overall, -1.
-Edward
On Fri, Sep 30, 2016 at 7:25 PM, David Feuer wrote:
> I've been playing around with the idea of writing Haskell 2010 type
> classes for finite sequences and non-empty sequences, somewhat similar to
> Michael Snoyman's Sequence class in mono-traversable. These are naturally
> based on Monoid1 and Semigroup1, which I think belong in base.
>
> class Semigroup1 f where
>   (<<>>) :: f a -> f a -> f a
> class Semigroup1 f => Monoid1 f where
>   mempty1 :: f a
>
> Then I can write
>
> class (Monoid1 t, Traversable t) => Sequence t where
>   singleton :: a -> t a
>   -- and other less-critical methods
>
> class (Semigroup1 t, Traversable1 t) => NESequence t where
>   singleton1 :: a -> t a
>   -- etc.
>
> I can, of course, just write my own, but I don't think I'm the only one
> using such.
>
> _______________________________________________
> Libraries mailing list
> Libraries at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries
>
>
From blamario at ciktel.net Sat Oct 1 15:08:06 2016
From: blamario at ciktel.net (Mario Blažević)
Date: Sat, 1 Oct 2016 11:08:06 -0400
Subject: Proposal: add Monoid1 and Semigroup1 classes
In-Reply-To:
References:
Message-ID:
On 2016-10-01 04:07 AM, Edward Kmett wrote:
> I'm somewhat weakly against these, simply because they haven't seen
> broad adoption in the wild in any of the attempts to introduce them
> elsewhere, and they don't quite fit the naming convention of the other
> Foo1 classes in Data.Functor.Classes
>
> Eq1 f says more or less that Eq a => Eq (f a).
>
> Semigroup1 in your proposal makes a stronger claim. Semigroup1 f is
> saying forall a. (f a) is a semigroup parametrically. Both of these
> constructions could be useful, but they ARE different constructions.
The standard fully parametric classes like Functor and Monad have
no suffix at all. It makes sense to reserve the suffix "1" for
non-parametric lifting classes. Can you suggest a different naming
scheme for parametric classes of a higher order?
I'm also guilty of abusing the suffix "1", at least provisionally,
but these are different beasts yet again:
-- | Equivalent of 'Functor' for rank 2 data types
class Functor1 g where
fmap1 :: (forall a. p a -> q a) -> g p -> g q
https://github.com/blamario/grampa/blob/master/Text/Grampa/Classes.hs
What would be a proper suffix here? I guess Functor2 would make
sense, for a rank-2 type?
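A small usage sketch of the rank-2 class quoted above (the Fields record, its instance, and maybesToLists are hypothetical, for illustration only):

```haskell
{-# LANGUAGE RankNTypes #-}

-- The class as quoted above:
class Functor1 g where
  fmap1 :: (forall a. p a -> q a) -> g p -> g q

-- A hypothetical record parameterized by a type constructor:
data Fields f = Fields { name :: f String, age :: f Int }

-- fmap1 applies one polymorphic function across fields of
-- different element types:
instance Functor1 Fields where
  fmap1 f (Fields n a) = Fields (f n) (f a)

-- Apply a natural transformation to every field, e.g. Maybe to []:
maybesToLists :: Fields Maybe -> Fields []
maybesToLists = fmap1 (maybe [] (: []))
```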
>
> If folks had actually been using, say, the Plus and Alt classes from
> semigroupoids or the like more or less at all pretty much anywhere, I
> could maybe argue towards bringing them up towards base, but I've seen
> almost zero adoption of the ideas over multiple years -- and these
> represent yet _another_ point in the design space where we talk about
> semigroupal and monoidal structures where f is a Functor instead. =/
>
> Many points in the design space, and little demonstrated will for
> adoption, steer me to think that the community isn't ready to
> pick one and enshrine it someplace central yet.
>
> Overall, -1.
>
> -Edward
>
> On Fri, Sep 30, 2016 at 7:25 PM, David Feuer wrote:
>
> I've been playing around with the idea of writing Haskell 2010
> type classes for finite sequences and non-empty sequences,
> somewhat similar to Michael Snoyman's Sequence class in
> mono-traversable. These are naturally based on Monoid1 and
> Semigroup1, which I think belong in base.
>
> class Semigroup1 f where
>   (<<>>) :: f a -> f a -> f a
> class Semigroup1 f => Monoid1 f where
>   mempty1 :: f a
>
> Then I can write
>
> class (Monoid1 t, Traversable t) => Sequence t where
>   singleton :: a -> t a
>   -- and other less-critical methods
>
> class (Semigroup1 t, Traversable1 t) => NESequence t where
>   singleton1 :: a -> t a
>   -- etc.
>
> I can, of course, just write my own, but I don't think I'm the
> only one using such.
>
>
> _______________________________________________
> Libraries mailing list
> Libraries at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries
>
>
>
>
>
> _______________________________________________
> Libraries mailing list
> Libraries at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries
From ekmett at gmail.com Sat Oct 1 21:26:12 2016
From: ekmett at gmail.com (Edward Kmett)
Date: Sat, 1 Oct 2016 17:26:12 -0400
Subject: Proposal: add Monoid1 and Semigroup1 classes
In-Reply-To:
References:
Message-ID:
Re 2 for rank-2, there is already precedent for using 2 for lifting over
two arguments, so semantic confusion sadly remains:
E.g.
Eq2 p means Eq a, Eq b => Eq (p a b)
or
Eq2 p means Eq a => Eq1 (p a)
-Edward
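For reference, base's Data.Functor.Classes (as of base 4.9) takes the first reading: Eq2 lifts equality over both type arguments at once. A self-contained sketch of that shape (the Pair type is illustrative):

```haskell
-- The reading base uses: equality lifted over both arguments.
class Eq2 f where
  liftEq2 :: (a -> b -> Bool) -> (c -> d -> Bool) -> f a c -> f b d -> Bool

-- An illustrative instance for a plain pair type:
data Pair a b = Pair a b

instance Eq2 Pair where
  liftEq2 eqa eqb (Pair x y) (Pair x' y') = eqa x x' && eqb y y'
```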
On Sat, Oct 1, 2016 at 11:08 AM, Mario Blažević wrote:
> On 2016-10-01 04:07 AM, Edward Kmett wrote:
>
>> I'm somewhat weakly against these, simply because they haven't seen broad
>> adoption in the wild in any of the attempts to introduce them elsewhere,
>> and they don't quite fit the naming convention of the other Foo1 classes in
>> Data.Functor.Classes
>>
>> Eq1 f says more or less that Eq a => Eq (f a).
>>
>> Semigroup1 in your proposal makes a stronger claim. Semigroup1 f is
>> saying forall a. (f a) is a semigroup parametrically. Both of these
>> constructions could be useful, but they ARE different constructions.
>>
>
> The standard fully parametric classes like Functor and Monad have no
> suffix at all. It makes sense to reserve the suffix "1" for non-parametric
> lifting classes. Can you suggest a different naming scheme for parametric
> classes of a higher order?
>
> I'm also guilty of abusing the suffix "1", at least provisionally, but
> these are different beasts yet again:
>
> -- | Equivalent of 'Functor' for rank 2 data types
> class Functor1 g where
> fmap1 :: (forall a. p a -> q a) -> g p -> g q
>
> https://github.com/blamario/grampa/blob/master/Text/Grampa/Classes.hs
>
> What would be a proper suffix here? I guess Functor2 would make sense,
> for a rank-2 type?
>
>
>
>> If folks had actually been using, say, the Plus and Alt classes from
>> semigroupoids or the like more or less at all pretty much anywhere, I could
>> maybe argue towards bringing them up towards base, but I've seen almost
>> zero adoption of the ideas over multiple years -- and these represent yet
>> _another_ point in the design space where we talk about semigroupal and
>> monoidal structures where f is a Functor instead. =/
>>
>> Many points in the design space, and little demonstrated will for
>> adoption, steer me to think that the community isn't ready to pick
>> one and enshrine it someplace central yet.
>>
>> Overall, -1.
>>
>> -Edward
>>
>> On Fri, Sep 30, 2016 at 7:25 PM, David Feuer wrote:
>>
>> I've been playing around with the idea of writing Haskell 2010
>> type classes for finite sequences and non-empty sequences,
>> somewhat similar to Michael Snoyman's Sequence class in
>> mono-traversable. These are naturally based on Monoid1 and
>> Semigroup1, which I think belong in base.
>>
>> class Semigroup1 f where
>>   (<<>>) :: f a -> f a -> f a
>> class Semigroup1 f => Monoid1 f where
>>   mempty1 :: f a
>>
>> Then I can write
>>
>> class (Monoid1 t, Traversable t) => Sequence t where
>>   singleton :: a -> t a
>>   -- and other less-critical methods
>>
>> class (Semigroup1 t, Traversable1 t) => NESequence t where
>>   singleton1 :: a -> t a
>>   -- etc.
>>
>> I can, of course, just write my own, but I don't think I'm the
>> only one using such.
>>
>>
>> _______________________________________________
>> Libraries mailing list
>> Libraries at haskell.org
>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries
>>
>>
>>
>>
>>
>> _______________________________________________
>> Libraries mailing list
>> Libraries at haskell.org
>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries
>>
>
>
> _______________________________________________
> Libraries mailing list
> Libraries at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries
>
From johnw at newartisans.com Sat Oct 1 23:38:44 2016
From: johnw at newartisans.com (John Wiegley)
Date: Sat, 01 Oct 2016 16:38:44 -0700
Subject: Proposal: add Monoid1 and Semigroup1 classes
In-Reply-To: <4375b60a-530a-d5b1-44b6-596e5ae4536b@ciktel.net> (Mario
	Blažević's message of "Fri, 30 Sep 2016 21:09:38 -0400")
References:
<4375b60a-530a-d5b1-44b6-596e5ae4536b@ciktel.net>
Message-ID:
>>>>> "MB" == Mario Blažević writes:
MB> If the proposal is to add these directly to base, I'm against it. New
MB> classes should first be released in a regular package, and only moved to
MB> base once they prove useful.
I'd like to second this. I like the ideas, and would like to see them develop;
but not in base as the starting place.
--
John Wiegley GPG fingerprint = 4710 CF98 AF9B 327B B80F
http://newartisans.com 60E1 46C4 BD1A 7AC1 4BA2
From winterkoninkje at gmail.com Sun Oct 2 02:07:16 2016
From: winterkoninkje at gmail.com (wren romano)
Date: Sat, 1 Oct 2016 19:07:16 -0700
Subject: Numeric read seems too strict
In-Reply-To:
References:
Message-ID:
On Mon, Sep 12, 2016 at 11:03 AM, David Feuer wrote:
> By the way, I believe we should be able to read numbers more efficiently by
> parsing them directly instead of lexing first. We have to deal with
> parentheses, white space, and signs uniformly for all number types. Then
> specialized foldl'-style code *should* be able to parse integral and
> fractional numbers faster than any lex-first scheme.
I follow the part about parentheses and negations, but I'm not sure I
get the rest of what you mean. E.g., I'm not sure how any parser could
be faster than what bytestring-lexing does for Fractional and Integral
types (ignoring the unoptimized hex and octal functions). What am I
missing?
--
Live well,
~wren
From wren at community.haskell.org Sun Oct 2 02:15:45 2016
From: wren at community.haskell.org (wren romano)
Date: Sat, 1 Oct 2016 19:15:45 -0700
Subject: Generalise type of deleteBy
In-Reply-To: <1473639103.6084.3.camel@joachim-breitner.de>
References:
<1473639103.6084.3.camel@joachim-breitner.de>
Message-ID:
On Sun, Sep 11, 2016 at 5:11 PM, Joachim Breitner
wrote:
> Hi,
>
> Am Sonntag, den 11.09.2016, 11:25 +0100 schrieb Matthew Pickering:
>> deleteBy :: (a -> b -> Bool) -> a -> [b] -> [b]
>
> -1 from me. This makes this different from the usual fooBy pattern, and
> the fact this this is possible points to some code smell, namely the
> lack of a
>
> (a -> Bool) -> [a] -> [a]
>
> function.
I agree. I'd much rather see the (a->Bool)->[a]->[a] function as the
proper generalization of delete.
As far as bikeshedding goes, something like "deleteFirst" would make
it clearer how it differs from filter as well as avoiding issues with
the fooBy naming convention (though I see there's a deleteFirstsBy
which probably ruins our chances of using this name).
--
Live well,
~wren
From david.feuer at gmail.com Sun Oct 2 03:34:34 2016
From: david.feuer at gmail.com (David Feuer)
Date: Sat, 1 Oct 2016 23:34:34 -0400
Subject: Numeric read seems too strict
In-Reply-To:
References:
Message-ID:
Instead of scanning first (in lexing) to find the end of the number and
then scanning the string again to calculate the number, start to calculate
once the first digit appears.
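A minimal sketch of the idea for non-negative Ints (hypothetical code, ignoring signs, parentheses, and overflow): accumulate Horner-style as each digit arrives, so there is no separate lexing pass, and a non-digit first character is rejected without forcing the rest of the string.

```haskell
{-# LANGUAGE BangPatterns #-}

import Data.Char (digitToInt, isDigit)

-- Single-pass reading of a non-negative Int: each character is
-- examined exactly once, and the accumulator is built as we go.
readsNat :: String -> [(Int, String)]
readsNat (c : cs)
  | isDigit c = go (digitToInt c) cs
  where
    go !acc (d : ds) | isDigit d = go (acc * 10 + digitToInt d) ds
    go !acc rest                 = [(acc, rest)]
readsNat _ = []
```

Note the laziness payoff: readsNat ('a' : undefined) returns [] after inspecting only the first character.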
On Oct 1, 2016 10:07 PM, "wren romano" wrote:
> On Mon, Sep 12, 2016 at 11:03 AM, David Feuer
> wrote:
> > By the way, I believe we should be able to read numbers more efficiently
> by
> > parsing them directly instead of lexing first. We have to deal with
> > parentheses, white space, and signs uniformly for all number types. Then
> > specialized foldl'-style code *should* be able to parse integral and
> > fractional numbers faster than any lex-first scheme.
>
> I follow the part about parentheses and negations, but I'm not sure I
> get the rest of what you mean. E.g., I'm not sure how any parser could
> be faster than what bytestring-lexing does for Fractional and Integral
> types (ignoring the unoptimized hex and octal functions). What am I
> missing?
>
> --
> Live well,
> ~wren
>
From ivan.miljenovic at gmail.com Sun Oct 2 04:07:39 2016
From: ivan.miljenovic at gmail.com (Ivan Lazar Miljenovic)
Date: Sun, 2 Oct 2016 15:07:39 +1100
Subject: Numeric read seems too strict
In-Reply-To:
References:
Message-ID:
On 2 October 2016 at 14:34, David Feuer wrote:
> Instead of scanning first (in lexing) to find the end of the number and then
> scanning the string again to calculate the number, start to calculate once
> the first digit appears.
As in multiply the current sum by 10 before adding each new digit?
>
>
> On Oct 1, 2016 10:07 PM, "wren romano" wrote:
>>
>> On Mon, Sep 12, 2016 at 11:03 AM, David Feuer
>> wrote:
>> > By the way, I believe we should be able to read numbers more efficiently
>> > by
>> > parsing them directly instead of lexing first. We have to deal with
>> > parentheses, white space, and signs uniformly for all number types. Then
>> > specialized foldl'-style code *should* be able to parse integral and
>> > fractional numbers faster than any lex-first scheme.
>>
>> I follow the part about parentheses and negations, but I'm not sure I
>> get the rest of what you mean. E.g., I'm not sure how any parser could
>> be faster than what bytestring-lexing does for Fractional and Integral
>> types (ignoring the unoptimized hex and octal functions). What am I
>> missing?
>>
>> --
>> Live well,
>> ~wren
>
>
> _______________________________________________
> Libraries mailing list
> Libraries at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries
>
--
Ivan Lazar Miljenovic
Ivan.Miljenovic at gmail.com
http://IvanMiljenovic.wordpress.com
From david.feuer at gmail.com Sun Oct 2 04:26:10 2016
From: david.feuer at gmail.com (David Feuer)
Date: Sun, 2 Oct 2016 00:26:10 -0400
Subject: Numeric read seems too strict
In-Reply-To:
References:
Message-ID:
Yeah, that. With a paren count, an accumulator, and (for fractional
numbers) some care around the decimal point or slash, we can look at each
digit just once. Fast/lazy failure would be a pleasant side effect of
running a numbers-only process from top to bottom. Yes, Read is supposed to
read things that look like Haskell expressions, but it's really not a
Haskell parser and pretending it is only hurts.
On Oct 2, 2016 12:07 AM, "Ivan Lazar Miljenovic"
wrote:
> On 2 October 2016 at 14:34, David Feuer wrote:
> > Instead of scanning first (in lexing) to find the end of the number and
> then
> > scanning the string again to calculate the number, start to calculate
> once
> > the first digit appears.
>
> As in multiply the current sum by 10 before adding each new digit?
>
> >
> >
> > On Oct 1, 2016 10:07 PM, "wren romano" wrote:
> >>
> >> On Mon, Sep 12, 2016 at 11:03 AM, David Feuer
> >> wrote:
> >> > By the way, I believe we should be able to read numbers more
> efficiently
> >> > by
> >> > parsing them directly instead of lexing first. We have to deal with
> >> > parentheses, white space, and signs uniformly for all number types.
> Then
> >> > specialized foldl'-style code *should* be able to parse integral and
> >> > fractional numbers faster than any lex-first scheme.
> >>
> >> I follow the part about parentheses and negations, but I'm not sure I
> >> get the rest of what you mean. E.g., I'm not sure how any parser could
> >> be faster than what bytestring-lexing does for Fractional and Integral
> >> types (ignoring the unoptimized hex and octal functions). What am I
> >> missing?
> >>
> >> --
> >> Live well,
> >> ~wren
> >
> >
> > _______________________________________________
> > Libraries mailing list
> > Libraries at haskell.org
> > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries
> >
>
>
>
> --
> Ivan Lazar Miljenovic
> Ivan.Miljenovic at gmail.com
> http://IvanMiljenovic.wordpress.com
>
From lemming at henning-thielemann.de Sun Oct 2 11:55:53 2016
From: lemming at henning-thielemann.de (Henning Thielemann)
Date: Sun, 2 Oct 2016 13:55:53 +0200 (CEST)
Subject: looking for haskell-llvm list maintainer Erik de Castro Lopo
Message-ID:
A bit off-topic:
I tried to send an e-mail to
haskell-llvm at projects.haskell.org
It was refused, although an e-mail to the list was accepted a month ago.
I tried to reach the list maintainer at:
haskell-llvm-owner at projects.haskell.org
Erik de Castro Lopo
Erik de Castro Lopo
No success. Any idea how to contact Erik or what is broken at
haskell-llvm at projects.haskell.org?
From carter.schonwald at gmail.com Sun Oct 2 12:09:25 2016
From: carter.schonwald at gmail.com (Carter Schonwald)
Date: Sun, 2 Oct 2016 08:09:25 -0400
Subject: looking for haskell-llvm list maintainer Erik de Castro Lopo
In-Reply-To:
References:
Message-ID:
Maybe he was at icfp and has been catching up on rest and work in the
intervening time. Wait :)
On Sunday, October 2, 2016, Henning Thielemann <
lemming at henning-thielemann.de> wrote:
>
> A bit off-topic:
>
> I tried to send an e-mail to
> haskell-llvm at projects.haskell.org
>
> It was refused, although an e-mail to the list was accepted a month ago.
>
> I tried to reach the list maintainer at:
> haskell-llvm-owner at projects.haskell.org
> Erik de Castro Lopo
> Erik de Castro Lopo
>
> No success. Any idea how to contact Erik or what is broken at
> haskell-llvm at projects.haskell.org?
> _______________________________________________
> Libraries mailing list
> Libraries at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries
>
From carter.schonwald at gmail.com Sun Oct 2 12:10:53 2016
From: carter.schonwald at gmail.com (Carter Schonwald)
Date: Sun, 2 Oct 2016 08:10:53 -0400
Subject: Numeric read seems too strict
In-Reply-To:
References:
Message-ID:
Do we have benchmarks for your proposed change?
Does it handle hex and binary formats too ?
On Sunday, October 2, 2016, David Feuer wrote:
> Yeah, that. With a paren count, an accumulator, and (for fractional
> numbers) some care around the decimal point or slash, we can look at each
> digit just once. Fast/lazy failure would be a pleasant side effect of
> running a numbers-only process from top to bottom. Yes, Read is supposed to
> read things that look like Haskell expressions, but it's really not a
> Haskell parser and pretending it is only hurts.
>
> On Oct 2, 2016 12:07 AM, "Ivan Lazar Miljenovic"
> <ivan.miljenovic at gmail.com> wrote:
>
>> On 2 October 2016 at 14:34, David Feuer wrote:
>> > Instead of scanning first (in lexing) to find the end of the number and
>> then
>> > scanning the string again to calculate the number, start to calculate
>> once
>> > the first digit appears.
>>
>> As in multiply the current sum by 10 before adding each new digit?
>>
>> >
>> >
>> > On Oct 1, 2016 10:07 PM, "wren romano" wrote:
>> >>
>> >> On Mon, Sep 12, 2016 at 11:03 AM, David Feuer wrote:
>> >> > By the way, I believe we should be able to read numbers more
>> efficiently
>> >> > by
>> >> > parsing them directly instead of lexing first. We have to deal with
>> >> > parentheses, white space, and signs uniformly for all number types.
>> Then
>> >> > specialized foldl'-style code *should* be able to parse integral and
>> >> > fractional numbers faster than any lex-first scheme.
>> >>
>> >> I follow the part about parentheses and negations, but I'm not sure I
>> >> get the rest of what you mean. E.g., I'm not sure how any parser could
>> >> be faster than what bytestring-lexing does for Fractional and Integral
>> >> types (ignoring the unoptimized hex and octal functions). What am I
>> >> missing?
>> >>
>> >> --
>> >> Live well,
>> >> ~wren
>> >
>> >
>> > _______________________________________________
>> > Libraries mailing list
>> > Libraries at haskell.org
>>
>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries
>> >
>>
>>
>>
>> --
>> Ivan Lazar Miljenovic
>> Ivan.Miljenovic at gmail.com
>>
>> http://IvanMiljenovic.wordpress.com
>>
>
From lemming at henning-thielemann.de Sun Oct 2 12:11:10 2016
From: lemming at henning-thielemann.de (Henning Thielemann)
Date: Sun, 2 Oct 2016 14:11:10 +0200 (CEST)
Subject: looking for haskell-llvm list maintainer Erik de Castro Lopo
In-Reply-To:
References:
Message-ID:
On Sun, 2 Oct 2016, Carter Schonwald wrote:
> Maybe he was at icfp and has been catching up on rest and work in the intervening time. Wait :)
All mails came back as "timed out" or "refused". It is not that I just did
not wait long enough.
From mle+hs at mega-nerd.com Mon Oct 3 08:31:42 2016
From: mle+hs at mega-nerd.com (Erik de Castro Lopo)
Date: Mon, 3 Oct 2016 19:31:42 +1100
Subject: looking for haskell-llvm list maintainer Erik de Castro Lopo
In-Reply-To:
References:
Message-ID: <20161003193142.d7d337656974134fb4ca240f@mega-nerd.com>
> On Sunday, October 2, 2016, Henning Thielemann wrote:
>
> >
> > A bit off-topic:
> >
> > I tried to send an e-mail to
> > haskell-llvm at projects.haskell.org
> >
> > It was refused, although an e-mail to the list was accepted a month ago.
> >
> > I tried to reach the list maintainer at:
> > haskell-llvm-owner at projects.haskell.org
> > Erik de Castro Lopo
> > Erik de Castro Lopo
> >
> > No success. Any idea how to contact Erik or what is broken at
> > haskell-llvm at projects.haskell.org?
Carter Schonwald wrote:
> Maybe he was at icfp and has been catching up on rest and work in the
> intervening time. Wait :)
As Carter suggests, I was indeed at ICFP and then had a week's holiday in
Japan. In addition, my mail server crashed about a week into a two week
trip and I had no way to restart or fix it until I got back home today
(which explains timeouts to my personal domain).
I have also tried emailing
and that does indeed seem broken.
Erik
--
----------------------------------------------------------------------
Erik de Castro Lopo
http://www.mega-nerd.com/
From david.feuer at gmail.com Wed Oct 5 23:02:01 2016
From: david.feuer at gmail.com (David Feuer)
Date: Wed, 5 Oct 2016 19:02:01 -0400
Subject: Read for integral types: proposed semantic change
Message-ID:
I have undertaken[*] to improve the Read instances for a number of types in
base. My primary goal is to make things run faster, and my secondary goal
is to make things fail faster. The essence of my approach is to abandon the
two-stage lex/parse approach in favor of a single-phase parse-only one. The
most natural way to do this makes some parsers more lenient. With GHC's
current implementation, given
readsInt :: String -> [(Int, String)]
readsInt = reads
we get
readsInt "12e" = [(12, "e")]
readsInt "12e-" = [(12,"e-")]
readsInt "12e-3" = []
readsInt ('a' : undefined) = undefined
This is because the Read instance for Int calls a general lexer to produce
tokens it then interprets. For "12e-3", it reads a fractional token and
rejects this as an Int. For 'a': undefined, it attempts to find the
undefined end of the token before coming to the obvious conclusion that
it's not a number.
For reasons I explain in the ticket, this classical two-phase model is
inappropriate for Read--we get all its disadvantages and none of its
advantages. The natural improvement makes reading Int around seven times as
fast, but it changes the semantics a bit:
readsInt "12e" = [(12, "e")] --same
readsInt "12e-" = [(12,"e-")] --same
readsInt "12e-3" = [12,"e-3"] --more lenient
readsInt ('a' : undefined) = [] --lazier
As I understand it, GHC's current semantics are different from those of the
Haskell 98 reference implementation, and mine come closer to the standard.
That said, this would be a breaking change, so the CLC's input would be
very helpful.
The alternative would be to bend over backwards to approximate the current
semantics by looking past the end of an Int to see if it could look
fractional. I don't much care for the non-monotone nature of the current
semantics, so I don't think we should go to such lengths to preserve them.
[*] https://ghc.haskell.org/trac/ghc/ticket/12665
From ekmett at gmail.com Thu Oct 6 11:25:48 2016
From: ekmett at gmail.com (Edward Kmett)
Date: Thu, 6 Oct 2016 07:25:48 -0400
Subject: space leak in base or mtl
In-Reply-To:
References:
Message-ID:
At the least transformers should probably provide the manual overrides for
<* and *> for all of the monad transformer data types.
That should fix these cases.
-Edward
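The kind of override in question can be sketched as a standalone combinator (illustrative only; transformers would define this inside its Applicative instances): apply the environment once and delegate directly to the inner Applicative's (*>), instead of going through the default definition built from (<*>)/liftA2.

```haskell
import Control.Monad.Trans.Reader (ReaderT (..))

-- Hypothetical sketch of a manual (*>) for ReaderT, written as a
-- plain function: distribute the environment, then sequence in the
-- underlying Applicative.
thenR :: Applicative m => ReaderT r m a -> ReaderT r m b -> ReaderT r m b
thenR m k = ReaderT $ \r -> runReaderT m r *> runReaderT k r
```

Whether this removes the leak in practice depends on the underlying monad's own (*>); the sketch only shows the delegation shape.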
On Thu, Oct 6, 2016 at 5:08 AM, Zoran Bosnjak wrote:
> Dear base and mtl maintainers,
> I would like to report a memory leak problem (not sure which haskell
> component) when using "forever" in combination with "readerT" or "stateT".
> Simple test program to reproduce the problem:
> ---
> import Control.Concurrent
> import Control.Monad
> import Control.Monad.Trans
> import Control.Monad.Trans.Reader
> import Control.Monad.Trans.State
>
> main :: IO ()
> main = do
> -- no leak when using loop1 instead of loop2
> --let loop1 = (liftIO $ threadDelay 1) >> loop1
> let loop2 = forever (liftIO $ threadDelay 1)
>
> _ <- runStateT (runReaderT loop2 'a') 'b'
> return ()
> ---
>
> I have asked on haskell-cafe, but the analysis is above my haskell
> knowledge:
> https://mail.haskell.org/pipermail/haskell-cafe/2016-October/125176.html
> https://mail.haskell.org/pipermail/haskell-cafe/2016-October/125177.html
> https://mail.haskell.org/pipermail/haskell-cafe/2016-October/125178.html
>
> regards,
> Zoran
>
>
From lemming at henning-thielemann.de Sat Oct 8 08:17:07 2016
From: lemming at henning-thielemann.de (Henning Thielemann)
Date: Sat, 8 Oct 2016 10:17:07 +0200 (CEST)
Subject: Read for integral types: proposed semantic change
In-Reply-To:
References:
Message-ID:
On Wed, 5 Oct 2016, David Feuer wrote:
> readsInt "12e" = [(12, "e")] --same
> readsInt "12e-" = [(12,"e-")] --same
> readsInt "12e-3" = [12,"e-3"] --more lenient
> readsInt ('a' : undefined) = [] --lazier
Sounds reasonable. I do not think that I ever used these partial parses
intentionally.
From winterkoninkje at gmail.com Sun Oct 9 04:56:07 2016
From: winterkoninkje at gmail.com (wren romano)
Date: Sat, 8 Oct 2016 21:56:07 -0700
Subject: Numeric read seems too strict
In-Reply-To:
References:
Message-ID:
On Sat, Oct 1, 2016 at 8:34 PM, David Feuer wrote:
> Instead of scanning first (in lexing) to find the end of the number and then
> scanning the string again to calculate the number, start to calculate once
> the first digit appears.
Ah, yes. bytestring-lexing does that (among numerous other things). It
does save a second pass over the characters, but I'm not sure what
proportion of the total slowdown of typical parser combinators is
actually due to the second pass, as opposed to other problems with the
typical "how hard can it be" lexers/parsers people knock out. Given
the multitude of other problems (e.g., using Integer or other
expensive types throughout the computation, not forcing things often
enough to prevent thunks and stack depth, etc), I'm not sure it's
legit to call it a "parser vs lexer" issue.
--
Live well,
~wren
From winterkoninkje at gmail.com Sun Oct 9 05:08:46 2016
From: winterkoninkje at gmail.com (wren romano)
Date: Sat, 8 Oct 2016 22:08:46 -0700
Subject: Read for integral types: proposed semantic change
In-Reply-To:
References:
Message-ID: