From krz.gogolewski at gmail.com Sun Aug 2 16:09:51 2020 From: krz.gogolewski at gmail.com (Krzysztof Gogolewski) Date: Sun, 2 Aug 2020 18:09:51 +0200 Subject: Proposal: export more from Data.Kind Message-ID: Hello, I'd like to export TYPE, RuntimeRep(..), Multiplicity(..) from Data.Kind. (Multiplicity is the Linear Haskell type - One and Many.) Currently you have to import GHC.Exts / GHC.Types which conflicts with Safe Haskell. I think both levity and linearity polymorphism should be available under Safe Haskell. Data.Kind already contains Constraint and Type. -Krzysztof From chessai1996 at gmail.com Sun Aug 2 16:11:15 2020 From: chessai1996 at gmail.com (chessai) Date: Sun, 2 Aug 2020 09:11:15 -0700 Subject: Proposal: export more from Data.Kind In-Reply-To: References: Message-ID: +1 On Sun, Aug 2, 2020, 9:10 AM Krzysztof Gogolewski wrote: > Hello, > > I'd like to export TYPE, RuntimeRep(..), Multiplicity(..) from Data.Kind. > (Multiplicity is the Linear Haskell type - One and Many.) > > Currently you have to import GHC.Exts / GHC.Types which conflicts with > Safe Haskell. I think both levity and linearity polymorphism > should be available under Safe Haskell. > > Data.Kind already contains Constraint and Type. > > -Krzysztof > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Sun Aug 2 16:12:14 2020 From: david.feuer at gmail.com (David Feuer) Date: Sun, 2 Aug 2020 12:12:14 -0400 Subject: Proposal: export more from Data.Kind In-Reply-To: References: Message-ID: My impression is that RuntimeRep may not have stabilized all the way yet. There are things that work for LiftedRep and UnliftedRep that want to be polymorphic that way. 
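[Editorial note: the status quo the proposal addresses can be sketched as follows. This is an illustration, not code from the thread; it uses the constructor names of GHC 8.10, where the lifted representation is spelled 'LiftedRep (later GHC versions respell it via 'BoxedRep).]

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}
-- Status quo sketch: TYPE and RuntimeRep must currently be imported from
-- GHC.Exts (or GHC.Types), which prevents a module using them from being
-- inferred as Safe. Under the proposal they would also come from Data.Kind.
import GHC.Exts (TYPE, RuntimeRep (..))

-- Type is sugar for TYPE 'LiftedRep; a datatype can say so explicitly:
data Wrapped (a :: TYPE 'LiftedRep) = Wrap a

main :: IO ()
main = case Wrap (42 :: Int) of Wrap n -> print n  -- prints 42
```

The kind annotation is the only place the GHC.Exts names are needed; everything else in the example is ordinary lifted Haskell.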
On Sun, Aug 2, 2020, 12:10 PM Krzysztof Gogolewski wrote: > Hello, > > I'd like to export TYPE, RuntimeRep(..), Multiplicity(..) from Data.Kind. > (Multiplicity is the Linear Haskell type - One and Many.) > > Currently you have to import GHC.Exts / GHC.Types which conflicts with > Safe Haskell. I think both levity and linearity polymorphism > should be available under Safe Haskell. > > Data.Kind already contains Constraint and Type. > > -Krzysztof > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chessai1996 at gmail.com Sun Aug 2 16:18:28 2020 From: chessai1996 at gmail.com (chessai) Date: Sun, 2 Aug 2020 09:18:28 -0700 Subject: Proposal: export more from Data.Kind In-Reply-To: References: Message-ID: I suppose we might want RuntimeRep to stabilise before its inclusion in a non-GHC.* namespace. Open to suggestions on whether that matters - typically people using RuntimeRep know what they're doing, and are small in number, so they could handle the breakage just fine. On Sun, Aug 2, 2020, 9:12 AM David Feuer wrote: > My impression is that RuntimeRep may not have stabilized all the way yet. > There are things that work for LiftedRep and UnliftedRep that want to be > polymorphic that way. > > On Sun, Aug 2, 2020, 12:10 PM Krzysztof Gogolewski < > krz.gogolewski at gmail.com> wrote: > >> Hello, >> >> I'd like to export TYPE, RuntimeRep(..), Multiplicity(..) from Data.Kind. >> (Multiplicity is the Linear Haskell type - One and Many.) >> >> Currently you have to import GHC.Exts / GHC.Types which conflicts with >> Safe Haskell. I think both levity and linearity polymorphism >> should be available under Safe Haskell. >> >> Data.Kind already contains Constraint and Type. 
>> >> -Krzysztof >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Mon Aug 3 10:49:20 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 3 Aug 2020 06:49:20 -0400 Subject: Proposal: export more from Data.Kind In-Reply-To: References: Message-ID: David makes a very good point here. That perhaps they should first stabilize a bit more before being exposed through there On Sun, Aug 2, 2020 at 12:12 PM David Feuer wrote: > My impression is that RuntimeRep may not have stabilized all the way yet. > There are things that work for LiftedRep and UnliftedRep that want to be > polymorphic that way. > > On Sun, Aug 2, 2020, 12:10 PM Krzysztof Gogolewski < > krz.gogolewski at gmail.com> wrote: > >> Hello, >> >> I'd like to export TYPE, RuntimeRep(..), Multiplicity(..) from Data.Kind. >> (Multiplicity is the Linear Haskell type - One and Many.) >> >> Currently you have to import GHC.Exts / GHC.Types which conflicts with >> Safe Haskell. I think both levity and linearity polymorphism >> should be available under Safe Haskell. >> >> Data.Kind already contains Constraint and Type. >> >> -Krzysztof >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rae at richarde.dev Mon Aug 3 12:29:06 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Mon, 3 Aug 2020 12:29:06 +0000 Subject: Proposal: export more from Data.Kind In-Reply-To: References: Message-ID: <010f0173b44c010e-d4c2096d-0a9b-445f-9331-048d9bcea21b-000000@us-east-2.amazonses.com> I'm sympathetic to this proposal, but I don't have a well considered opinion. Instead, I'll present a few facts: - TYPE, RuntimeRep, and Multiplicity should be allowed in Safe Haskell. You can't do anything untoward with these folks. - That said, Safe should not imply that linear-types guarantees are in force. At least, not yet. Linear types features cannot cause seg-faults (or other violations of safety expectations). - RuntimeRep has not yet stabilized. But it is a niche feature, and I wouldn't really expect stabilization here for some time. Maybe a nice compromise is introducing GHC.Kind that is -XSafe and exports all of these? Richard > On Aug 3, 2020, at 6:49 AM, Carter Schonwald wrote: > > David makes a very good point here. That perhaps they should first stabilize a bit more before being exposed through there > > On Sun, Aug 2, 2020 at 12:12 PM David Feuer > wrote: > My impression is that RuntimeRep may not have stabilized all the way yet. There are things that work for LiftedRep and UnliftedRep that want to be polymorphic that way. > > On Sun, Aug 2, 2020, 12:10 PM Krzysztof Gogolewski > wrote: > Hello, > > I'd like to export TYPE, RuntimeRep(..), Multiplicity(..) from Data.Kind. > (Multiplicity is the Linear Haskell type - One and Many.) > > Currently you have to import GHC.Exts / GHC.Types which conflicts with > Safe Haskell. I think both levity and linearity polymorphism > should be available under Safe Haskell. > > Data.Kind already contains Constraint and Type. 
> > -Krzysztof > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Mon Aug 3 13:24:30 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 3 Aug 2020 09:24:30 -0400 Subject: Proposal: export more from Data.Kind In-Reply-To: <010f0173b44c010e-d4c2096d-0a9b-445f-9331-048d9bcea21b-000000@us-east-2.amazonses.com> References: <010f0173b44c010e-d4c2096d-0a9b-445f-9331-048d9bcea21b-000000@us-east-2.amazonses.com> Message-ID: That sounds good. Totally all for it. And maybe have that be a one stop shop for kind level stuff like type lits etc? On Mon, Aug 3, 2020 at 8:29 AM Richard Eisenberg wrote: > I'm sympathetic to this proposal, but I don't have a well considered > opinion. Instead, I'll present a few facts: > > - TYPE, RuntimeRep, and Multiplicity should be allowed in Safe Haskell. > You can't do anything untoward with these folks. > > - That said, Safe should not imply that linear-types guarantees are in > force. At least, not yet. Linear types features cannot cause seg-faults (or > other violations of safety expectations). > > - RuntimeRep has not yet stabilized. But it is a niche feature, and I > wouldn't really expect stabilization here for some time. > > Maybe a nice compromise is introducing GHC.Kind that is -XSafe and exports > all of these? > > Richard > > On Aug 3, 2020, at 6:49 AM, Carter Schonwald > wrote: > > David makes a very good point here. 
That perhaps they should first > stabilize a bit more before being exposed through there > > On Sun, Aug 2, 2020 at 12:12 PM David Feuer wrote: > >> My impression is that RuntimeRep may not have stabilized all the way yet. >> There are things that work for LiftedRep and UnliftedRep that want to be >> polymorphic that way. >> >> On Sun, Aug 2, 2020, 12:10 PM Krzysztof Gogolewski < >> krz.gogolewski at gmail.com> wrote: >> >>> Hello, >>> >>> I'd like to export TYPE, RuntimeRep(..), Multiplicity(..) from Data.Kind. >>> (Multiplicity is the Linear Haskell type - One and Many.) >>> >>> Currently you have to import GHC.Exts / GHC.Types which conflicts with >>> Safe Haskell. I think both levity and linearity polymorphism >>> should be available under Safe Haskell. >>> >>> Data.Kind already contains Constraint and Type. >>> >>> -Krzysztof >>> _______________________________________________ >>> Libraries mailing list >>> Libraries at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>> >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at nh2.me Sat Aug 8 03:08:29 2020 From: mail at nh2.me (=?UTF-8?Q?Niklas_Hamb=c3=bcchen?=) Date: Sat, 8 Aug 2020 05:08:29 +0200 Subject: Deprecating fromIntegral In-Reply-To: References: Message-ID: Today I found another big bug caused by `fromIntegral`: https://github.com/haskell-crypto/cryptonite/issues/330 Incorrect hashes for all hash algorithms beyond 4 GiB of input. SHA hash collisions in my productions system. Restating what I said there: * Until we deprecate fromIntegral, Haskell code will always be subtly wrong and never be secure. 
* If we don't fix this, people will shy away from using Haskell for serious work (or learn it the hard way). Rust and C both do this better. * If the authors of key crypto libraries fall for these traps (no blame on them), who can get it right? We should remove the traps. The wrong code, hashInternalUpdate ctx d (fromIntegral $ B.length b) exists because it simply does not look like wrong code. In contrast, hashInternalUpdate ctx d (fromIntegralWrapping $ B.length b) does look like wrong code and would make anyone scrolling by suspicious. We can look away while continuing to claim that Haskell is a high-correctness language, or fix stuff like this and make it one. From hvriedel at gmail.com Sat Aug 8 06:19:59 2020 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Sat, 8 Aug 2020 08:19:59 +0200 Subject: Deprecating fromIntegral In-Reply-To: References: Message-ID: Btw, subtle correctness issues hiding throughout the vast monolithic codebase like these were part of the reason (other reasons are explained in https://old.reddit.com/r/haskell/comments/5lxv75/psa_please_use_unique_module_names_when_uploading/dbzegx3/) why cryptohash/cryptonite/foundation has been banned from our codebases not the least because it's hopeless to get something like cryptonite through formal certification for security critical applications. At the time the author made it clear he didn't welcome my bug reports (later I learned the author was merely peculiar as to what they consider an actual bug worth reporting). And whenever I mentioned this in the past on reddit as a PSA, I got defensive reactions and disbelief from certain people who were already invested in the cryptonite ecosystem and I was accused of spreading unfounded FUD... and so I stopped bringing up this kind of "heresy" publicly. 
However, as you can see in https://hackage.haskell.org/package/cryptohash-sha256/changelog this particular 32-bit overflow issue was one of the critical bugfixes I ran into and repaired in the very first release right after the initial-fork-release back in 2016. It's astonishing that this bug lingered in cryptonite/cryptohash for over 4 years before it was detected. You'd expect this to have been detected in code audits performed by Haskell companies that actively promote the use of cryptonite early on. On Sat, Aug 8, 2020 at 5:09 AM Niklas Hambüchen via Libraries wrote: > > Today I found another big bug caused by `fromIntegral`: > > https://github.com/haskell-crypto/cryptonite/issues/330 > > Incorrect hashes for all hash algorithms beyond 4 GiB of input. SHA hash collisions in my productions system. > > Restating what I said there: > > * Until we deprecate fromIntegral, Haskell code will always be subtly wrong and never be secure. 
> _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries From vamchale at gmail.com Sat Aug 8 13:44:59 2020 From: vamchale at gmail.com (Vanessa McHale) Date: Sat, 8 Aug 2020 08:44:59 -0500 Subject: Deprecating fromIntegral In-Reply-To: References: Message-ID: <758B8329-1EF2-47ED-A6C7-163C3B74C3D9@gmail.com> -1 from me, massive work to overhaul the ecosystem. Maybe a haddock comment first? > On Aug 7, 2020, at 10:08 PM, Niklas Hambüchen via Libraries wrote: > > Today I found another big bug caused by `fromIntegral`: > > https://github.com/haskell-crypto/cryptonite/issues/330 > > Incorrect hashes for all hash algorithms beyond 4 GiB of input. SHA hash collisions in my productions system. > > Restating what I said there: > > * Until we deprecate fromIntegral, Haskell code will always be subtly wrong and never be secure. > * If we don't fix this, people will shy away from using Haskell for serious work (or learn it the hard way). Rust and C both do this better. > * If the authors of key crypto libraries fall for these traps (no blame on them), who can get it right? We should remove the traps. > > The wrong code, > > hashInternalUpdate ctx d (fromIntegral $ B.length b) > > exists because it simply does not look like wrong code. In contrast, > > hashInternalUpdate ctx d (fromIntegralWrapping $ B.length b) > > does look like wrong code and would make anyone scrolling by suspicious. > > We can look away while continuing to claim that Haskell is a high-correctness language, or fix stuff like this and make it one. 
> _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries From eacameron at gmail.com Sat Aug 8 14:09:08 2020 From: eacameron at gmail.com (Elliot Cameron) Date: Sat, 8 Aug 2020 10:09:08 -0400 Subject: Deprecating fromIntegral In-Reply-To: <758B8329-1EF2-47ED-A6C7-163C3B74C3D9@gmail.com> References: <758B8329-1EF2-47ED-A6C7-163C3B74C3D9@gmail.com> Message-ID: I agree the situation now is actually quite dire. It would be much, much better to be partial than to corrupt data. Sadly, even something as ubiquitous as + and - suffers from a similar problem. The issue is in many ways polymorphism: these functions are far less polymorphic than their signatures suggest. What if we introduced partial versions of these functions and deprecated the rest? Then, if someone wants to guarantee there are no exceptions, they need to resort to more advanced tactics, but at the very least we don't corrupt data. On Sat, Aug 8, 2020, 9:45 AM Vanessa McHale wrote: > -1 from me, massive work to overhaul the ecosystem. > > Maybe a haddock comment first? > > > On Aug 7, 2020, at 10:08 PM, Niklas Hambüchen via Libraries < > libraries at haskell.org> wrote: > > > > Today I found another big bug caused by `fromIntegral`: > > > > https://github.com/haskell-crypto/cryptonite/issues/330 > > > > Incorrect hashes for all hash algorithms beyond 4 GiB of input. SHA hash > collisions in my productions system. > > > > Restating what I said there: > > > > * Until we deprecate fromIntegral, Haskell code will always be subtly > wrong and never be secure. > > * If we don't fix this, people will shy away from using Haskell for > serious work (or learn it the hard way). Rust and C both do this better. > > * If the authors of key crypto libraries fall for these traps (no blame > on them), who can get it right? We should remove the traps.
> > The wrong code, > > hashInternalUpdate ctx d (fromIntegral $ B.length b) > > exists because it simply does not look like wrong code. In contrast, > > hashInternalUpdate ctx d (fromIntegralWrapping $ B.length b) > > does look like wrong code and would make anyone scrolling by suspicious. > > We can look away while continuing to claim that Haskell is a > high-correctness language, or fix stuff like this and make it one. > > _______________________________________________ > > Libraries mailing list > > Libraries at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lemming at henning-thielemann.de Sat Aug 8 14:51:25 2020 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Sat, 8 Aug 2020 16:51:25 +0200 (CEST) Subject: Deprecating fromIntegral In-Reply-To: <758B8329-1EF2-47ED-A6C7-163C3B74C3D9@gmail.com> References: <758B8329-1EF2-47ED-A6C7-163C3B74C3D9@gmail.com> Message-ID: On Sat, 8 Aug 2020, Vanessa McHale wrote: > -1 from me, massive work to overhaul the ecosystem. > > Maybe a haddock comment first? Maybe a separate library to solve these numeric problems the correct way and then ban every use of Prelude or 'base' in security-related packages and then move on to other packages? From carter.schonwald at gmail.com Mon Aug 10 03:43:27 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sun, 9 Aug 2020 23:43:27 -0400 Subject: Deprecating fromIntegral In-Reply-To: References: Message-ID: The real issue here isn’t fromIntegral, but that everything defaults to wrapping semantics. Adding variants that do signaling/exceptions and clipping variants of finite Word/Int data types rather than just wrapping is the right path forward. 
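[Editorial note: the signalling and saturating variants described above can be sketched in a few lines. The names fromIntegralChecked and fromIntegralSat are hypothetical, not functions in base; the closest existing function is the checked conversion Data.Bits.toIntegralSized, available since base 4.8.]

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
-- Hypothetical conversion variants sketching the proposal above.
import Data.Word (Word32)

-- Signalling: return Nothing instead of silently wrapping.
fromIntegralChecked :: forall a b. (Integral a, Integral b, Bounded b) => a -> Maybe b
fromIntegralChecked x
  | xi < lo || xi > hi = Nothing
  | otherwise          = Just (fromInteger xi)
  where
    xi = toInteger x
    lo = toInteger (minBound :: b)
    hi = toInteger (maxBound :: b)

-- Saturating: clamp out-of-range values to the target type's bounds.
fromIntegralSat :: forall a b. (Integral a, Integral b, Bounded b) => a -> b
fromIntegralSat x =
  fromInteger (max (toInteger (minBound :: b))
                   (min (toInteger (maxBound :: b)) (toInteger x)))

main :: IO ()
main = do
  let fiveGiB = 5 * 1024 * 1024 * 1024 :: Integer  -- a >4 GiB byte count
  print (fromIntegral fiveGiB        :: Word32)        -- wraps: 1073741824
  print (fromIntegralChecked fiveGiB :: Maybe Word32)  -- Nothing
  print (fromIntegralSat fiveGiB     :: Word32)        -- 4294967295
```

The first print reproduces the cryptonite-style truncation; the checked variant turns it into a recoverable Nothing and the saturating variant into the nearest representable value.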
On Sat, Aug 8, 2020 at 2:20 AM Herbert Valerio Riedel wrote: > Btw, subtle correctness issues hiding throughout the vast monolithic > codebase like these were part of the reason (other reasons are > explained in > https://old.reddit.com/r/haskell/comments/5lxv75/psa_please_use_unique_module_names_when_uploading/dbzegx3/ > ) > why cryptohash/cryptonite/foundation has been banned from our > codebases not the least because it's hopeless to get something like > cryptonite through formal certification for security critical > applications. At the time the author made it clear he didn't welcome > my bug reports (later I learned the author was merely peculiar as to > what they consider an actual bug worth reporting). And whenever I > mentioned this in the past on reddit as a PSA, I got defensive > reactions and disbelief from certain people who were already invested > in the cryptonite ecosystem and I was accused of spreading unfounded > FUD... and so I stopped bringing up this kind of "heresy" publicly. > > However, as you can see in > > https://hackage.haskell.org/package/cryptohash-sha256/changelog > > this particular 32-bit overflow issue was one of the critical bugfixes > I ran into and repaired in the very first release right after the > initial-fork-release back in 2016. It's astonishing it took over 4 > years for this bug to keep lingering in cryptonite/cryptohash before > it was detected. You'd expect this to have been detected in code > audits performed by Haskell companies that actively promote the use of > cryptonite early on. > > On Sat, Aug 8, 2020 at 5:09 AM Niklas Hambüchen via Libraries > wrote: > > > > Today I found another big bug caused by `fromIntegral`: > > > > https://github.com/haskell-crypto/cryptonite/issues/330 > > > > Incorrect hashes for all hash algorithms beyond 4 GiB of input. SHA hash > collisions in my productions system. 
> > > > Restating what I said there: > > > > * Until we deprecate fromIntegral, Haskell code will always be subtly > wrong and never be secure. > > * If we don't fix this, people will shy away from using Haskell for > serious work (or learn it the hard way). Rust and C both do this better. > > * If the authors of key crypto libraries fall for these traps (no blame > on them), who can get it right? We should remove the traps. > > > > The wrong code, > > > > hashInternalUpdate ctx d (fromIntegral $ B.length b) > > > > exists because it simply does not look like wrong code. In contrast, > > > > hashInternalUpdate ctx d (fromIntegralWrapping $ B.length b) > > > > does look like wrong code and would make anyone scrolling by suspicious. > > > > We can look away while continuing to claim that Haskell is a > high-correctness language, or fix stuff like this and make it one. > > _______________________________________________ > > Libraries mailing list > > Libraries at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Mon Aug 10 04:22:50 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 10 Aug 2020 00:22:50 -0400 Subject: Deprecating fromIntegral In-Reply-To: References: Message-ID: I mean Saturating not clipping On Sun, Aug 9, 2020 at 11:43 PM Carter Schonwald wrote: > The real issue here isn’t fromintrgral, but that everything defaults to > wrapping semantics > > Adding variants that do signaling/exceptions and clipping Variants Of > finite word/Int data types rather than just wrapping is the right path > forward. 
> > > > On Sat, Aug 8, 2020 at 2:20 AM Herbert Valerio Riedel > wrote: > >> Btw, subtle correctness issues hiding throughout the vast monolithic >> codebase like these were part of the reason (other reasons are >> explained in >> https://old.reddit.com/r/haskell/comments/5lxv75/psa_please_use_unique_module_names_when_uploading/dbzegx3/ >> ) >> why cryptohash/cryptonite/foundation has been banned from our >> codebases not the least because it's hopeless to get something like >> cryptonite through formal certification for security critical >> applications. At the time the author made it clear he didn't welcome >> my bug reports (later I learned the author was merely peculiar as to >> what they consider an actual bug worth reporting). And whenever I >> mentioned this in the past on reddit as a PSA, I got defensive >> reactions and disbelief from certain people who were already invested >> in the cryptonite ecosystem and I was accused of spreading unfounded >> FUD... and so I stopped bringing up this kind of "heresy" publicly. >> >> However, as you can see in >> >> https://hackage.haskell.org/package/cryptohash-sha256/changelog >> >> this particular 32-bit overflow issue was one of the critical bugfixes >> I ran into and repaired in the very first release right after the >> initial-fork-release back in 2016. It's astonishing it took over 4 >> years for this bug to keep lingering in cryptonite/cryptohash before >> it was detected. You'd expect this to have been detected in code >> audits performed by Haskell companies that actively promote the use of >> cryptonite early on. >> >> On Sat, Aug 8, 2020 at 5:09 AM Niklas Hambüchen via Libraries >> wrote: >> > >> > Today I found another big bug caused by `fromIntegral`: >> > >> > https://github.com/haskell-crypto/cryptonite/issues/330 >> > >> > Incorrect hashes for all hash algorithms beyond 4 GiB of input. SHA >> hash collisions in my productions system. 
>> > >> > Restating what I said there: >> > >> > * Until we deprecate fromIntegral, Haskell code will always be subtly >> wrong and never be secure. >> > * If we don't fix this, people will shy away from using Haskell for >> serious work (or learn it the hard way). Rust and C both do this better. >> > * If the authors of key crypto libraries fall for these traps (no blame >> on them), who can get it right? We should remove the traps. >> > >> > The wrong code, >> > >> > hashInternalUpdate ctx d (fromIntegral $ B.length b) >> > >> > exists because it simply does not look like wrong code. In contrast, >> > >> > hashInternalUpdate ctx d (fromIntegralWrapping $ B.length b) >> > >> > does look like wrong code and would make anyone scrolling by suspicious. >> > >> > We can look away while continuing to claim that Haskell is a >> high-correctness language, or fix stuff like this and make it one. >> > _______________________________________________ >> > Libraries mailing list >> > Libraries at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Mon Aug 10 05:20:40 2020 From: david.feuer at gmail.com (David Feuer) Date: Mon, 10 Aug 2020 01:20:40 -0400 Subject: Deprecating fromIntegral In-Reply-To: References: Message-ID: There isn't one problem. Different situations require different things. Wrapping makes fromInteger a ring homomorphism, which makes a lot of basic arithmetic intuition "just work". But fromIntegral is a strange and lawless beast, as well as a nasty potential optimization pitfall. 
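[Editorial note: the ring-homomorphism claim above is easy to check concretely. The following illustration uses Word8, where fromInteger is reduction mod 256 and therefore commutes with (+) and (*).]

```haskell
-- Wrapping conversion commutes with the ring operations: fromInteger
-- into Word8 is the quotient map from Integer to Z/256.
import Data.Word (Word8)

homoAdd, homoMul :: Integer -> Integer -> Bool
homoAdd a b = (fromInteger (a + b) :: Word8) == fromInteger a + fromInteger b
homoMul a b = (fromInteger (a * b) :: Word8) == fromInteger a * fromInteger b

main :: IO ()
main = print (and [ f a b | f <- [homoAdd, homoMul], a <- xs, b <- xs ])
  where xs = [-300, -1, 0, 1, 127, 128, 255, 256, 1000]  -- prints True
```

No such law holds for fromIntegral between two fixed-width types of different sizes, which is part of what makes it "lawless".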
On Sun, Aug 9, 2020, 11:44 PM Carter Schonwald wrote: > The real issue here isn’t fromintrgral, but that everything defaults to > wrapping semantics > > Adding variants that do signaling/exceptions and clipping Variants Of > finite word/Int data types rather than just wrapping is the right path > forward. > > > > On Sat, Aug 8, 2020 at 2:20 AM Herbert Valerio Riedel > wrote: > >> Btw, subtle correctness issues hiding throughout the vast monolithic >> codebase like these were part of the reason (other reasons are >> explained in >> https://old.reddit.com/r/haskell/comments/5lxv75/psa_please_use_unique_module_names_when_uploading/dbzegx3/ >> ) >> why cryptohash/cryptonite/foundation has been banned from our >> codebases not the least because it's hopeless to get something like >> cryptonite through formal certification for security critical >> applications. At the time the author made it clear he didn't welcome >> my bug reports (later I learned the author was merely peculiar as to >> what they consider an actual bug worth reporting). And whenever I >> mentioned this in the past on reddit as a PSA, I got defensive >> reactions and disbelief from certain people who were already invested >> in the cryptonite ecosystem and I was accused of spreading unfounded >> FUD... and so I stopped bringing up this kind of "heresy" publicly. >> >> However, as you can see in >> >> https://hackage.haskell.org/package/cryptohash-sha256/changelog >> >> this particular 32-bit overflow issue was one of the critical bugfixes >> I ran into and repaired in the very first release right after the >> initial-fork-release back in 2016. It's astonishing it took over 4 >> years for this bug to keep lingering in cryptonite/cryptohash before >> it was detected. You'd expect this to have been detected in code >> audits performed by Haskell companies that actively promote the use of >> cryptonite early on. 
>> >> On Sat, Aug 8, 2020 at 5:09 AM Niklas Hambüchen via Libraries >> wrote: >> > >> > Today I found another big bug caused by `fromIntegral`: >> > >> > https://github.com/haskell-crypto/cryptonite/issues/330 >> > >> > Incorrect hashes for all hash algorithms beyond 4 GiB of input. SHA >> hash collisions in my productions system. >> > >> > Restating what I said there: >> > >> > * Until we deprecate fromIntegral, Haskell code will always be subtly >> wrong and never be secure. >> > * If we don't fix this, people will shy away from using Haskell for >> serious work (or learn it the hard way). Rust and C both do this better. >> > * If the authors of key crypto libraries fall for these traps (no blame >> on them), who can get it right? We should remove the traps. >> > >> > The wrong code, >> > >> > hashInternalUpdate ctx d (fromIntegral $ B.length b) >> > >> > exists because it simply does not look like wrong code. In contrast, >> > >> > hashInternalUpdate ctx d (fromIntegralWrapping $ B.length b) >> > >> > does look like wrong code and would make anyone scrolling by suspicious. >> > >> > We can look away while continuing to claim that Haskell is a >> high-correctness language, or fix stuff like this and make it one. >> > _______________________________________________ >> > Libraries mailing list >> > Libraries at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spam at scientician.net Mon Aug 10 07:13:47 2020 From: spam at scientician.net (Bardur Arantsson) Date: Mon, 10 Aug 2020 09:13:47 +0200 Subject: Deprecating fromIntegral In-Reply-To: <758B8329-1EF2-47ED-A6C7-163C3B74C3D9@gmail.com> References: <758B8329-1EF2-47ED-A6C7-163C3B74C3D9@gmail.com> Message-ID: On 08/08/2020 15.44, Vanessa McHale wrote: > -1 from me, massive work to overhaul the ecosystem. > Why would a massive overhaul be necessary for deprecation? If that's the case then there's a deeper, more serious underlying issue around deprecation, IMO. Regards, From svenpanne at gmail.com Mon Aug 10 08:34:11 2020 From: svenpanne at gmail.com (Sven Panne) Date: Mon, 10 Aug 2020 10:34:11 +0200 Subject: Deprecating fromIntegral In-Reply-To: References: <758B8329-1EF2-47ED-A6C7-163C3B74C3D9@gmail.com> Message-ID: On Mon, 10 Aug 2020 at 09:15, Bardur Arantsson < spam at scientician.net> wrote: > On 08/08/2020 15.44, Vanessa McHale wrote: > > -1 from me, massive work to overhaul the ecosystem. > > Why would a massive overhaul be necessary for deprecation? If that's the > case then there's a deeper, more serious underlying issue around > deprecation, IMO. > Two things come to my mind here: * You'll probably break quite a few projects which use -Werror. I know that there are different opinions regarding -Werror in general, but in any case there *will* be breakage. * If you consider books and tutorials a part of the ecosystem (which I definitely do), there is even more "breakage": Off the top of my head I would say that quite a few of them use fromIntegral, so deprecation will cause confusion. All these things are definitely fixable, but neither quickly nor at a negligible cost. Deprecations should not be done lightly. Regarding the deprecation itself: I fail to see why fromIntegral is worse than (+), (-), (*), ..., and nobody is proposing to remove these. The real problem is using fixed-sized numbers where they shouldn't be used, so a -1 from me. 
Cheers, S. -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Mon Aug 10 08:44:18 2020 From: david.feuer at gmail.com (David Feuer) Date: Mon, 10 Aug 2020 04:44:18 -0400 Subject: Deprecating fromIntegral In-Reply-To: References: <758B8329-1EF2-47ED-A6C7-163C3B74C3D9@gmail.com> Message-ID: I wonder if this is something HLint could help with. I imagine many bugs could be avoided if every use of fromIntegral used visible type application to indicate explicitly what type was being converted to what type. On Mon, Aug 10, 2020, 4:34 AM Sven Panne wrote: > Am Mo., 10. Aug. 2020 um 09:15 Uhr schrieb Bardur Arantsson < > spam at scientician.net>: > >> On 08/08/2020 15.44, Vanessa McHale wrote: >> > -1 from me, massive work to overhaul the ecosystem. >> >> Why would a massive overhaul be necessary for deprecation? If that's the >> case then there's a deeper more serious underlying issue around >> deprecation, IMO. >> > > Two things come to my mind here: > > * You'll probably break quite a few projects which use -Werror. I know > that there are different opinions regarding -Werror in general, but in any > case there *will* be breakage. > > * If you consider books and tutorials a part of the ecosystem (which I > definitely do), there is even more "breakage": From the top of my head I > would say that quite a few of them use fromIntegral, so deprecation will > cause confusion. > > All these things are definitely fixable, but neither quickly nor without a > negligible cost. Deprecations should not be done lightly. > > Regarding the deprecation itself: I fail to see why fromIntegral is worse > than (+), (-), (*), ..., and nobody is proposing to remove these. The real > problem is using fixed-sized numbers where they shouldn't be used, so a -1 > from me. > > Cheers, > S. 
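David's suggestion about visible type application might look like the following in practice (an illustrative sketch assuming the TypeApplications extension, not code from the thread):

```haskell
{-# LANGUAGE TypeApplications #-}
import Data.Word (Word32)

-- With visible type application, both the source and target types of the
-- conversion are spelled out at the call site, so a narrowing cast such as
-- Int -> Word32 is easy to spot in review or flag with a lint rule:
lengthAsWord32 :: Int -> Word32
lengthAsWord32 n = fromIntegral @Int @Word32 n
```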
> > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sakumatti.luukkonen at gmail.com Mon Aug 10 09:06:16 2020 From: sakumatti.luukkonen at gmail.com (Sakumatti Luukkonen) Date: Mon, 10 Aug 2020 12:06:16 +0300 Subject: Deprecating fromIntegral In-Reply-To: References: <758B8329-1EF2-47ED-A6C7-163C3B74C3D9@gmail.com> Message-ID: I imagine that STAN (https://github.com/kowainik/stan) should be able to detect dangerous uses of fromIntegral even without visible type applications. But perhaps GHC could simply issue a warning in such cases? ma 10.8.2020 klo 11.45 David Feuer kirjoitti: > I wonder if this is something HLint could help with. I imagine many bugs > could be avoided if every use of fromIntegral used visible type application > to indicate explicitly what type was being converted to what type. > > On Mon, Aug 10, 2020, 4:34 AM Sven Panne wrote: > >> Am Mo., 10. Aug. 2020 um 09:15 Uhr schrieb Bardur Arantsson < >> spam at scientician.net>: >> >>> On 08/08/2020 15.44, Vanessa McHale wrote: >>> > -1 from me, massive work to overhaul the ecosystem. >>> >>> Why would a massive overhaul be necessary for deprecation? If that's the >>> case then there's a deeper more serious underlying issue around >>> deprecation, IMO. >>> >> >> Two things come to my mind here: >> >> * You'll probably break quite a few projects which use -Werror. I know >> that there are different opinions regarding -Werror in general, but in any >> case there *will* be breakage. >> >> * If you consider books and tutorials a part of the ecosystem (which I >> definitely do), there is even more "breakage": From the top of my head I >> would say that quite a few of them use fromIntegral, so deprecation will >> cause confusion. 
>> >> All these things are definitely fixable, but neither quickly nor at >> a negligible cost. Deprecations should not be done lightly. >> >> Regarding the deprecation itself: I fail to see why fromIntegral is worse >> than (+), (-), (*), ..., and nobody is proposing to remove these. The real >> problem is using fixed-sized numbers where they shouldn't be used, so a -1 >> from me. >> >> Cheers, >> S. >> >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -- Sakumatti Luukkonen -------------- next part -------------- An HTML attachment was scrubbed... URL: From vamchale at gmail.com Mon Aug 10 11:10:21 2020 From: vamchale at gmail.com (Vanessa McHale) Date: Mon, 10 Aug 2020 06:10:21 -0500 Subject: Deprecating fromIntegral In-Reply-To: References: Message-ID: <8EA71896-BF9B-4AE6-AF88-A2C9FD6EBA5F@gmail.com> I agree! Silently changing semantics under one’s feet may be bad, but one needn’t remove the function itself. That’s just work for library maintainers! Cheers, Vanessa McHale > On Aug 9, 2020, at 10:43 PM, Carter Schonwald wrote: > > The real issue here isn’t fromIntegral, but that everything defaults to wrapping semantics. > > Adding variants that do signaling/exceptions and clipping variants of finite word/Int data types rather than just wrapping is the right path forward.
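For concreteness, the three behaviours Carter distinguishes (wrapping, signalling, and clipping) might look like this for a single narrowing conversion; the names are hypothetical and none of these variants is in base:

```haskell
import Data.Int (Int32, Int64)

-- Today's behaviour: silently truncate the high bits.
wrappingNarrow :: Int64 -> Int32
wrappingNarrow = fromIntegral

-- Signalling variant: report out-of-range inputs instead of wrapping.
checkedNarrow :: Int64 -> Maybe Int32
checkedNarrow n
  | n < fromIntegral (minBound :: Int32) = Nothing
  | n > fromIntegral (maxBound :: Int32) = Nothing
  | otherwise                            = Just (fromIntegral n)

-- Clipping variant: saturate at the bounds of the target type.
clippingNarrow :: Int64 -> Int32
clippingNarrow n
  | n < fromIntegral (minBound :: Int32) = minBound
  | n > fromIntegral (maxBound :: Int32) = maxBound
  | otherwise                            = fromIntegral n
```

An exception-throwing variant could then be a thin wrapper that turns the Nothing case of checkedNarrow into an error.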
> > > > On Sat, Aug 8, 2020 at 2:20 AM Herbert Valerio Riedel > wrote: > Btw, subtle correctness issues hiding throughout the vast monolithic > codebase like these were part of the reason (other reasons are > explained in https://old.reddit.com/r/haskell/comments/5lxv75/psa_please_use_unique_module_names_when_uploading/dbzegx3/ ) > why cryptohash/cryptonite/foundation has been banned from our > codebases not the least because it's hopeless to get something like > cryptonite through formal certification for security critical > applications. At the time the author made it clear he didn't welcome > my bug reports (later I learned the author was merely peculiar as to > what they consider an actual bug worth reporting). And whenever I > mentioned this in the past on reddit as a PSA, I got defensive > reactions and disbelief from certain people who were already invested > in the cryptonite ecosystem and I was accused of spreading unfounded > FUD... and so I stopped bringing up this kind of "heresy" publicly. > > However, as you can see in > > https://hackage.haskell.org/package/cryptohash-sha256/changelog > > this particular 32-bit overflow issue was one of the critical bugfixes > I ran into and repaired in the very first release right after the > initial-fork-release back in 2016. It's astonishing it took over 4 > years for this bug to keep lingering in cryptonite/cryptohash before > it was detected. You'd expect this to have been detected in code > audits performed by Haskell companies that actively promote the use of > cryptonite early on. > > On Sat, Aug 8, 2020 at 5:09 AM Niklas Hambüchen via Libraries > > wrote: > > > > Today I found another big bug caused by `fromIntegral`: > > > > https://github.com/haskell-crypto/cryptonite/issues/330 > > > > Incorrect hashes for all hash algorithms beyond 4 GiB of input. SHA hash collisions in my productions system. 
> > > > Restating what I said there: > > > > * Until we deprecate fromIntegral, Haskell code will always be subtly wrong and never be secure. > > * If we don't fix this, people will shy away from using Haskell for serious work (or learn it the hard way). Rust and C both do this better. > > * If the authors of key crypto libraries fall for these traps (no blame on them), who can get it right? We should remove the traps. > > > > The wrong code, > > > > hashInternalUpdate ctx d (fromIntegral $ B.length b) > > > > exists because it simply does not look like wrong code. In contrast, > > > > hashInternalUpdate ctx d (fromIntegralWrapping $ B.length b) > > > > does look like wrong code and would make anyone scrolling by suspicious. > > > > We can look away while continuing to claim that Haskell is a high-correctness language, or fix stuff like this and make it one. > > _______________________________________________ > > Libraries mailing list > > Libraries at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Mon Aug 10 13:49:20 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 10 Aug 2020 09:49:20 -0400 Subject: Deprecating fromIntegral In-Reply-To: <8EA71896-BF9B-4AE6-AF88-A2C9FD6EBA5F@gmail.com> References: <8EA71896-BF9B-4AE6-AF88-A2C9FD6EBA5F@gmail.com> Message-ID: Agreed! And 1-2 maintainers' egregious bugs aren't evidence enough to throw the baby out with the bathwater.
As David and others say, there are a lot of different semantics for mapping integers to finite words and groups etc, and the issue here is ultimately a bad choice in target. Int32 is never a portable size for byte arrays. In C or Haskell. On Mon, Aug 10, 2020 at 7:10 AM Vanessa McHale wrote: > I agree! Silently changing semantics under one’s feet may be bad, but one > needn’t remove the function itself. That’s just work for library > maintainers! > > Cheers, > Vanessa McHale > > > On Aug 9, 2020, at 10:43 PM, Carter Schonwald > wrote: > > The real issue here isn’t fromIntegral, but that everything defaults to > wrapping semantics > > Adding variants that do signaling/exceptions and clipping variants of > finite word/Int data types rather than just wrapping is the right path > forward. > > > > On Sat, Aug 8, 2020 at 2:20 AM Herbert Valerio Riedel > wrote: >> Btw, subtle correctness issues hiding throughout the vast monolithic >> codebase like these were part of the reason (other reasons are >> explained in >> https://old.reddit.com/r/haskell/comments/5lxv75/psa_please_use_unique_module_names_when_uploading/dbzegx3/ >> ) >> why cryptohash/cryptonite/foundation has been banned from our >> codebases not the least because it's hopeless to get something like >> cryptonite through formal certification for security critical >> applications. At the time the author made it clear he didn't welcome >> my bug reports (later I learned the author was merely peculiar as to >> what they consider an actual bug worth reporting). And whenever I >> mentioned this in the past on reddit as a PSA, I got defensive >> reactions and disbelief from certain people who were already invested >> in the cryptonite ecosystem and I was accused of spreading unfounded >> FUD... and so I stopped bringing up this kind of "heresy" publicly.
>> >> However, as you can see in >> >> https://hackage.haskell.org/package/cryptohash-sha256/changelog >> >> this particular 32-bit overflow issue was one of the critical bugfixes >> I ran into and repaired in the very first release right after the >> initial-fork-release back in 2016. It's astonishing it took over 4 >> years for this bug to keep lingering in cryptonite/cryptohash before >> it was detected. You'd expect this to have been detected in code >> audits performed by Haskell companies that actively promote the use of >> cryptonite early on. >> >> On Sat, Aug 8, 2020 at 5:09 AM Niklas Hambüchen via Libraries >> wrote: >> > >> > Today I found another big bug caused by `fromIntegral`: >> > >> > https://github.com/haskell-crypto/cryptonite/issues/330 >> > >> > Incorrect hashes for all hash algorithms beyond 4 GiB of input. SHA >> hash collisions in my productions system. >> > >> > Restating what I said there: >> > >> > * Until we deprecate fromIntegral, Haskell code will always be subtly >> wrong and never be secure. >> > * If we don't fix this, people will shy away from using Haskell for >> serious work (or learn it the hard way). Rust and C both do this better. >> > * If the authors of key crypto libraries fall for these traps (no blame >> on them), who can get it right? We should remove the traps. >> > >> > The wrong code, >> > >> > hashInternalUpdate ctx d (fromIntegral $ B.length b) >> > >> > exists because it simply does not look like wrong code. In contrast, >> > >> > hashInternalUpdate ctx d (fromIntegralWrapping $ B.length b) >> > >> > does look like wrong code and would make anyone scrolling by suspicious. >> > >> > We can look away while continuing to claim that Haskell is a >> high-correctness language, or fix stuff like this and make it one. 
>> > _______________________________________________ >> > Libraries mailing list >> > Libraries at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emertens at gmail.com Mon Aug 10 16:11:34 2020 From: emertens at gmail.com (Eric Mertens) Date: Mon, 10 Aug 2020 09:11:34 -0700 Subject: Deprecating fromIntegral In-Reply-To: References: <8EA71896-BF9B-4AE6-AF88-A2C9FD6EBA5F@gmail.com> Message-ID: <659980BB-01FB-4360-976C-C84F88433C0C@gmail.com> Step one is to build the new preferred API and then step two is to worry about moving the whole ecosystem over. I’d love to have some safer numeric conversion functions around. Is there a good starting point already or do we need to build something new? > On Aug 10, 2020, at 6:49 AM, Carter Schonwald wrote: > > Agreed! > > And 1-2 maintainers egregious bugs isn’t evidence enough for baby and bath water. > > As david and others say, there’s a lot of different semantics for mapping integers to finite words and groups etc, and the issue here is ultimately a bad choice in target. Int32 is never a portable size for byte arrays. In c or Haskell. > > On Mon, Aug 10, 2020 at 7:10 AM Vanessa McHale > wrote: > I agree! Silently changing semantics under one’s feet may be bad, but one needn’t remove the function itself. That’s just work for library maintainers! 
> > Cheers, > Vanessa McHale > > >> On Aug 9, 2020, at 10:43 PM, Carter Schonwald > wrote: >> >> The real issue here isn’t fromintrgral, but that everything defaults to wrapping semantics >> >> Adding variants that do signaling/exceptions and clipping Variants Of finite word/Int data types rather than just wrapping is the right path forward. >> >> >> >> On Sat, Aug 8, 2020 at 2:20 AM Herbert Valerio Riedel > wrote: >> Btw, subtle correctness issues hiding throughout the vast monolithic >> codebase like these were part of the reason (other reasons are >> explained in https://old.reddit.com/r/haskell/comments/5lxv75/psa_please_use_unique_module_names_when_uploading/dbzegx3/ ) >> why cryptohash/cryptonite/foundation has been banned from our >> codebases not the least because it's hopeless to get something like >> cryptonite through formal certification for security critical >> applications. At the time the author made it clear he didn't welcome >> my bug reports (later I learned the author was merely peculiar as to >> what they consider an actual bug worth reporting). And whenever I >> mentioned this in the past on reddit as a PSA, I got defensive >> reactions and disbelief from certain people who were already invested >> in the cryptonite ecosystem and I was accused of spreading unfounded >> FUD... and so I stopped bringing up this kind of "heresy" publicly. >> >> However, as you can see in >> >> https://hackage.haskell.org/package/cryptohash-sha256/changelog >> >> this particular 32-bit overflow issue was one of the critical bugfixes >> I ran into and repaired in the very first release right after the >> initial-fork-release back in 2016. It's astonishing it took over 4 >> years for this bug to keep lingering in cryptonite/cryptohash before >> it was detected. You'd expect this to have been detected in code >> audits performed by Haskell companies that actively promote the use of >> cryptonite early on. 
>> >> On Sat, Aug 8, 2020 at 5:09 AM Niklas Hambüchen via Libraries >> > wrote: >> > >> > Today I found another big bug caused by `fromIntegral`: >> > >> > https://github.com/haskell-crypto/cryptonite/issues/330 >> > >> > Incorrect hashes for all hash algorithms beyond 4 GiB of input. SHA hash collisions in my productions system. >> > >> > Restating what I said there: >> > >> > * Until we deprecate fromIntegral, Haskell code will always be subtly wrong and never be secure. >> > * If we don't fix this, people will shy away from using Haskell for serious work (or learn it the hard way). Rust and C both do this better. >> > * If the authors of key crypto libraries fall for these traps (no blame on them), who can get it right? We should remove the traps. >> > >> > The wrong code, >> > >> > hashInternalUpdate ctx d (fromIntegral $ B.length b) >> > >> > exists because it simply does not look like wrong code. In contrast, >> > >> > hashInternalUpdate ctx d (fromIntegralWrapping $ B.length b) >> > >> > does look like wrong code and would make anyone scrolling by suspicious. >> > >> > We can look away while continuing to claim that Haskell is a high-correctness language, or fix stuff like this and make it one. >> > _______________________________________________ >> > Libraries mailing list >> > Libraries at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From carter.schonwald at gmail.com Mon Aug 10 17:33:51 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 10 Aug 2020 13:33:51 -0400 Subject: Deprecating fromIntegral In-Reply-To: <659980BB-01FB-4360-976C-C84F88433C0C@gmail.com> References: <8EA71896-BF9B-4AE6-AF88-A2C9FD6EBA5F@gmail.com> <659980BB-01FB-4360-976C-C84F88433C0C@gmail.com> Message-ID: well said. On Mon, Aug 10, 2020 at 12:12 PM Eric Mertens wrote: > Step one is to build the new preferred API and then step two is to worry > about moving the whole ecosystem over. I’d love to have some safer numeric > conversion functions around. Is there a good starting point already or do > we need to build something new? > > On Aug 10, 2020, at 6:49 AM, Carter Schonwald > wrote: > > Agreed! > > And 1-2 maintainers egregious bugs isn’t evidence enough for baby and bath > water. > > As david and others say, there’s a lot of different semantics for mapping > integers to finite words and groups etc, and the issue here is ultimately a > bad choice in target. Int32 is never a portable size for byte arrays. In c > or Haskell. > > On Mon, Aug 10, 2020 at 7:10 AM Vanessa McHale wrote: > >> I agree! Silently changing semantics under one’s feet may be bad, but one >> needn’t remove the function itself. That’s just work for library >> maintainers! >> >> Cheers, >> Vanessa McHale >> >> >> On Aug 9, 2020, at 10:43 PM, Carter Schonwald >> wrote: >> >> The real issue here isn’t fromintrgral, but that everything defaults to >> wrapping semantics >> >> Adding variants that do signaling/exceptions and clipping Variants Of >> finite word/Int data types rather than just wrapping is the right path >> forward. 
>> >> >> >> On Sat, Aug 8, 2020 at 2:20 AM Herbert Valerio Riedel >> wrote: >> >>> Btw, subtle correctness issues hiding throughout the vast monolithic >>> codebase like these were part of the reason (other reasons are >>> explained in >>> https://old.reddit.com/r/haskell/comments/5lxv75/psa_please_use_unique_module_names_when_uploading/dbzegx3/ >>> ) >>> why cryptohash/cryptonite/foundation has been banned from our >>> codebases not the least because it's hopeless to get something like >>> cryptonite through formal certification for security critical >>> applications. At the time the author made it clear he didn't welcome >>> my bug reports (later I learned the author was merely peculiar as to >>> what they consider an actual bug worth reporting). And whenever I >>> mentioned this in the past on reddit as a PSA, I got defensive >>> reactions and disbelief from certain people who were already invested >>> in the cryptonite ecosystem and I was accused of spreading unfounded >>> FUD... and so I stopped bringing up this kind of "heresy" publicly. >>> >>> However, as you can see in >>> >>> https://hackage.haskell.org/package/cryptohash-sha256/changelog >>> >>> this particular 32-bit overflow issue was one of the critical bugfixes >>> I ran into and repaired in the very first release right after the >>> initial-fork-release back in 2016. It's astonishing it took over 4 >>> years for this bug to keep lingering in cryptonite/cryptohash before >>> it was detected. You'd expect this to have been detected in code >>> audits performed by Haskell companies that actively promote the use of >>> cryptonite early on. >>> >>> On Sat, Aug 8, 2020 at 5:09 AM Niklas Hambüchen via Libraries >>> wrote: >>> > >>> > Today I found another big bug caused by `fromIntegral`: >>> > >>> > https://github.com/haskell-crypto/cryptonite/issues/330 >>> > >>> > Incorrect hashes for all hash algorithms beyond 4 GiB of input. SHA >>> hash collisions in my productions system. 
>>> > >>> > Restating what I said there: >>> > >>> > * Until we deprecate fromIntegral, Haskell code will always be subtly >>> wrong and never be secure. >>> > * If we don't fix this, people will shy away from using Haskell for >>> serious work (or learn it the hard way). Rust and C both do this better. >>> > * If the authors of key crypto libraries fall for these traps (no >>> blame on them), who can get it right? We should remove the traps. >>> > >>> > The wrong code, >>> > >>> > hashInternalUpdate ctx d (fromIntegral $ B.length b) >>> > >>> > exists because it simply does not look like wrong code. In contrast, >>> > >>> > hashInternalUpdate ctx d (fromIntegralWrapping $ B.length b) >>> > >>> > does look like wrong code and would make anyone scrolling by >>> suspicious. >>> > >>> > We can look away while continuing to claim that Haskell is a >>> high-correctness language, or fix stuff like this and make it one. >>> > _______________________________________________ >>> > Libraries mailing list >>> > Libraries at haskell.org >>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>> _______________________________________________ >>> Libraries mailing list >>> Libraries at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>> >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> >> >> _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From falsifian at falsifian.org Mon Aug 10 19:38:45 2020 From: falsifian at falsifian.org (James Cook) Date: Mon, 10 Aug 2020 19:38:45 +0000 Subject: Cabal: N executables without building all the modules N times? Message-ID: Hi libraries@, (Let me know if there's a better place for Cabal questions. This list is linked from https://www.haskell.org/cabal/ as a place to ask questions.) I have a project with lots of modules, and lots of executables that use those modules. How should I structure my .cabal file if I want cabal to only build the common modules once? I have found I can do it as follows, but I am wondering if there's a way to do it without step 1: 1. Create a "library_src" directory with the source for all the shared modules. Put the source files for the executables somewhere else. 2. Create a .cabal file with one "library" stanza with "hs-source-dirs: library_src", and several executable stanzas. If I leave out step 1 and put all the source files in the same place, then the compiler builds them separately for each executable, and I get a "missing-home-modules" warning. Is it just a fact of life that I need to separate my source into separate directories if I want this to work properly? Here's a small example, with four files, where I left out step 1 (i.e. put everything in the same directory). It does not behave the way I want it to: it builds the Library module three times if I run "cabal v2-build" and "cabal v2-test", and it shows the "missing-home-modules" warning. It also still builds if I remove "mhm" from build-depends of the executable and test-suite; I'd rather it failed to build if I did that.
mhm.cabal:

    cabal-version: 2.2
    name: mhm
    version: 0

    library
      exposed-modules: Library
      default-language: Haskell2010
      build-depends: base >=4.12 && <4.13

    executable e
      main-is: e.hs
      build-depends: mhm, base >=4.12 && <4.13
      default-language: Haskell2010

    test-suite t
      type: exitcode-stdio-1.0
      main-is: t.hs
      build-depends: mhm, base >=4.12 && <4.13
      default-language: Haskell2010

Library.hs:

    module Library where
    n :: Int
    n = 5

e.hs:

    module Main where
    import Library
    main = print n

t.hs:

    module Main where
    import Library
    main = print n

From mail at nh2.me Mon Aug 10 22:35:40 2020 From: mail at nh2.me (Niklas Hambüchen) Date: Tue, 11 Aug 2020 00:35:40 +0200 Subject: Cabal: N executables without building all the modules N times? In-Reply-To: References: Message-ID: Hey James, yes, if you want to avoid duplicate compilation, you need to define a `library` with a separate `hs-source-dirs` and use that library in the `build-depends` of your executables. See https://stackoverflow.com/questions/12305970/how-to-make-a-haskell-cabal-project-with-libraryexecutables-that-still-run-with and the linked question. Cheers, Niklas From ekmett at gmail.com Wed Aug 12 18:00:01 2020 From: ekmett at gmail.com (Edward Kmett) Date: Wed, 12 Aug 2020 11:00:01 -0700 Subject: Deprecating fromIntegral In-Reply-To: References: <758B8329-1EF2-47ED-A6C7-163C3B74C3D9@gmail.com> Message-ID: On Mon, Aug 10, 2020 at 12:15 AM Bardur Arantsson wrote: > Why would a massive overhaul be necessary for deprecation? If that's the > case then there's a deeper more serious underlying issue around > deprecation, IMO. > Deprecation would at a minimum first require us to offer an alternative, let that percolate through code that uses base and only then apply the deprecation pragma under the 3 release policy. Otherwise there isn't a path for users to write code that compiles without warnings without pragmas.
Keep in mind fromIntegral is a Haskell Report-facing change, so it is the sort of thing that the CLC tends to hold to a higher bar still. Changes to it will perforce invalidate a lot of training material. I'm not really weighing in here on whether this is a good or a bad change, just noting that there is a non-trivial amount of process to enacting it if the community does decide to move forward. That isn't "a serious underlying issue", so much as establishing a baseline of stability and usability. I'm personally +1 on the *idea* that it'd be good to find a solution that allows you to safely identify whether you want a coercion that can truncate or not. The issue re-raised is a good one. But I don't happen to like the solution that was offered when the corpse of this issue was disinterred. Now for where I think this proposal, at least insofar as it references a concrete plan of attack, falls down: fromIntegral leans on toInteger and fromInteger as a common intermediary. This allows O(n) instances where the modules linked by Niklas would require O(n^2). But it is even worse than that. As a library author I don't need to know your integral type to be able to fromIntegral from mine to yours today, but I really would in a world where that became unavailable. Both of the modules linked use fundep-less multi-parameter typeclasses, which means type inference for them is terrible (and even in the single-parameter type class case we have, remember, with Num, defaulting can kick in, so we're even farther removed from such a safety net here), and instances will easily accidentally overlap the moment you go to define something like From a (Forward a), making this problem even worse for any non-trivial numeric type.
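The O(n) versus O(n^2) point above can be sketched concretely (an illustrative sketch; the `From` class here is hypothetical and not part of base):

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}

-- O(n) instances: with Integer as the hub type, each numeric type supplies
-- only toInteger and fromInteger, and any Integral source can reach any
-- Num target. This is exactly how fromIntegral is defined today:
viaInteger :: (Integral a, Num b) => a -> b
viaInteger = fromInteger . toInteger

-- O(n^2) instances: a direct pairwise conversion class would need one
-- instance per (source, target) pair, and without functional dependencies
-- it also gives poor type inference:
class From a b where
  from :: a -> b
```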
So in light of that I'm personally strongly -1 on the concrete set of actions being proposed unless a decent API can be found that scales, doesn't have those problems, and could be written into something like a Haskell Report without dipping into language extensions we haven't formalized in a Report. That isn't to say that some variant of a proposal can't be found, but I'm having a hard time satisfying all the constraints that a Prelude-facing change really should meet. Now, that isn't to say something can't be done; for instance, weaker compromises like providing a module in base with safer casts and the like could perhaps use whatever language extensions were suited to the task, as there you have a lot more freedom to use more modern Haskell. But even there I'd still like to see a way that factors the O(n^2) cases into O(n), doesn't block any decently polymorphic numeric types behind overlapping instances, and doesn't make type inference go to hell. So, let's see if we can't find a proposal that doesn't violate the gauntlet of constraints imposed above. Off the cuff: Add a variant of fromInteger to Num that returns Maybe a.

    class Num a where
      ...
      fromIntegerMaybe :: Integer -> Maybe a
      fromIntegerMaybe a = Just (fromInteger a)

Modify the existing instances of Num to implement this extra member, and have it return Nothing if the Integer is out of bounds. As a concrete point in the proposal design space, keep the existing fromInteger/fromIntegral as a wrapping conversion. Why? It dodges the Haskell Report change. Others might disagree. I'm just trying to offer fuel for the debate that takes it in a productive direction.

    fromIntegralMaybe :: (Integral a, Num b) => a -> Maybe b
    fromIntegralMaybe = fromIntegerMaybe . toInteger

can now be used for safe conversions. There are 3 target semantics one might reasonably want to see in their code:

1.) Wrapping (the existing fromIntegral)
2.) Throwing an exception on overflow (via some wrapping combinator that just handles the above Maybe)
3.) Returning Nothing so the error can be handled in pure code.

Each can be built on top of fromIntegral and fromIntegralMaybe. Room for variation:

* fromIntegralMaybe could be switched to something like Integer -> Either String a, which would let you give back an error message saying why you didn't like the Integer.
* fromIntegralWrapped could be added explicitly, and then after a suitable period fromIntegral could be deprecated, but this would require an annoying dance where users would have to switch what they define, which is non-trivial to the point of near impossibility under the 3 release policy, so I'd personally not go that way, but hey, it is not my call.
* Shorter names might be nice. fromIntegral is long enough that we get mocked by other language communities. Adding more words to the combinator name here compounds that issue.

I mention this because something along these lines would address the substance of the issue here without inducing the horrible unusability that I feel the concrete proposal offered here would create. -Edward -------------- next part -------------- An HTML attachment was scrubbed... URL: From falsifian at falsifian.org Wed Aug 12 21:52:14 2020 From: falsifian at falsifian.org (James Cook) Date: Wed, 12 Aug 2020 21:52:14 +0000 Subject: Cabal: N executables without building all the modules N times? In-Reply-To: References: Message-ID: <6a6b8bb2-ee46-4ab4-dd55-62b9d2a51a58@falsifian.org> On 2020-08-10 22:35, Niklas Hambüchen wrote: > Hey James, > > yes, if you want to avoid duplicate compilation, you need to define a `library` with a separate `hs-source-dirs` and use that library in the `build-depends` of your executables. > > See > > https://stackoverflow.com/questions/12305970/how-to-make-a-haskell-cabal-project-with-libraryexecutables-that-still-run-with > > and the linked question. > > Cheers, > Niklas Thanks!
That's exactly the information I was looking for. I guess I did not use the right search terms. -- James From andrew.lelechenko at gmail.com Thu Aug 13 13:09:59 2020 From: andrew.lelechenko at gmail.com (Andrew Lelechenko) Date: Thu, 13 Aug 2020 14:09:59 +0100 Subject: Deprecating fromIntegral In-Reply-To: References: Message-ID: <7B2D4736-7443-4557-BE37-CB32A7F092B0@gmail.com> > From: Edward Kmett > > fromIntegralMaybe :: (Integral a, Num b) => a -> Maybe b > fromIntegralMaybe = fromIntegerMaybe . toInteger I’m late to the party, so might be missing something. Did someone already propose using http://hackage.haskell.org/package/base-4.14.0.0/docs/Data-Bits.html#v:toIntegralSized, which has a very similar signature? toIntegralSized :: (Integral a, Integral b, Bits a, Bits b) => a -> Maybe b No need for O(n^2) instances and it is already in `base`. Best regards, Andrew From mikolaj at well-typed.com Thu Aug 13 15:09:23 2020 From: mikolaj at well-typed.com (Mikolaj Konarski) Date: Thu, 13 Aug 2020 17:09:23 +0200 Subject: Deprecating fromIntegral In-Reply-To: <7B2D4736-7443-4557-BE37-CB32A7F092B0@gmail.com> References: <7B2D4736-7443-4557-BE37-CB32A7F092B0@gmail.com> Message-ID: This doesn't work when the target type is, e.g., Double (Num, but not Integral), but thank you for the tip, I've already used it in my code to fail when wrapping, etc., would occur for integral types. On Thu, Aug 13, 2020 at 3:10 PM Andrew Lelechenko wrote: > > > From: Edward Kmett > > > > fromIntegralMaybe :: (Integral a, Num b) => a -> Maybe b > > fromIntegralMaybe = fromIntegerMaybe . toInteger > > I’m late to the party, so might be missing something. Did someone already propose using http://hackage.haskell.org/package/base-4.14.0.0/docs/Data-Bits.html#v:toIntegralSized, which has a very similar signature? > > toIntegralSized :: (Integral a, Integral b, Bits a, Bits b) => a -> Maybe b > > No need for O(n^2) instances and it is already in `base`.
> > Best regards, > Andrew > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries From hvr at gnu.org Thu Aug 13 15:48:08 2020 From: hvr at gnu.org (Herbert Valerio Riedel) Date: Thu, 13 Aug 2020 17:48:08 +0200 Subject: Deprecating fromIntegral In-Reply-To: References: <7B2D4736-7443-4557-BE37-CB32A7F092B0@gmail.com> Message-ID: <87d03uczt3.fsf@gnu.org> Fwiw, this was proposed back then in 2014, you can find the libraries thread and the patch over at - https://mail.haskell.org/pipermail/libraries/2014-November/024383.html - https://gitlab.haskell.org/ghc/ghc/-/issues/9816 respectively. It might also be worth pointing out the function added to `base` originated from my package `int-cast` - https://hackage.haskell.org/package/int-cast-0.2.0.0/docs/Data-IntCast.html which provides the means to have compile-time verified "safe" (and also a slightly weaker lossless "iso"morphic) integer conversions without requiring O(n^2) instances. I typically use `int-cast` for critical code where I need more assurance and want to prove that my integer conversions are safe. -- hvr Mikolaj Konarski writes: > This doesn't work when the target type is, e.g., Double (Num, but not > Integral), but thank you for the tip, I've already used it in my code > to fail when wrapping, etc., would occur for integral types. > > On Thu, Aug 13, 2020 at 3:10 PM Andrew Lelechenko > wrote: >> >> > From: Edward Kmett >> > >> > fromIntegralMaybe :: (Integral a, Num b) => a -> Maybe b >> > fromIntegralMaybe = fromIntegerMaybe . toInteger >> >> I’m late to the party, so might be missing something. Did someone already proposed using http://hackage.haskell.org/package/base-4.14.0.0/docs/Data-Bits.html#v:toIntegralSized, which has a very similar signature? >> >> toIntegralSized :: (Integral a, Integral b, Bits a, Bits b) => a -> Maybe b >> >> No need for O(n^2) instances and it is already in `base`. 
>> >> Best regards, >> Andrew >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 227 bytes Desc: not available URL: From ben.franksen at online.de Thu Aug 13 16:54:04 2020 From: ben.franksen at online.de (Ben Franksen) Date: Thu, 13 Aug 2020 18:54:04 +0200 Subject: Deprecating fromIntegral In-Reply-To: <87d03uczt3.fsf@gnu.org> References: <7B2D4736-7443-4557-BE37-CB32A7F092B0@gmail.com> <87d03uczt3.fsf@gnu.org> Message-ID: Am 13.08.20 um 17:48 schrieb Herbert Valerio Riedel: > It might also be worth pointing out the function added to `base` > originated from my package `int-cast` > > - https://hackage.haskell.org/package/int-cast-0.2.0.0/docs/Data-IntCast.html > > which provides the means to have compile-time verified "safe" (and also > a slightly weaker lossless "iso"morphic) integer conversions without > requiring O(n^2) instances. This is pretty cool. Thanks for sharing. Cheers Ben From david.feuer at gmail.com Thu Aug 13 17:26:15 2020 From: david.feuer at gmail.com (David Feuer) Date: Thu, 13 Aug 2020 13:26:15 -0400 Subject: Operator precedence help Message-ID: I'm trying to work out appropriate precedences for operators and pattern synonyms in my brand-new compact-sequences package. I currently have stacks and queues, but I will soon have deques, so let's pretend. For consistency, operators will match pattern synonyms. (<|), pattern (:<) :: a -> Deque a -> Deque a (|>), pattern (:>) :: Deque a -> a -> Deque a :< and :> need to have different precedence to allow things like a :< b :< xs :> c :> d to work nicely, but what numbers should I pick? 
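[Editorial note: the mixed chain in David's example parses only if the two operators get distinct precedence levels. One assignment that makes it work, sketched on a toy type with plain infix constructors rather than pattern synonyms; the levels 5 and 4 are an illustrative guess, not necessarily what compact-sequences chose:]

```haskell
-- Toy type purely to demonstrate the fixity interaction; the levels
-- here are an illustrative choice, not the package's actual fixities.
infixr 5 :<
infixl 4 :>

data Deque a
  = Empty
  | a :< Deque a   -- cons onto the front
  | Deque a :> a   -- snoc onto the back
  deriving Show

example :: Deque Int
example = 1 :< 2 :< Empty :> 3 :> 4
-- parses as ((1 :< (2 :< Empty)) :> 3) :> 4
```

With :< binding tighter (level 5, right-associative) and :> looser (level 4, left-associative), the unparenthesized chain groups the conses on the left and the snocs on the right, which is the behavior the example asks for.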
I also have cons and snoc functions. Should I give their backticked spellings fixity declarations? If so, with what precedences? -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikolaj at well-typed.com Thu Aug 13 17:29:49 2020 From: mikolaj at well-typed.com (Mikolaj Konarski) Date: Thu, 13 Aug 2020 19:29:49 +0200 Subject: Deprecating fromIntegral In-Reply-To: References: <7B2D4736-7443-4557-BE37-CB32A7F092B0@gmail.com> <87d03uczt3.fsf@gnu.org> Message-ID: Herbert, now I'm a fan of your package. However, I'm getting this when trying to intCast Int64 to Rational: engine-src/Game/LambdaHack/Common/Time.hs:265:19: error: … • Couldn't match type ‘int-cast-0.2.0.0:Data.IntCast.IsIntBaseSubType ('int-cast-0.2.0.0:Data.IntCast.FixedIntTag 64) (int-cast-0.2.0.0:Data.IntCast.IntBaseType Rational)’ with ‘'True’ arising from a use of ‘intCast’ • In the expression: intCast :: Int64 -> Rational In the first argument of ‘(*)’, namely ‘(intCast :: Int64 -> Rational) v’ In the second argument of ‘($)’, namely ‘(intCast :: Int64 -> Rational) v * s’ On Thu, Aug 13, 2020 at 6:54 PM Ben Franksen wrote: > > Am 13.08.20 um 17:48 schrieb Herbert Valerio Riedel: > > It might also be worth pointing out the function added to `base` > > originated from my package `int-cast` > > > > - https://hackage.haskell.org/package/int-cast-0.2.0.0/docs/Data-IntCast.html > > > > which provides the means to have compile-time verified "safe" (and also > > a slightly weaker lossless "iso"morphic) integer conversions without > > requiring O(n^2) instances. > > This is pretty cool. Thanks for sharing. 
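[Editorial note: the rejection above is int-cast working as designed; it only relates integral types, so a non-Integral target such as Rational needs a different route. A small sketch of both cases using only what `base` already exports; the helper names are made up for illustration:]

```haskell
import Data.Bits (toIntegralSized)
import Data.Int (Int8, Int64)

-- Integral to integral: toIntegralSized performs the bounds check,
-- e.g. narrow 5 == Just 5 but narrow 1000 == Nothing.
narrow :: Int64 -> Maybe Int8
narrow = toIntegralSized

-- Integral to Rational: no check is needed, because Rational is
-- unbounded and the conversion is always exact.
toRat :: Int64 -> Rational
toRat = fromIntegral
```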
> > Cheers > Ben > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries From andreas.abel at ifi.lmu.de Thu Aug 13 18:49:45 2020 From: andreas.abel at ifi.lmu.de (Andreas Abel) Date: Thu, 13 Aug 2020 20:49:45 +0200 Subject: Operator precedence help In-Reply-To: References: Message-ID: My hunch would be to look at what the others do to form an opinion. On 2020-08-13 19:26, David Feuer wrote: > I'm trying to work out appropriate precedences for operators and pattern > synonyms in my brand-new compact-sequences package. I currently have > stacks and queues, but I will soon have deques, so let's pretend. For > consistency, operators will match pattern synonyms. > > (<|), pattern (:<) :: a -> Deque a -> Deque a > (|>), pattern (:>) :: Deque a -> a -> Deque a > > :< and :> need to have different precedence to allow things like > > a :< b :< xs :> c :> d > > to work nicely, but what numbers should I pick? > > I also have cons and snoc functions. Should I give their backticked > spellings fixity declarations? If so, with what precedences? > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > From howard.b.golden at gmail.com Thu Aug 13 19:18:08 2020 From: howard.b.golden at gmail.com (Howard B. Golden) Date: Thu, 13 Aug 2020 12:18:08 -0700 Subject: Please update the Library_submissions page on the Haskell Wiki Message-ID: Hi, The Library_submissions page is out-of-date due to dead links to old servers (e.g. GHC Trac). Please update the links to help users contact the right maintainer or issue tracker. Thanks. Howard -------------- next part -------------- An HTML attachment was scrubbed... URL: From howard.b.golden at gmail.com Thu Aug 13 21:56:12 2020 From: howard.b.golden at gmail.com (Howard B.
Golden) Date: Thu, 13 Aug 2020 14:56:12 -0700 Subject: "Core Libraries Committee" vs. "Library_submissions" page Message-ID: It appears that the Haskell Wiki's "Core Libraries Committee " page is up-to-date. There is a lot of (out-of-date) redundancy on the "Library_submissions " page. Perhaps these pages can be rearranged. I suggest that the "Core Libraries Committee" page identify the administration performed by the committee. The specific links to Libraries might be better in the "Library_submissions" (perhaps just renamed "Libraries") with a link from the CLC page. (Or maybe transcluded?) Howard -------------- next part -------------- An HTML attachment was scrubbed... URL: From chessai1996 at gmail.com Thu Aug 13 22:14:35 2020 From: chessai1996 at gmail.com (chessai) Date: Thu, 13 Aug 2020 15:14:35 -0700 Subject: "Core Libraries Committee" vs. "Library_submissions" page In-Reply-To: References: Message-ID: Howard, This is being worked on. I updated the (very long out-of-date) CLC page about two weeks ago. The remaining details in the libraries page need to be merged in and the libraries page removed. I am low-bandwidth right now due to moving cross-country in the U.S., but this is a priority. Thank you. On Thu, Aug 13, 2020, 2:56 PM Howard B. Golden wrote: > It appears that the Haskell Wiki's "Core Libraries Committee > " page is up-to-date. > There is a lot of (out-of-date) redundancy on the "Library_submissions > " page. > > Perhaps these pages can be rearranged. I suggest that the "Core Libraries > Committee" page identify the administration performed by the committee. The > specific links to Libraries might be better in the "Library_submissions" > (perhaps just renamed "Libraries") with a link from the CLC page. (Or maybe > transcluded?) 
> > Howard > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From sandy at sandymaguire.me Fri Aug 14 21:38:27 2020 From: sandy at sandymaguire.me (Sandy Maguire) Date: Fri, 14 Aug 2020 14:38:27 -0700 Subject: clamp function in base Message-ID:

Hi all,

It seems to me that base is missing the very standard function `clamp :: Ord a => a -> a -> a -> a`:

```haskell
clamp :: Ord a => a -> a -> a -> a
clamp low high = min high . max low
```

I propose it be added to Data.Ord. It's useful, generic, and non-trivial to get right (the "big" number goes with "min" -- causes me cognitive dissonance every time.)

Thanks,
Sandy
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From carter.schonwald at gmail.com Sat Aug 15 00:45:27 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Fri, 14 Aug 2020 20:45:27 -0400 Subject: clamp function in base In-Reply-To: References: Message-ID:

hey sandy! i absolutely support this,

there's one gotcha to this definition: handling NaNs! I also think that this version of the definition you propose may benefit from being written less point-free (e.g. = \val -> min high $ max low val) for clarity and for how ghc optimizes.

there are several ways we could make it play nice with NaNs, but maybe this should go in as is, to force me to get irate about Ord for floats and finish some long overdue patches to Ord on Float and Double :)

either way, please throw a PR onto gitlab and @ myself and other folks for review

On Fri, Aug 14, 2020 at 5:38 PM Sandy Maguire wrote: > Hi all, > > It seems to me that base is missing the very standard function `clamp :: > Ord a => a -> a -> a -> a`: > > ```haskell > clamp :: Ord a => a -> a -> a -> a > clamp low high = min high .max low > ``` > > I propose it be added to Data.Ord.
It's useful, generic, and non-trivial > to get right (the "big" number goes with "min" -- causes me cognitive > dissonance every time.) > > Thanks, > Sandy > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandy at sandymaguire.me Sat Aug 15 03:30:24 2020 From: sandy at sandymaguire.me (Sandy Maguire) Date: Fri, 14 Aug 2020 20:30:24 -0700 Subject: clamp function in base In-Reply-To: References: Message-ID: Yay! I've opened !3876 at https://gitlab.haskell.org/ghc/ghc/-/merge_requests/3876 On Fri, Aug 14, 2020 at 5:45 PM Carter Schonwald wrote: > > > hey sandy! > i absolutely support this, > > theres one gotcha to this definition, handling nans! I also think that > this is version of the definition you propose may benefit from being > written less point free (eg = \ val -> min high $ max low a) for clarity > and for how ghc optimizes > > theres several ways we could make it play nice with nans, but maybe this > should go in as is, to force me to get irate about ord for floats and > finish some long overdue patches to Ord on Float and double :) > > either way, please throw a PR onto gitlab and @ myself and other folks for > review > > > > On Fri, Aug 14, 2020 at 5:38 PM Sandy Maguire > wrote: > >> Hi all, >> >> It seems to me that base is missing the very standard function `clamp :: >> Ord a => a -> a -> a -> a`: >> >> ```haskell >> clamp :: Ord a => a -> a -> a -> a >> clamp low high = min high .max low >> ``` >> >> I propose it be added to Data.Ord. It's useful, generic, and non-trivial >> to get right (the "big" number goes with "min" -- causes me cognitive >> dissonance every time.) 
>> >> Thanks, >> Sandy >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Sat Aug 15 03:56:19 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Fri, 14 Aug 2020 23:56:19 -0400 Subject: clamp function in base In-Reply-To: References: Message-ID: Wonderful! Your aside about inverted Lo and hi clamping arguments does raise a fun question about strictness for the value argument! When low and hi are equal, the result is constant. Should the value arg be strict or lazy when it’s essentially the constant function? On Fri, Aug 14, 2020 at 11:30 PM Sandy Maguire wrote: > Yay! I've opened !3876 at > https://gitlab.haskell.org/ghc/ghc/-/merge_requests/3876 > > On Fri, Aug 14, 2020 at 5:45 PM Carter Schonwald < > carter.schonwald at gmail.com> wrote: > >> >> >> hey sandy! >> i absolutely support this, >> >> theres one gotcha to this definition, handling nans! I also think that >> this is version of the definition you propose may benefit from being >> written less point free (eg = \ val -> min high $ max low a) for clarity >> and for how ghc optimizes >> >> theres several ways we could make it play nice with nans, but maybe this >> should go in as is, to force me to get irate about ord for floats and >> finish some long overdue patches to Ord on Float and double :) >> >> either way, please throw a PR onto gitlab and @ myself and other folks >> for review >> >> >> >> On Fri, Aug 14, 2020 at 5:38 PM Sandy Maguire >> wrote: >> >>> Hi all, >>> >>> It seems to me that base is missing the very standard function `clamp :: >>> Ord a => a -> a -> a -> a`: >>> >>> ```haskell >>> clamp :: Ord a => a -> a -> a -> a >>> clamp low high = min high .max low >>> ``` >>> >>> I propose it be added to Data.Ord. 
It's useful, generic, and non-trivial >>> to get right (the "big" number goes with "min" -- causes me cognitive >>> dissonance every time.) >>> >>> Thanks, >>> Sandy >>> _______________________________________________ >>> Libraries mailing list >>> Libraries at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Sat Aug 15 05:39:41 2020 From: david.feuer at gmail.com (David Feuer) Date: Sat, 15 Aug 2020 01:39:41 -0400 Subject: clamp function in base In-Reply-To: References: Message-ID: Strict. Making the function lazy to be extra efficient when given useless arguments means demand analysis will be worse for no good reason. On Fri, Aug 14, 2020 at 11:56 PM Carter Schonwald wrote: > > Wonderful! Your aside about inverted Lo and hi clamping arguments does raise a fun question about strictness for the value argument! When low and hi are equal, the result is constant. Should the value arg be strict or lazy when it’s essentially the constant function? > > On Fri, Aug 14, 2020 at 11:30 PM Sandy Maguire wrote: >> >> Yay! I've opened !3876 at https://gitlab.haskell.org/ghc/ghc/-/merge_requests/3876 >> >> On Fri, Aug 14, 2020 at 5:45 PM Carter Schonwald wrote: >>> >>> >>> >>> hey sandy! >>> i absolutely support this, >>> >>> theres one gotcha to this definition, handling nans! 
I also think that this is version of the definition you propose may benefit from being written less point free (eg = \ val -> min high $ max low a) for clarity and for how ghc optimizes >>> >>> theres several ways we could make it play nice with nans, but maybe this should go in as is, to force me to get irate about ord for floats and finish some long overdue patches to Ord on Float and double :) >>> >>> either way, please throw a PR onto gitlab and @ myself and other folks for review >>> >>> >>> >>> On Fri, Aug 14, 2020 at 5:38 PM Sandy Maguire wrote: >>>> >>>> Hi all, >>>> >>>> It seems to me that base is missing the very standard function `clamp :: Ord a => a -> a -> a -> a`: >>>> >>>> ```haskell >>>> clamp :: Ord a => a -> a -> a -> a >>>> clamp low high = min high .max low >>>> ``` >>>> >>>> I propose it be added to Data.Ord. It's useful, generic, and non-trivial to get right (the "big" number goes with "min" -- causes me cognitive dissonance every time.) >>>> >>>> Thanks, >>>> Sandy >>>> _______________________________________________ >>>> Libraries mailing list >>>> Libraries at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries From ivan.miljenovic at gmail.com Sat Aug 15 09:23:38 2020 From: ivan.miljenovic at gmail.com (Ivan Lazar Miljenovic) Date: Sat, 15 Aug 2020 17:23:38 +0800 Subject: clamp function in base In-Reply-To: References: Message-ID: At the risk of bikeshedding, I don't think "clamp" is a very descriptive/findable name for this. What about "bounded"? Otherwise, +1 On Sat, 15 Aug 2020 at 13:40, David Feuer wrote: > Strict. Making the function lazy to be extra efficient when given > useless arguments means demand analysis will be worse for no good > reason. > > On Fri, Aug 14, 2020 at 11:56 PM Carter Schonwald > wrote: > > > > Wonderful! 
Your aside about inverted Lo and hi clamping arguments does > raise a fun question about strictness for the value argument! When low > and hi are equal, the result is constant. Should the value arg be strict or > lazy when it’s essentially the constant function? > > > > On Fri, Aug 14, 2020 at 11:30 PM Sandy Maguire > wrote: > >> > >> Yay! I've opened !3876 at > https://gitlab.haskell.org/ghc/ghc/-/merge_requests/3876 > >> > >> On Fri, Aug 14, 2020 at 5:45 PM Carter Schonwald < > carter.schonwald at gmail.com> wrote: > >>> > >>> > >>> > >>> hey sandy! > >>> i absolutely support this, > >>> > >>> theres one gotcha to this definition, handling nans! I also think > that this is version of the definition you propose may benefit from being > written less point free (eg = \ val -> min high $ max low a) for clarity > and for how ghc optimizes > >>> > >>> theres several ways we could make it play nice with nans, but maybe > this should go in as is, to force me to get irate about ord for floats and > finish some long overdue patches to Ord on Float and double :) > >>> > >>> either way, please throw a PR onto gitlab and @ myself and other folks > for review > >>> > >>> > >>> > >>> On Fri, Aug 14, 2020 at 5:38 PM Sandy Maguire > wrote: > >>>> > >>>> Hi all, > >>>> > >>>> It seems to me that base is missing the very standard function `clamp > :: Ord a => a -> a -> a -> a`: > >>>> > >>>> ```haskell > >>>> clamp :: Ord a => a -> a -> a -> a > >>>> clamp low high = min high .max low > >>>> ``` > >>>> > >>>> I propose it be added to Data.Ord. It's useful, generic, and > non-trivial to get right (the "big" number goes with "min" -- causes me > cognitive dissonance every time.) 
> >>>> > >>>> Thanks, > >>>> Sandy > >>>> _______________________________________________ > >>>> Libraries mailing list > >>>> Libraries at haskell.org > >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > > > _______________________________________________ > > Libraries mailing list > > Libraries at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -- Ivan Lazar Miljenovic Ivan.Miljenovic at gmail.com http://IvanMiljenovic.wordpress.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From lennart at augustsson.net Sat Aug 15 09:27:14 2020 From: lennart at augustsson.net (Lennart Augustsson) Date: Sat, 15 Aug 2020 02:27:14 -0700 Subject: clamp function in base In-Reply-To: References: Message-ID: I would pick the name clamp, it is quite common. And I have often wished for this to be in base. On Sat, Aug 15, 2020, 02:24 Ivan Lazar Miljenovic wrote: > At the risk of bikeshedding, I don't think "clamp" is a very > descriptive/findable name for this. What about "bounded"? > > Otherwise, +1 > > On Sat, 15 Aug 2020 at 13:40, David Feuer wrote: > >> Strict. Making the function lazy to be extra efficient when given >> useless arguments means demand analysis will be worse for no good >> reason. >> >> On Fri, Aug 14, 2020 at 11:56 PM Carter Schonwald >> wrote: >> > >> > Wonderful! Your aside about inverted Lo and hi clamping arguments does >> raise a fun question about strictness for the value argument! When low >> and hi are equal, the result is constant. Should the value arg be strict or >> lazy when it’s essentially the constant function? >> > >> > On Fri, Aug 14, 2020 at 11:30 PM Sandy Maguire >> wrote: >> >> >> >> Yay! 
I've opened !3876 at >> https://gitlab.haskell.org/ghc/ghc/-/merge_requests/3876 >> >> >> >> On Fri, Aug 14, 2020 at 5:45 PM Carter Schonwald < >> carter.schonwald at gmail.com> wrote: >> >>> >> >>> >> >>> >> >>> hey sandy! >> >>> i absolutely support this, >> >>> >> >>> theres one gotcha to this definition, handling nans! I also think >> that this is version of the definition you propose may benefit from being >> written less point free (eg = \ val -> min high $ max low a) for clarity >> and for how ghc optimizes >> >>> >> >>> theres several ways we could make it play nice with nans, but maybe >> this should go in as is, to force me to get irate about ord for floats and >> finish some long overdue patches to Ord on Float and double :) >> >>> >> >>> either way, please throw a PR onto gitlab and @ myself and other >> folks for review >> >>> >> >>> >> >>> >> >>> On Fri, Aug 14, 2020 at 5:38 PM Sandy Maguire >> wrote: >> >>>> >> >>>> Hi all, >> >>>> >> >>>> It seems to me that base is missing the very standard function >> `clamp :: Ord a => a -> a -> a -> a`: >> >>>> >> >>>> ```haskell >> >>>> clamp :: Ord a => a -> a -> a -> a >> >>>> clamp low high = min high .max low >> >>>> ``` >> >>>> >> >>>> I propose it be added to Data.Ord. It's useful, generic, and >> non-trivial to get right (the "big" number goes with "min" -- causes me >> cognitive dissonance every time.) 
>> >>>> >> >>>> Thanks, >> >>>> Sandy >> >>>> _______________________________________________ >> >>>> Libraries mailing list >> >>>> Libraries at haskell.org >> >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > >> > _______________________________________________ >> > Libraries mailing list >> > Libraries at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > > > -- > Ivan Lazar Miljenovic > Ivan.Miljenovic at gmail.com > http://IvanMiljenovic.wordpress.com > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From svenpanne at gmail.com Sat Aug 15 09:49:36 2020 From: svenpanne at gmail.com (Sven Panne) Date: Sat, 15 Aug 2020 11:49:36 +0200 Subject: clamp function in base In-Reply-To: References: Message-ID: Am Sa., 15. Aug. 2020 um 11:28 Uhr schrieb Lennart Augustsson < lennart at augustsson.net>: > I would pick the name clamp, it is quite common. > I would go even further, it is *the* canonical name I would google first, see e.g. https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/clamp.xhtml https://docs.microsoft.com/en-us/windows/win32/direct3dhlsl/dx-graphics-hlsl-clamp https://developer.download.nvidia.com/cg/clamp.html No need for creative bikeshedding here when there is already a standard name. And I have often wished for this to be in base. > +1 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hvr at gnu.org Sat Aug 15 09:57:43 2020 From: hvr at gnu.org (Herbert Valerio Riedel) Date: Sat, 15 Aug 2020 11:57:43 +0200 Subject: clamp function in base In-Reply-To: References: Message-ID: <87364os02w.fsf@hvr.gnu.org>

> It seems to me that base is missing the very standard function `clamp ::
> Ord a => a -> a -> a -> a`:
>
> ```haskell
> clamp :: Ord a => a -> a -> a -> a
> clamp low high = min high .max low
> ```
>
> I propose it be added to Data.Ord. It's useful, generic, and non-trivial to
> get right (the "big" number goes with "min" -- causes me cognitive
> dissonance every time.)

I'm -1 on the proposed type-signature.

For one, while the motivation mentions cognitive overhead, it's ironic that the proposed

    :: Ord a => a -> a -> a -> a

with three `a`-typed parameters, whose grouping/semantics is anything but obvious if you merely see the type-signature without knowing its implementation, is itself a source of cognitive overhead IMO.

On the other hand, there are already related functions taking lower/upper bounds defined by the Haskell Report, see

https://www.haskell.org/onlinereport/haskell2010/haskellch19.html#x27-22500019

and so it'd help to kill two birds with one stone by aligning the proposed `clamp` with the precedent established by the pre-existing functions in Data.Ix, such as

,----
| class Ord a => Ix a where
|
|   range :: (a, a) -> [a]
|     The list of values in the subrange defined by a bounding pair.
|
|   index :: (a, a) -> a -> Int
|     The position of a subscript in the subrange.
|
|   inRange :: (a, a) -> a -> Bool
|     Returns True if the given subscript lies in the range defined by the bounding pair.
|
|   rangeSize :: (a, a) -> Int
|     The size of the subrange defined by a bounding pair.
`----

So by grouping the type-signature like

    clamp :: Ord a => (a,a) -> a -> a

or even

    clamp :: Ord a => a -> (a,a) -> a

it becomes a lot more obvious which parameters are the bounds and which is the subject it's operating upon, and it's IMO less error-prone as there's now less risk of accidentally swapping parameters around. Moreover, this turns `clamp` into a function taking two parameters, thereby allowing `clamp` to be used as an infix operator

    (lower,upper) `clamp` x

Having lower/upper bounds represented as a tuple also makes it easier to define constants denoting the bounds, cf.

| ... = f (clamp int7Bounds x) ...
|   where
|     int7Bounds = (0,127)

Long story short, I'm -1 on the proposed type-signature; I'm not against using a type-signature which groups the bounds-parameter into a tuple in alignment with the "Data.Ix" API.

-- hvr
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 227 bytes
Desc: not available
URL:

From carter.schonwald at gmail.com Sat Aug 15 13:13:38 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 15 Aug 2020 09:13:38 -0400 Subject: clamp function in base In-Reply-To: <87364os02w.fsf@hvr.gnu.org> References: <87364os02w.fsf@hvr.gnu.org> Message-ID:

I like this tuple for the range syntax. Definitely nicer.

I'd favor (a,a) -> a -> a now that hvr has pointed it out. (I think that order lends itself to slightly better specialization on partial application internally.)

After sleeping on it, I absolutely agree with David that uniform strictness is the best option. I'll help Sandy reflect both of these things into the final code.

And as Lennart mentioned, clamp is bog standard.

And amusingly enough, the stranger bits of how min and max in the C standards interact with NaNs and infinities (avoiding rather than poisoning) are due to wanting clamping for rendering plots to be simple to write in C!
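[Editorial note: pulling the thread together, here is a minimal sketch of the tuple-bounds variant being converged on, strict in the value argument per the strictness discussion. This is an illustration of the proposal, not necessarily the definition that finally landed in Data.Ord:]

```haskell
{-# LANGUAGE BangPatterns #-}

-- Sketch only: the (a, a) bounds grouping from this thread, strict in
-- the clamped value so the function stays strict even when low == high.
clamp :: Ord a => (a, a) -> a -> a
clamp (low, high) !x = min high (max low x)

main :: IO ()
main = print (clamp (0, 127) (200 :: Int))  -- prints 127
```

With the bounds first, `clamp (0, 127)` partially applies to a saturating function, and the infix form `(0, 127) \`clamp\` x` reads naturally, as hvr notes.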
On Sat, Aug 15, 2020 at 5:59 AM Herbert Valerio Riedel wrote: > > > It seems to me that base is missing the very standard function `clamp :: > > Ord a => a -> a -> a -> a`: > > > > ```haskell > > clamp :: Ord a => a -> a -> a -> a > > clamp low high = min high .max low > > ``` > > > > I propose it be added to Data.Ord. It's useful, generic, and non-trivial > to > > get right (the "big" number goes with "min" -- causes me cognitive > > dissonance every time.) > > I'm -1 on the proposed type-signature. > > For one, while the motivation mentions cognitive overhead, it's ironic > that the proposed > > :: Ord a => a -> a -> a -> a > > with three `a`-typed parameters whose grouping/semantics is all but > obvious if you merely see the type-signature without knowing its > implementation is itself a source of cognitive overhead IMO. > > On the other hand, there are already related functions taking > lower/upper bounds defined by the Haskell Report, see > > > https://www.haskell.org/onlinereport/haskell2010/haskellch19.html#x27-22500019 > > and so it'd help to kill two birds with one stone by align the proposed > `clamp` with the precedent established by the pre-existing functions in > Data.Ix, such as > > ,---- > | class Ord a => Ix a where > | > | range :: (a, a) -> [a] > | The list of values in the subrange defined by a bounding pair. > | > | index :: (a, a) -> a -> Int > | The position of a subscript in the subrange. > | > | inRange :: (a, a) -> a -> Bool > | Returns True the given subscript lies in the range defined the > bounding pair. > | > | rangeSize :: (a, a) -> Int > | The size of the subrange defined by a bounding pair. > `---- > > So by grouping the type-signature like > > clamp :: Ord a => (a,a) -> a -> a > > or even > > clamp :: Ord a => a -> (a,a) -> a > > it becomes a lot more obvious which parameters are the bounds and which > is the subject it's operating upon and it's IMO less error-prone as > there's now less risk to accidentally swap parameters around. 
> > Moreover, this turns `clamp` into a function taking two parameters, > thereby allowing `clamp` to be used as infix operator > > (lower,upper) `clamp` x > > Having lower/upper bounds represented as tuple also makes it easier to > define constants denoting the bounds, c.f. > > | ... = f (clamp int7Bounds x) ... > | where > | int7Bounds = (0,127) > > Long story short, I'm -1 on the proposed type-signature; I'm > not against using a type-signature which groups the bounds-parameter > into a tuple in alignment with the "Data.Ix" API. > > -- hvr > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From byorgey at gmail.com Sat Aug 15 14:28:44 2020 From: byorgey at gmail.com (Brent Yorgey) Date: Sat, 15 Aug 2020 09:28:44 -0500 Subject: clamp function in base In-Reply-To: References: <87364os02w.fsf@hvr.gnu.org> Message-ID: +1 for adding clamp :: Ord a => (a,a) -> a -> a. On Sat, Aug 15, 2020, 8:14 AM Carter Schonwald wrote: > I like this tuple for the range syntax. Definitely nicer. > > I’d favor (a,a)->a->a now that hvr has pointed it out. (I think that > order lends itself to slightly better specialization on partial > application internally) > > After sleeping on it, I absolutely agree with david that uniform > strictness is the best option. I’ll help sandy reflect both of these > things into the final code. > > And as lennart mentioned, Clamp is bog standard. > > And amusingly enough, the stranger bits of how min and max in the c > standards interact with nans and infinities (avoiding rather than > poisoning) are due to wanting clamping for rendering plots to be simple to > write in c!
> > On Sat, Aug 15, 2020 at 5:59 AM Herbert Valerio Riedel > wrote: > >> >> > It seems to me that base is missing the very standard function `clamp :: >> > Ord a => a -> a -> a -> a`: >> > >> > ```haskell >> > clamp :: Ord a => a -> a -> a -> a >> > clamp low high = min high .max low >> > ``` >> > >> > I propose it be added to Data.Ord. It's useful, generic, and >> non-trivial to >> > get right (the "big" number goes with "min" -- causes me cognitive >> > dissonance every time.) >> >> I'm -1 on the proposed type-signature. >> >> For one, while the motivation mentions cognitive overhead, it's ironic >> that the proposed >> >> :: Ord a => a -> a -> a -> a >> >> with three `a`-typed parameters whose grouping/semantics is all but >> obvious if you merely see the type-signature without knowing its >> implementation is itself a source of cognitive overhead IMO. >> >> On the other hand, there are already related functions taking >> lower/upper bounds defined by the Haskell Report, see >> >> >> https://www.haskell.org/onlinereport/haskell2010/haskellch19.html#x27-22500019 >> >> and so it'd help to kill two birds with one stone by align the proposed >> `clamp` with the precedent established by the pre-existing functions in >> Data.Ix, such as >> >> ,---- >> | class Ord a => Ix a where >> | >> | range :: (a, a) -> [a] >> | The list of values in the subrange defined by a bounding pair. >> | >> | index :: (a, a) -> a -> Int >> | The position of a subscript in the subrange. >> | >> | inRange :: (a, a) -> a -> Bool >> | Returns True the given subscript lies in the range defined the >> bounding pair. >> | >> | rangeSize :: (a, a) -> Int >> | The size of the subrange defined by a bounding pair. 
>> `---- >> >> So by grouping the type-signature like >> >> clamp :: Ord a => (a,a) -> a -> a >> >> or even >> >> clamp :: Ord a => a -> (a,a) -> a >> >> it becomes a lot more obvious which parameters are the bounds and which >> is the subject it's operating upon and it's IMO less error-prone as >> there's now less risk to accidentally swap parameters around. >> >> Moreover, this turns `clamp` into a function taking two parameters, >> thereby allowing `clamp` to be used as infix operator >> >> (lower,upper) `clamp` x >> >> Having lower/upper bounds represented as tuple also makes it easier to >> define constants denoting the bounds, c.f. >> >> | ... = f (clamp int7Bounds x) ... >> | where >> | int7Bounds = (0,127) >> >> Long story short, I'm -1 on the proposed type-signature; I'm >> not against using a type-signature which groups the bounds-parameter >> into a tuple in alignment with the "Data.Ix" API. >> >> -- hvr >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilypi at cohomolo.gy Sat Aug 15 15:22:07 2020 From: emilypi at cohomolo.gy (Emily Pillmore) Date: Sat, 15 Aug 2020 15:22:07 +0000 Subject: clamp function in base In-Reply-To: References: Message-ID: +1, though, David has good points about unnecessary laziness. I'm fine with the name and the signature ``` clamp :: Ord a ⇒ (a,a) → a → a ``` (or some variation on the theme). 
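For concreteness, here is a minimal, self-contained sketch of the tuple-shaped signature under discussion (at the time of this thread `clamp` is not yet in base, so the definition below is the proposal's sketch, not an existing API):

```haskell
-- Proposed Data.Ord addition, with the bounds grouped Data.Ix-style.
-- Note the "big" bound goes with 'min': min high . max low.
clamp :: Ord a => (a, a) -> a -> a
clamp (low, high) x = min high (max low x)

main :: IO ()
main = do
  print (clamp (0, 127) 200 :: Int)    -- clamps down to 127
  print (clamp (0, 127) (-5) :: Int)   -- clamps up to 0
  print ((0, 255) `clamp` 300 :: Int)  -- hvr's infix style: 255
```

(If `low > high`, this definition returns `high`; the thread does not pin down that corner case.)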
On Fri, Aug 14, 2020 at 5:38 PM, Sandy Maguire < sandy at sandymaguire.me > wrote: > > Hi all, > > > It seems to me that base is missing the very standard function `clamp :: > Ord a => a -> a -> a -> a`: > > > ```haskell > clamp :: Ord a => a -> a -> a -> a > clamp low high = min high .max low > > ``` > > > I propose it be added to Data.Ord. It's useful, generic, and non-trivial > to get right (the "big" number goes with "min" -- causes me cognitive > dissonance every time.) > > > Thanks, > > Sandy > > > _______________________________________________ > Libraries mailing list > Libraries@ haskell. org ( Libraries at haskell.org ) > http:/ / mail. haskell. org/ cgi-bin/ mailman/ listinfo/ libraries ( > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries ) > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandy at sandymaguire.me Sat Aug 15 18:19:01 2020 From: sandy at sandymaguire.me (Sandy Maguire) Date: Sat, 15 Aug 2020 11:19:01 -0700 Subject: clamp function in base In-Reply-To: References: Message-ID: Sounds good. For whatever reason the tupled arguments make me feel better about the `high wrote: > +1, though, David has good points about unnecessary laziness. I'm fine > with the name and the signature > > ``` > clamp :: Ord a ⇒ (a,a) → a → a > ``` > > (or some variation on the theme). > > > On Fri, Aug 14, 2020 at 5:38 PM, Sandy Maguire > wrote: > >> Hi all, >> >> It seems to me that base is missing the very standard function `clamp :: >> Ord a => a -> a -> a -> a`: >> >> ```haskell >> clamp :: Ord a => a -> a -> a -> a >> clamp low high = min high .max low >> ``` >> >> I propose it be added to Data.Ord. It's useful, generic, and non-trivial >> to get right (the "big" number goes with "min" -- causes me cognitive >> dissonance every time.) 
>> >> Thanks, >> Sandy >> >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Sat Aug 15 21:52:32 2020 From: david.feuer at gmail.com (David Feuer) Date: Sat, 15 Aug 2020 17:52:32 -0400 Subject: Operator precedence help In-Reply-To: References: Message-ID: Data.Sequence uses the same precedence for both, which strikes me as a bit sad. Surprisingly, I am not seeing other packages on Hackage that define similar operators. On Thu, Aug 13, 2020 at 2:50 PM Andreas Abel wrote: > > My hunch would be too look at what the others do to form an opinion. > > On 2020-08-13 19:26, David Feuer wrote: > > I'm trying to work out appropriate precedences for operators and pattern > > synonyms in my brand-new compact-sequences package. I currently have > > stacks and queues, but I will soon have deques, so let's pretend. For > > consistency, operators will match pattern synonyms. > > > > (<|), pattern (:<) :: a -> Deque a -> Deque a > > (|>), pattern (:>) :: Deque a -> a -> Deque a > > > > :< and :> need to have different precedence to allow things like > > > > a :< b :< xs :> c :> d > > > > to work nicely, but what numbers should I pick? > > > > I also have cons and snoc functions. Should I give their backticked > > spellings fixity declarations? If so, with what precedences? 
> > > > _______________________________________________ > > Libraries mailing list > > Libraries at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries From lemming at henning-thielemann.de Sun Aug 16 04:51:56 2020 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Sun, 16 Aug 2020 06:51:56 +0200 (CEST) Subject: Cabal: N executables without building all the modules N times? In-Reply-To: References: Message-ID: On Mon, 10 Aug 2020, James Cook wrote: > Hi libraries@, > > (Let me know if there's a better place for Cabal questions. This list is > linked from https://www.haskell.org/cabal/ as a place to ask questions.) > > > I have a project with lots of modules, and lots of executables that use > those modules. How should I structure my .cabal file if I want cabal to > only build the common modules once? > > > I have found I can do it as follows, but I am wondering if there's a way > to do it without step 1: > > 1. Create a "library_src" directory with the source for all the shared > modules. Put the source files for the executables somewhere else. > > 2. Create a .cabal file with one "library" stanza with "hs-source-dirs: > library_source", and several executable stanzas. I usually set up a 'src' directory for the library, 'test' for test code and, say, 'example' for executables. > If I leave out step 1 and put all the source files in the same place, > then the compiler builds them separately for each executable, and I get > a "missing-home-modules" warning. > > Is it just a fact of life that I need to separate my source into > separate directories if I want this to work properly? Yes. The problem is that Cabal calls GHC's '--make' mode, which searches for appropriate modules in its search path. Btw.
If you give a Library section a name, you get a private sub-library whose modules can be shared between the public library and e.g. the test suite. From lemming at henning-thielemann.de Sun Aug 16 05:03:59 2020 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Sun, 16 Aug 2020 07:03:59 +0200 (CEST) Subject: Operator precedence help In-Reply-To: References: Message-ID: On Sat, 15 Aug 2020, David Feuer wrote: > Data.Sequence uses the same precedence for both, which strikes me as a bit sad. > Surprisingly, I am not seeing other packages on Hackage that define similar > operators. I have the same problem in the 'lapack' bindings. I like to allow people to write (row) vector -*# matrix #*# matrix #*| (column) vector I had no good idea, though, and also chose equal precedence for all operators. > On Thu, Aug 13, 2020 at 2:50 PM Andreas Abel wrote: >> >> My hunch would be too look at what the others do to form an opinion. >> >> On 2020-08-13 19:26, David Feuer wrote: >> > I'm trying to work out appropriate precedences for operators and pattern >> > synonyms in my brand-new compact-sequences package. I currently have >> > stacks and queues, but I will soon have deques, so let's pretend. For >> > consistency, operators will match pattern synonyms. >> > >> > (<|), pattern (:<) :: a -> Deque a -> Deque a >> > (|>), pattern (:>) :: Deque a -> a -> Deque a >> > >> > :< and :> need to have different precedence to allow things like >> > >> > a :< b :< xs :> c :> d >> > >> > to work nicely, but what numbers should I pick? >> > >> > I also have cons and snoc functions. Should I give their backticked >> > spellings fixity declarations? If so, with what precedences? I would give them the precedence of their infix counterparts.
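As a concrete illustration of the precedence question — using a toy list-backed deque, not David's actual compact-sequences implementation, and the 4/5 split as just one candidate — giving the cons-side operator `infixr 5` and the snoc-side operator `infixl 4` lets a mixed chain parse without parentheses:

```haskell
infixr 5 <|
infixl 4 |>

-- Toy stand-in for a deque, just to exercise the fixities.
newtype Deque a = Deque [a] deriving (Eq, Show)

(<|) :: a -> Deque a -> Deque a
x <| Deque xs = Deque (x : xs)

(|>) :: Deque a -> a -> Deque a
Deque xs |> x = Deque (xs ++ [x])

main :: IO ()
main =
  -- Parses as ((1 <| (2 <| Deque [3])) |> 4) |> 5, because <| (at 5)
  -- binds tighter than |> (at 4).
  print (1 <| 2 <| Deque [3] |> 4 |> 5)
```

Swapping the two fixity numbers would make the same expression a parse error, which is why the two sides need distinct precedences at all.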
From david.feuer at gmail.com Sun Aug 16 05:26:27 2020 From: david.feuer at gmail.com (David Feuer) Date: Sun, 16 Aug 2020 01:26:27 -0400 Subject: Operator precedence help In-Reply-To: References: Message-ID: It sure does seem crowded around there. I'd love to have 4.5 or 5.5. Going up to 6 runs into arithmetic. Going down to 4 hits up against Functor and Applicative stuff, which is a tad unfortunate but I think probably not as bad in practice. So I think I'll go with 4 and 5. Thanks, y'all! On Sun, Aug 16, 2020, 1:04 AM Henning Thielemann < lemming at henning-thielemann.de> wrote: > > On Sat, 15 Aug 2020, David Feuer wrote: > > > Data.Sequence uses the same precedence for both, which strikes me as a > bit sad. > > Surprisingly, I am not seeing other packages on Hackage that define > similar > > operators. > > I have the same problem in the 'lapack' bindings. > > I like to allow people to write > > (row) vector -*# matrix #*# matrix #*| (column) vector > > > I had no good idea, though, and also chose equal precedence for all > operators. > > > > On Thu, Aug 13, 2020 at 2:50 PM Andreas Abel > wrote: > >> > >> My hunch would be too look at what the others do to form an opinion. > >> > >> On 2020-08-13 19:26, David Feuer wrote: > >> > I'm trying to work out appropriate precedences for operators and > pattern > >> > synonyms in my brand-new compact-sequences package. I currently have > >> > stacks and queues, but I will soon have deques, so let's pretend. For > >> > consistency, operators will match pattern synonyms. > >> > > >> > (<|), pattern (:<) :: a -> Deque a -> Deque a > >> > (|>), pattern (:>) :: Deque a -> a -> Deque a > >> > > >> > :< and :> need to have different precedence to allow things like > >> > > >> > a :< b :< xs :> c :> d > >> > > >> > to work nicely, but what numbers should I pick? > >> > > >> > I also have cons and snoc functions. Should I give their backticked > >> > spellings fixity declarations? If so, with what precedences? 
> > I would give them the precedence of their infix counterparts. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lemming at henning-thielemann.de Sun Aug 16 05:30:49 2020 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Sun, 16 Aug 2020 07:30:49 +0200 (CEST) Subject: Operator precedence help In-Reply-To: References: Message-ID: On Sun, 16 Aug 2020, David Feuer wrote: > It sure does seem crowded around there. I'd love to have 4.5 or 5.5. Going up to 6 runs into arithmetic. Going > down to 4 hits up against Functor and Applicative stuff, which is a tad unfortunate but I think probably not as > bad in practice. So I think I'll go with 4 and 5. Thanks, y'all! I would use 5 for cons, like : and ++ From david.feuer at gmail.com Sun Aug 16 05:42:21 2020 From: david.feuer at gmail.com (David Feuer) Date: Sun, 16 Aug 2020 01:42:21 -0400 Subject: Operator precedence help In-Reply-To: References: Message-ID: On Sun, Aug 16, 2020, 1:30 AM Henning Thielemann < lemming at henning-thielemann.de> wrote: > > On Sun, 16 Aug 2020, David Feuer wrote: > > > It sure does seem crowded around there. I'd love to have 4.5 or 5.5. > Going up to 6 runs into arithmetic. Going > > down to 4 hits up against Functor and Applicative stuff, which is a tad > unfortunate but I think probably not as > > bad in practice. So I think I'll go with 4 and 5. Thanks, y'all! > > I would use 5 for cons, like : and ++ > My queues use snoc and uncons. Using 5 for :< means using 4 for |>, which is the thing that can show up in an expression context and therefore clash with Applicative stuff. So I'd be tempted to go the other way. Of course, that doesn't help deques, but I want consistency. Where might that go wrong? > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lemming at henning-thielemann.de Sun Aug 16 05:46:14 2020 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Sun, 16 Aug 2020 07:46:14 +0200 (CEST) Subject: Operator precedence help In-Reply-To: References: Message-ID: On Sun, 16 Aug 2020, David Feuer wrote: > On Sun, Aug 16, 2020, 1:30 AM Henning Thielemann wrote: > > I would use 5 for cons, like : and ++ > > My queues use snoc and uncons. Using 5 for :< means using 4 for |>, which is the thing that can show up in an > expression context and therefore clash with Applicative stuff. So I'd be tempted to go the other way. Of course, > that doesn't help deques, but I want consistency. Where might that go wrong? I have no idea. Sometimes I think it would be better to omit infix operators and let people define custom infix operators as appropriate for their application. They still can do it by not importing the operators from the library. From lemming at henning-thielemann.de Sun Aug 16 09:43:20 2020 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Sun, 16 Aug 2020 11:43:20 +0200 (CEST) Subject: clamp function in base In-Reply-To: References: Message-ID: On Fri, 14 Aug 2020, Sandy Maguire wrote: > It seems to me that base is missing the very standard function `clamp :: Ord a => a -> a -> a -> a`: > > ```haskell > clamp :: Ord a => a -> a -> a -> a > clamp low high = min high .max low > ``` https://hackage.haskell.org/package/utility-ht-0.0.15/docs/Data-Ord-HT.html#v:limit From carter.schonwald at gmail.com Mon Aug 17 01:49:53 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sun, 16 Aug 2020 21:49:53 -0400 Subject: Fractional precedences? Re: Operator precedence help In-Reply-To: References: Message-ID: I do think that the work needed to actually support fractional precedence in ghc is pretty minimal. 
Or at least I remember having a conversation about it a few years ago, and the conclusion was that adding precedence would be super easy to do, but just lacked any good motivating example from real libraries. David, maybe you could help with that from the examples side of things? On Sun, Aug 16, 2020 at 1:27 AM David Feuer wrote: > It sure does seem crowded around there. I'd love to have 4.5 or 5.5. Going > up to 6 runs into arithmetic. Going down to 4 hits up against Functor and > Applicative stuff, which is a tad unfortunate but I think probably not as > bad in practice. So I think I'll go with 4 and 5. Thanks, y'all! > > On Sun, Aug 16, 2020, 1:04 AM Henning Thielemann < > lemming at henning-thielemann.de> wrote: > >> >> On Sat, 15 Aug 2020, David Feuer wrote: >> >> > Data.Sequence uses the same precedence for both, which strikes me as a >> bit sad. >> > Surprisingly, I am not seeing other packages on Hackage that define >> similar >> > operators. >> >> I have the same problem in the 'lapack' bindings. >> >> I like to allow people to write >> >> (row) vector -*# matrix #*# matrix #*| (column) vector >> >> >> I had no good idea, though, and also chose equal precedence for all >> operators. >> >> >> > On Thu, Aug 13, 2020 at 2:50 PM Andreas Abel >> wrote: >> >> >> >> My hunch would be too look at what the others do to form an opinion. >> >> >> >> On 2020-08-13 19:26, David Feuer wrote: >> >> > I'm trying to work out appropriate precedences for operators and >> pattern >> >> > synonyms in my brand-new compact-sequences package. I currently have >> >> > stacks and queues, but I will soon have deques, so let's pretend. For >> >> > consistency, operators will match pattern synonyms. 
>> >> > >> >> > (<|), pattern (:<) :: a -> Deque a -> Deque a >> >> > (|>), pattern (:>) :: Deque a -> a -> Deque a >> >> > >> >> > :< and :> need to have different precedence to allow things like >> >> > >> >> > a :< b :< xs :> c :> d >> >> > >> >> > to work nicely, but what numbers should I pick? >> >> > >> >> > I also have cons and snoc functions. Should I give their backticked >> >> > spellings fixity declarations? If so, with what precedences? >> >> I would give them the precedence of their infix counterparts. >> > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lemming at henning-thielemann.de Mon Aug 17 13:40:25 2020 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Mon, 17 Aug 2020 15:40:25 +0200 (CEST) Subject: Fractional precedences? Re: Operator precedence help In-Reply-To: References: Message-ID: On Sun, 16 Aug 2020, Carter Schonwald wrote: > I do think that the work needed to actually support fractional > precedence in ghc is pretty minimal.  Or at least I remember having a > conversation about it a few years ago, and the conclusion was that >  adding precedence would be super easy to do, but just lacked any good > motivating example from real libraries.  I remember this discussion, too, and I guess that it was started by Simon Marlow and it ended with recalling that decades ago something more advanced was discussed: Groups of equal precedence and relations between the groups. But that one was too complicated to be implemented. From andrew.thaddeus at gmail.com Mon Aug 17 14:43:52 2020 From: andrew.thaddeus at gmail.com (Andrew Martin) Date: Mon, 17 Aug 2020 10:43:52 -0400 Subject: Null Pointer Pattern Message-ID: Foreign.Ptr provides nullPtr. It would make some of my code more terse if this was additionally provided as a pattern synonym. 
The pattern synonym can be defined as: {-# language ViewPatterns #-} {-# language PatternSynonyms #-} module NullPointerPattern ( pattern Null ) where import Foreign.Ptr (Ptr,nullPtr) pattern Null :: Ptr a pattern Null <- ((\x -> x == nullPtr) -> True) where Null = nullPtr Here is an example of code that becomes more terse once this is available: foo :: IO (Either Error (Ptr Foo)) foo = do p <- initialize mySettings if p == nullPtr then pure (Left InitializeFailure) else pure (Right p) With the pattern synonym, we are able to take advantage of LambdaCase: foo :: IO (Either Error (Ptr Foo)) foo = initialize mySettings >>= \case Null -> pure (Left InitializeFailure) p -> pure (Right p) I'm curious what others think. -- -Andrew Thaddeus Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Mon Aug 17 14:52:03 2020 From: david.feuer at gmail.com (David Feuer) Date: Mon, 17 Aug 2020 10:52:03 -0400 Subject: Null Pointer Pattern In-Reply-To: References: Message-ID: In the context of GHC Haskell, that's definitely the Right Thing. I think what might concern some people is that pattern synonyms as we know them are a very GHC thing, while the Ptr business is pretty much Report Haskell. On Mon, Aug 17, 2020, 10:44 AM Andrew Martin wrote: > Foreign.Ptr provides nullPtr. It would make some of my code more terse if > this was additionally provided as a pattern synonym.
The pattern synonym > can be defined as: > > {-# language ViewPatterns #-} > {-# language PatternSynonyms #-} > module NullPointerPattern > ( pattern Null > ) where > import Foreign.Ptr (Ptr,nullPtr) > pattern Null :: Ptr a > pattern Null <- ((\x -> x == nullPtr) -> True) > where Null = nullPtr > > Any here is example of code that becomes more terse once this is available: > > foo :: IO (Either Error (Ptr Foo)) > foo = do > p <- initialize mySettings > if p == nullPtr > then pure (Left InitializeFailure) > else pure (Right p) > > With the pattern synonym, we are able to take advantage of LambdaCase: > > foo :: IO (Either Error (Ptr Foo)) > foo = initialize mySettings >>= \case > Null -> pure (Left InitializeFailure) > p -> pure (Right p) > > I'm curious what others think. > > -- > -Andrew Thaddeus Martin > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew.thaddeus at gmail.com Mon Aug 17 15:07:37 2020 From: andrew.thaddeus at gmail.com (Andrew Martin) Date: Mon, 17 Aug 2020 11:07:37 -0400 Subject: Null Pointer Pattern In-Reply-To: References: Message-ID: There is always the possibility of using GHC.Ptr as a home for it instead of Foreign.Ptr. The main reason I would want this in base rather than just some library is that, as a library author, I would never pick up an extra dependency for something that trivial. On Mon, Aug 17, 2020 at 10:52 AM David Feuer wrote: > In the context of GHC Haskell, that's definitely the Right Thing. I think > what might concern some people is that pattern synonyms as we know them are > a very GHC thing, while the Ptr business is pretty much Report Haskell. > > On Mon, Aug 17, 2020, 10:44 AM Andrew Martin > wrote: > >> Foreign.Ptr provides nullPtr. 
It would make some of my code more terse if >> this was additionally provided as a pattern synonym. The pattern synonym >> can be defined as: >> >> {-# language ViewPatterns #-} >> {-# language PatternSynonyms #-} >> module NullPointerPattern >> ( pattern Null >> ) where >> import Foreign.Ptr (Ptr,nullPtr) >> pattern Null :: Ptr a >> pattern Null <- ((\x -> x == nullPtr) -> True) >> where Null = nullPtr >> >> Any here is example of code that becomes more terse once this is >> available: >> >> foo :: IO (Either Error (Ptr Foo)) >> foo = do >> p <- initialize mySettings >> if p == nullPtr >> then pure (Left InitializeFailure) >> else pure (Right p) >> >> With the pattern synonym, we are able to take advantage of LambdaCase: >> >> foo :: IO (Either Error (Ptr Foo)) >> foo = initialize mySettings >>= \case >> Null -> pure (Left InitializeFailure) >> p -> pure (Right p) >> >> I'm curious what others think. >> >> -- >> -Andrew Thaddeus Martin >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > -- -Andrew Thaddeus Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Mon Aug 17 15:12:35 2020 From: david.feuer at gmail.com (David Feuer) Date: Mon, 17 Aug 2020 11:12:35 -0400 Subject: Null Pointer Pattern In-Reply-To: References: Message-ID: GHC.Ptr feels like too low-level a zone for it. If the objections I predicted don't appear, Foreign.Ptr sure seems more logical. On Mon, Aug 17, 2020, 11:07 AM Andrew Martin wrote: > There is always the possibility of using GHC.Ptr as a home for it instead > of Foreign.Ptr. The main reason I would want this in base rather than just > some library is that, as a library author, I would never pick up an extra > dependency for something that trivial. 
> > On Mon, Aug 17, 2020 at 10:52 AM David Feuer > wrote: > >> In the context of GHC Haskell, that's definitely the Right Thing. I think >> what might concern some people is that pattern synonyms as we know them are >> a very GHC thing, while the Ptr business is pretty much Report Haskell. >> >> On Mon, Aug 17, 2020, 10:44 AM Andrew Martin >> wrote: >> >>> Foreign.Ptr provides nullPtr. It would make some of my code more terse >>> if this was additionally provided as a pattern synonym. The pattern synonym >>> can be defined as: >>> >>> {-# language ViewPatterns #-} >>> {-# language PatternSynonyms #-} >>> module NullPointerPattern >>> ( pattern Null >>> ) where >>> import Foreign.Ptr (Ptr,nullPtr) >>> pattern Null :: Ptr a >>> pattern Null <- ((\x -> x == nullPtr) -> True) >>> where Null = nullPtr >>> >>> Any here is example of code that becomes more terse once this is >>> available: >>> >>> foo :: IO (Either Error (Ptr Foo)) >>> foo = do >>> p <- initialize mySettings >>> if p == nullPtr >>> then pure (Left InitializeFailure) >>> else pure (Right p) >>> >>> With the pattern synonym, we are able to take advantage of LambdaCase: >>> >>> foo :: IO (Either Error (Ptr Foo)) >>> foo = initialize mySettings >>= \case >>> Null -> pure (Left InitializeFailure) >>> p -> pure (Right p) >>> >>> I'm curious what others think. >>> >>> -- >>> -Andrew Thaddeus Martin >>> _______________________________________________ >>> Libraries mailing list >>> Libraries at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>> >> > > -- > -Andrew Thaddeus Martin > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Mon Aug 17 16:12:34 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 17 Aug 2020 12:12:34 -0400 Subject: Fractional precedences? Re: Operator precedence help In-Reply-To: References: Message-ID: Oh yeah! I feel like everyone’s wondered about that approach. 
But it definitely would need some experiments to validate. But in some ways it’d be super fascinating. On Mon, Aug 17, 2020 at 9:40 AM Henning Thielemann < lemming at henning-thielemann.de> wrote: > > On Sun, 16 Aug 2020, Carter Schonwald wrote: > > > I do think that the work needed to actually support fractional > > precedence in ghc is pretty minimal. Or at least I remember having a > > conversation about it a few years ago, and the conclusion was that > > adding precedence would be super easy to do, but just lacked any good > > motivating example from real libraries. > > I remember this discussion, too, and I guess that it was started by Simon > Marlow and it ended with recalling that decades ago something more > advanced was discussed: Groups of equal precedence and relations between > the groups. But that one was too complicated to be implemented. -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Mon Aug 17 16:14:04 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 17 Aug 2020 12:14:04 -0400 Subject: Null Pointer Pattern In-Reply-To: References: Message-ID: There’s an interesting idea or two in here. Like, should we support having non-nullable pointers? On Mon, Aug 17, 2020 at 11:13 AM David Feuer wrote: > GHC.Ptr feels like too low-level a zone for it. If the objections I > predicted don't appear, Foreign.Ptr sure seems more logical. > > On Mon, Aug 17, 2020, 11:07 AM Andrew Martin > wrote: > >> There is always the possibility of using GHC.Ptr as a home for it instead >> of Foreign.Ptr. The main reason I would want this in base rather than just >> some library is that, as a library author, I would never pick up an extra >> dependency for something that trivial. >> >> On Mon, Aug 17, 2020 at 10:52 AM David Feuer >> wrote: >> >>> In the context of GHC Haskell, that's definitely the Right Thing. 
I >>> think what might concern some people is that pattern synonyms as we know >>> them are a very GHC thing, while the Ptr business is pretty much Report >>> Haskell. >>> >>> On Mon, Aug 17, 2020, 10:44 AM Andrew Martin >>> wrote: >>> >>>> Foreign.Ptr provides nullPtr. It would make some of my code more terse >>>> if this was additionally provided as a pattern synonym. The pattern synonym >>>> can be defined as: >>>> >>>> {-# language ViewPatterns #-} >>>> {-# language PatternSynonyms #-} >>>> module NullPointerPattern >>>> ( pattern Null >>>> ) where >>>> import Foreign.Ptr (Ptr,nullPtr) >>>> pattern Null :: Ptr a >>>> pattern Null <- ((\x -> x == nullPtr) -> True) >>>> where Null = nullPtr >>>> >>>> Any here is example of code that becomes more terse once this is >>>> available: >>>> >>>> foo :: IO (Either Error (Ptr Foo)) >>>> foo = do >>>> p <- initialize mySettings >>>> if p == nullPtr >>>> then pure (Left InitializeFailure) >>>> else pure (Right p) >>>> >>>> With the pattern synonym, we are able to take advantage of LambdaCase: >>>> >>>> foo :: IO (Either Error (Ptr Foo)) >>>> foo = initialize mySettings >>= \case >>>> Null -> pure (Left InitializeFailure) >>>> p -> pure (Right p) >>>> >>>> I'm curious what others think. >>>> >>>> -- >>>> -Andrew Thaddeus Martin >>>> _______________________________________________ >>>> Libraries mailing list >>>> Libraries at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>>> >>> >> >> -- >> -Andrew Thaddeus Martin >> > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From david.feuer at gmail.com Mon Aug 17 17:54:06 2020 From: david.feuer at gmail.com (David Feuer) Date: Mon, 17 Aug 2020 13:54:06 -0400 Subject: Null Pointer Pattern In-Reply-To: References: Message-ID: Maybe, but not as part of this low-level API. We don't need *null* pointers to have *invalid* pointers, so that wouldn't give any safety benefit. Feel free to come up with a safe pointer API though; that could be interesting. You could even try using linear types to prevent deallocated memory from being used, though exceptions will probably be able to break that. On Mon, Aug 17, 2020, 12:14 PM Carter Schonwald wrote: > There’s an interesting idea or two in here. Like, should we support > having non-nullable pointers? > > On Mon, Aug 17, 2020 at 11:13 AM David Feuer > wrote: > >> GHC.Ptr feels like too low-level a zone for it. If the objections I >> predicted don't appear, Foreign.Ptr sure seems more logical. >> >> On Mon, Aug 17, 2020, 11:07 AM Andrew Martin >> wrote: >> >>> There is always the possibility of using GHC.Ptr as a home for it >>> instead of Foreign.Ptr. The main reason I would want this in base rather >>> than just some library is that, as a library author, I would never pick up >>> an extra dependency for something that trivial. >>> >>> On Mon, Aug 17, 2020 at 10:52 AM David Feuer >>> wrote: >>> >>>> In the context of GHC Haskell, that's definitely the Right Thing. I >>>> think what might concern some people is that pattern synonyms as we know >>>> them are a very GHC thing, while the Ptr business is pretty much Report >>>> Haskell. >>>> >>>> On Mon, Aug 17, 2020, 10:44 AM Andrew Martin >>>> wrote: >>>> >>>>> Foreign.Ptr provides nullPtr. It would make some of my code more terse >>>>> if this was additionally provided as a pattern synonym. 
The pattern synonym >>>>> can be defined as: >>>>> >>>>> {-# language ViewPatterns #-} >>>>> {-# language PatternSynonyms #-} >>>>> module NullPointerPattern >>>>> ( pattern Null >>>>> ) where >>>>> import Foreign.Ptr (Ptr,nullPtr) >>>>> pattern Null :: Ptr a >>>>> pattern Null <- ((\x -> x == nullPtr) -> True) >>>>> where Null = nullPtr >>>>> >>>>> Here is an example of code that becomes more terse once this is >>>>> available: >>>>> >>>>> foo :: IO (Either Error (Ptr Foo)) >>>>> foo = do >>>>> p <- initialize mySettings >>>>> if p == nullPtr >>>>> then pure (Left InitializeFailure) >>>>> else pure (Right p) >>>>> >>>>> With the pattern synonym, we are able to take advantage of LambdaCase: >>>>> >>>>> foo :: IO (Either Error (Ptr Foo)) >>>>> foo = initialize mySettings >>= \case >>>>> Null -> pure (Left InitializeFailure) >>>>> p -> pure (Right p) >>>>> >>>>> I'm curious what others think. >>>>> >>>>> -- >>>>> -Andrew Thaddeus Martin >>>>> _______________________________________________ >>>>> Libraries mailing list >>>>> Libraries at haskell.org >>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>>>> >>>> >>> >>> -- >>> -Andrew Thaddeus Martin >>> >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lennart at augustsson.net Tue Aug 18 00:51:24 2020 From: lennart at augustsson.net (Lennart Augustsson) Date: Mon, 17 Aug 2020 17:51:24 -0700 Subject: Fractional precedences? Re: Operator precedence help In-Reply-To: References: Message-ID: The Pretty class (from Text.PrettyPrint.HughesPJClass) uses Rational. On Mon, Aug 17, 2020 at 9:12 AM Carter Schonwald wrote: > Oh yeah! > I feel like everyone’s wondered about that approach. But it definitely > would need some experiments to validate. But in some ways it’d be super > fascinating.
> > On Mon, Aug 17, 2020 at 9:40 AM Henning Thielemann < > lemming at henning-thielemann.de> wrote: > >> >> On Sun, 16 Aug 2020, Carter Schonwald wrote: >> >> > I do think that the work needed to actually support fractional >> > precedence in ghc is pretty minimal. Or at least I remember having a >> > conversation about it a few years ago, and the conclusion was that >> > adding precedence would be super easy to do, but just lacked any good >> > motivating example from real libraries. >> >> I remember this discussion, too, and I guess that it was started by Simon >> Marlow and it ended with recalling that decades ago something more >> advanced was discussed: Groups of equal precedence and relations between >> the groups. But that one was too complicated to be implemented. > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lemming at henning-thielemann.de Tue Aug 18 06:54:52 2020 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Tue, 18 Aug 2020 08:54:52 +0200 (CEST) Subject: Null Pointer Pattern In-Reply-To: References: Message-ID: On Mon, 17 Aug 2020, Carter Schonwald wrote: > There’s an interesting idea or two in here. Like, should we support > having non-nullable pointers? That was my first idea, too. newtype NonNullPtr a = NonNullPtr (Ptr a) maybeNonNull :: Ptr a -> Maybe (NonNullPtr a) Using NonNullPtr in FFI declarations would make the interfaces safer. E.g. dst <- mallocArray n for (maybeNonNull dst) $ \dstNonNull -> copyArray dstNonNull srcNonNull n That seems better than testing for nullPtr and then forgetting this information in the type, again.
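Henning's fragment above can be fleshed out into a small, self-contained program. The newtype and maybeNonNull are taken from his message; the withNonNull helper, the use of for_, and the concrete array types are illustrative additions, not part of the proposal:

```haskell
module Main where

import Data.Foldable (for_)
import Foreign.Marshal.Alloc (free)
import Foreign.Marshal.Array (copyArray, mallocArray, peekArray, pokeArray)
import Foreign.Ptr (Ptr, nullPtr)

-- In a real library the constructor would be hidden, so the only way to
-- obtain a NonNullPtr is to pass the null test in maybeNonNull.
newtype NonNullPtr a = NonNullPtr (Ptr a)

maybeNonNull :: Ptr a -> Maybe (NonNullPtr a)
maybeNonNull p
  | p == nullPtr = Nothing
  | otherwise    = Just (NonNullPtr p)

-- Hypothetical convenience wrapper: run an action only on a non-null pointer.
withNonNull :: Ptr a -> (Ptr a -> IO ()) -> IO ()
withNonNull p act = for_ (maybeNonNull p) (\(NonNullPtr q) -> act q)

main :: IO ()
main = do
  let n = 3
  src <- mallocArray n :: IO (Ptr Int)
  pokeArray src [1, 2, 3]
  dst <- mallocArray n
  -- copyArray runs only because dst passed the null test
  withNonNull dst $ \dstNonNull ->
    copyArray dstNonNull src n
  xs <- peekArray n dst
  print xs
  free src
  free dst
```

Running this prints [1,2,3]; the point is that the null check happens exactly once and the type records that it happened.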
From carter.schonwald at gmail.com Tue Aug 18 17:18:46 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 18 Aug 2020 13:18:46 -0400 Subject: Null Pointer Pattern In-Reply-To: References: Message-ID: That’s a really nice idea ! On Tue, Aug 18, 2020 at 2:54 AM Henning Thielemann < lemming at henning-thielemann.de> wrote: > > > On Mon, 17 Aug 2020, Carter Schonwald wrote: > > > > > There’s an interesting idea or two in here. Like, should we support > > > having non-nullable pointers? > > > > That was my first idea, too. > > > > newtype NonNullPtr a = NonNullPtr (Ptr a) > > > > maybeNonNull :: Ptr a -> Maybe (NonNullPtr a) > > > > Using NonNullPtr in FFI declarations would make the interfaces safer. > > > > E.g. > > dst <- mallocArray n > > for (maybeNonNull dst) $ \dstNonNull -> > > copyArray dstNonNull srcNonNull n > > > > That seems better than testing for nullPtr and then forgeting this > > information in the type, again. -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Thu Aug 20 00:01:12 2020 From: david.feuer at gmail.com (David Feuer) Date: Wed, 19 Aug 2020 20:01:12 -0400 Subject: Module naming help Message-ID: I'm really close to making a second release of the compact-sequences package (now with deques!), but I'd like to get a bit of help with module naming/renaming now to try to avoid more painful name changes later, when my package will hopefully have actual users. Currently, I have three "base" implementations, Data.CompactSequence.Stack.Internal Data.CompactSequence.Queue.Internal Data.CompactSequence.Deque.Internal Each of these implements an abstract datatype representing a stack/queue/deque of fixed-size arrays. Additionally, I have "simple" implementations Data.CompactSequence.Stack.Simple Data.CompactSequence.Queue.Simple Data.CompactSequence.Deque.Simple that just use the base implementations with arrays of size 1. 
These are great as a proof of concept and for testing, but I expect their performance to be quite bad [1]. To solve the problem, I need to write separate datatypes to handle the top of the structure (the first k elements of a stack, or a prefix and suffix of a queue or deque). There are various different approaches to this, with various trade-offs and tuning factors. The first I'd like to implement, for stacks and queues, would look exactly like the base implementations but with elements instead of arrays in the first node. What might I call these? Another option would be to just use linked lists with various length bounds. Yet another would use stacks that look like data Stack6 a = One a | Two a a | ... | Six a a a a a a for various size bounds. But I have no idea how to name any of these things. A second issue: I may eventually want to add alternative base implementations to support O(log n) `length` and significantly more efficient (but still O(n)) splitting, appending, and indexing operations. How might *those* work into the name situation? Finally, I'll eventually want to offer support for unpacked data, probably via backpack. That will involve base implementations that work with ByteArray rather than SmallArray. Names, names, I hate names! 1. In the first node of the data structure, each sequence element is represented as a pointer to a SmallArray of one element. Yuck. The situation improves considerably further in, with the array size doubling in successive nodes. 2. For deques, where code size is already a matter of some concern, this approach seems likely to exacerbate the issue, but a simpler variant should work well enough. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kindaro at gmail.com Thu Aug 20 16:36:31 2020 From: kindaro at gmail.com (Ignat Insarov) Date: Thu, 20 Aug 2020 21:36:31 +0500 Subject: Consider adding `converge` and friends. 
Message-ID: Hello… This function first appeared _(to my knowledge)_ in [a Stack Overflow answer][1]. I found it useful several times, and eventually [I extended it to a family of 4 derived functions][2]: `converge`, `convergeBy`, `fixp` and `fixpBy`. * `convergeBy` is like `takeWhile` but with a binary predicate. * `converge` cuts a list at a point where it starts to repeat itself. * `fixp` takes the last element. These operations are useful in many practical cases. For example, the Newton-Raphson approximation method: λ r a = \x -> (x + a/x) / 2 λ fixp (r 2) 1 1.414213562373095 Or let us compute the alternating group A₄: λ import qualified Data.List as List λ xs */ ys = fmap (xs !!) ys λ generate1 gens elems = List.union elems [ elem */ gen | elem <- elems, gen <- gens ] λ ε = [0.. 3] λ rota3 = [1, 2, 0, 3] λ rota3' = [0, 2, 3, 1] λ length $ fixp (generate1 [rota3, rota3']) [ε] 12 I propose adding these functions to `Data.List`. [1]: https://stackoverflow.com/a/7443379 [2]: https://stackoverflow.com/q/48353457 From lemming at henning-thielemann.de Thu Aug 20 16:41:49 2020 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Thu, 20 Aug 2020 18:41:49 +0200 (CEST) Subject: Consider adding `converge` and friends. In-Reply-To: References: Message-ID: On Thu, 20 Aug 2020, Ignat Insarov wrote: > This function first appeared _(to my knowledge)_ in [a Stack Overflow > answer][1]. I found it useful several times, and eventually [I extended it to a > family of 4 derived functions][2]: `converge`, `convergeBy`, `fixp` and > `fixpBy`. > > * `convergeBy` is like `takeWhile` but with a binary predicate. > * `converge` cuts a list at a point where it starts to repeat itself. > * `fixp` takes the last element. 
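The bullet descriptions above suggest the following sketch. The wiring here follows the descriptions literally (convergeBy keeps elements while the binary predicate holds between neighbours, so converge is convergeBy (/=)); the Stack Overflow originals may differ in detail:

```haskell
module Main where

-- Like takeWhile, but the predicate inspects adjacent pairs; the first
-- element of the failing pair is still included.
convergeBy :: (a -> a -> Bool) -> [a] -> [a]
convergeBy p = go
  where
    go (x : xs@(y : _))
      | p x y     = x : go xs
      | otherwise = [x]
    go xs = xs

-- Cut the list at the point where it starts to repeat itself.
converge :: Eq a => [a] -> [a]
converge = convergeBy (/=)

-- Iterate a function and take the last element before the values stop
-- changing, under the given notion of equality.
fixpBy :: (a -> a -> Bool) -> (a -> a) -> a -> a
fixpBy eq f = last . convergeBy (\x y -> not (x `eq` y)) . iterate f

fixp :: Eq a => (a -> a) -> a -> a
fixp f = last . converge . iterate f

main :: IO ()
main = do
  print (converge [1, 2, 3, 3, 3 :: Int])         -- cut at the repeat
  print (fixp (\n -> min 10 (n + 1)) (0 :: Int))  -- saturates at 10
```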
This one may help: https://hackage.haskell.org/package/utility-ht-0.0.15/docs/Data-List-HT.html#v:mapAdjacent From kindaro at gmail.com Thu Aug 20 17:59:40 2020 From: kindaro at gmail.com (Ignat Insarov) Date: Thu, 20 Aug 2020 22:59:40 +0500 Subject: Consider adding `classify`. Message-ID: Hello. There has been [a question on Stack Overflow][1] asking for a way to group a list by an equivalence relation. Several answers were proposed over time, and I too [have offered a variant][2]. [I also wrote a benchmark.][3] I propose that the function be added to the standard libraries. [1]: https://stackoverflow.com/q/8262179 [2]: https://stackoverflow.com/a/57761458 [3]: https://github.com/kindaro/classify-benchmark.git From godzbanebane at gmail.com Thu Aug 20 19:25:49 2020 From: godzbanebane at gmail.com (Georgi Lyubenov) Date: Thu, 20 Aug 2020 22:25:49 +0300 Subject: Consider adding `classify`. In-Reply-To: References: Message-ID: Hi! I strongly support adding this to base; I personally have had to use something like this quite often. One suggestion I have though is to use [NonEmpty a] as the return type, as NonEmpty is already in base and it is a more accurate type. ====== Georgi -------------- next part -------------- An HTML attachment was scrubbed... URL: From lemming at henning-thielemann.de Thu Aug 20 21:16:38 2020 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Thu, 20 Aug 2020 23:16:38 +0200 (CEST) Subject: exception for invalid byte sequences Message-ID: Documentation for hGetChar et al. [1] does not mention the exception for invalid byte sequences in the current encoding.
It seems it throws an exception called "invalid argument": IO> h <- IO.openFile "/tmp/invalid" IO.ReadMode IO> IO.hGetChar h *** Exception: /tmp/invalid: hGetChar: invalid argument (invalid byte sequence) https://hackage.haskell.org/package/base-4.14.0.0/docs/System-IO.html#v:hGetChar From spam at scientician.net Fri Aug 21 07:29:35 2020 From: spam at scientician.net (Bardur Arantsson) Date: Fri, 21 Aug 2020 09:29:35 +0200 Subject: Consider adding `converge` and friends. In-Reply-To: References: Message-ID: On 20/08/2020 18.36, Ignat Insarov wrote: > Hello… > > This function first appeared _(to my knowledge)_ in [a Stack Overflow > answer][1]. I found it useful several times, and eventually [I extended it to a > family of 4 derived functions][2]: `converge`, `convergeBy`, `fixp` and > `fixpBy`. > > * `convergeBy` is like `takeWhile` but with a binary predicate. > * `converge` cuts a list at a point where it starts to repeat itself. > * `fixp` takes the last element. > FWIW, I've had occasion to have to implement both converge and convergeBy (albeit in Scala), albeit very rarely. My case was one of post-processing a linear list of 'instructions' to remove useless ones -- e.g. adjacent push-pop, etc. Easier to just do it by repeated application than trying to figure out how to do it in a single pass. +½ from me, I guess :) From andreas.abel at ifi.lmu.de Sun Aug 23 08:50:58 2020 From: andreas.abel at ifi.lmu.de (Andreas Abel) Date: Sun, 23 Aug 2020 10:50:58 +0200 Subject: Fractional precedences? Re: Operator precedence help In-Reply-To: References: Message-ID: <9d247c07-e343-63f2-df60-15ea0a8009f4@ifi.lmu.de> For Agda, it was very little work to implement fractional precedences: https://github.com/agda/agda/pull/3992 On 2020-08-17 18:12, Carter Schonwald wrote: > Oh yeah! > I feel like everyone’s wondered about that approach. But it definitely > would need some experiments to validate. But in some ways it’d be super > fascinating.
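Lennart's remark that the Pretty class uses Rational precedences hints at what fractional fixities buy: between any two distinct Rational levels there is always room for a third. A small term-level illustration (the helper names below are hypothetical, not from any of the cited libraries):

```haskell
module Main where

import Data.Ratio ((%))

-- Parenthesize a rendered subexpression when its precedence is below
-- what the enclosing context demands (compare showsPrec, but with
-- Rational instead of Int precedences).
maybeParens :: Rational -> Rational -> String -> String
maybeParens context prec s
  | prec < context = "(" ++ s ++ ")"
  | otherwise      = s

-- With Rational, a level strictly between two distinct levels always
-- exists, so a library can slot a new operator between existing ones
-- without renumbering anything.
between :: Rational -> Rational -> Rational
between lo hi = (lo + hi) / 2

main :: IO ()
main = do
  putStrLn (maybeParens (7 % 1) (6 % 1) "a + b")
  print (between (6 % 1) (7 % 1))
```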
> > On Mon, Aug 17, 2020 at 9:40 AM Henning Thielemann > > > wrote: > > > On Sun, 16 Aug 2020, Carter Schonwald wrote: > > > I do think that the work needed to actually support fractional > > precedence in ghc is pretty minimal.  Or at least I remember > having a > > conversation about it a few years ago, and the conclusion was that > >  adding precedence would be super easy to do, but just lacked any > good > > motivating example from real libraries. > > I remember this discussion, too, and I guess that it was started by > Simon > Marlow and it ended with recalling that decades ago something more > advanced was discussed: Groups of equal precedence and relations > between > the groups. But that one was too complicated to be implemented. > > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > From david.feuer at gmail.com Mon Aug 24 01:35:09 2020 From: david.feuer at gmail.com (David Feuer) Date: Sun, 23 Aug 2020 21:35:09 -0400 Subject: Proposal: plug the space leak in transpose Message-ID: Data.List.transpose, unfortunately, can potentially leak space. The problem is that it walks a list of lists twice: once to get the heads and once to get the tails. Depending on the way the result is consumed, it's possible that heads or tails that are never used will be retained by the garbage collector. I have a fix[*] that probably makes the function slower in typical cases, but that plugs the leak. What do y'all say? * https://gitlab.haskell.org/ghc/ghc/-/merge_requests/3882 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lemming at henning-thielemann.de Mon Aug 24 05:25:09 2020 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Mon, 24 Aug 2020 07:25:09 +0200 (CEST) Subject: Proposal: plug the space leak in transpose In-Reply-To: References: Message-ID: On Sun, 23 Aug 2020, David Feuer wrote: > Data.List.transpose, unfortunately, can potentially leak space. The > problem is that it walks a list of lists twice: once to get the heads > and once to get the tails. Depending on the way the result is consumed, > it's possible that heads or tails that are never used will be retained > by the garbage collector. I have a fix[*] that probably makes the > function slower in typical cases, but that plugs the leak. What do y'all > say? Your way sounds more correct than using 'head' and 'tail'. I have no numbers, though. From david.feuer at gmail.com Mon Aug 24 05:36:08 2020 From: david.feuer at gmail.com (David Feuer) Date: Mon, 24 Aug 2020 01:36:08 -0400 Subject: Proposal: plug the space leak in transpose In-Reply-To: References: Message-ID: There's no use of head or tail functions. Two list comprehensions equivalent to two applications of mapMaybe, rather than one of mapMaybe and a second of unzip. The whole situation is rather sad; a generalization of the selector thunk trick could in principle plug the leak without hurting performance, but that will require various GHC changes. On Mon, Aug 24, 2020, 1:25 AM Henning Thielemann < lemming at henning-thielemann.de> wrote: > > On Sun, 23 Aug 2020, David Feuer wrote: > > > Data.List.transpose, unfortunately, can potentially leak space. The > > problem is that it walks a list of lists twice: once to get the heads > > and once to get the tails. Depending on the way the result is consumed, > > it's possible that heads or tails that are never used will be retained > > by the garbage collector. I have a fix[*] that probably makes the > > function slower in typical cases, but that plugs the leak. What do y'all > > say? 
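The shape being discussed in this thread, two independent list comprehensions, one selecting heads and one selecting tails, can be sketched as follows. This is an illustration of the idea only, not the code in the merge request:

```haskell
module Main where

-- A transpose written with two comprehensions over the remaining rows,
-- so the heads and the tails are produced by separate traversals and
-- neither result retains thunks belonging to the other.
transposeSketch :: [[a]] -> [[a]]
transposeSketch [] = []
transposeSketch ([] : xss) = transposeSketch xss
transposeSketch ((x : xs) : xss) =
  (x : [h | (h : _) <- xss]) : transposeSketch (xs : [t | (_ : t) <- xss])

main :: IO ()
main = do
  print (transposeSketch [[1, 2, 3], [4, 5, 6 :: Int]])
  -- Ragged input is handled the same way Data.List.transpose handles it:
  print (transposeSketch [[10, 11], [20], [], [30, 31, 32 :: Int]])
```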
> > Your way sounds more correct than using 'head' and 'tail'. I have no > numbers, though. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreas.abel at ifi.lmu.de Mon Aug 24 07:39:13 2020 From: andreas.abel at ifi.lmu.de (Andreas Abel) Date: Mon, 24 Aug 2020 09:39:13 +0200 Subject: Consider adding `classify`. In-Reply-To: References: Message-ID: <8cee1eb1-9fc6-4000-59f2-d89d123d03a7@ifi.lmu.de> This cannot be done efficiently. I'd extend the equivalence relation to a total order and then use sort and group. -1 to adding to the standard libraries. How about publishing this as a (well discoverable) package on hackage and see how popular it gets? On 2020-08-20 19:59, Ignat Insarov wrote: > Hello. > > There has been [a question on Stack Overflow][1] asking for a way to group a > list by an equivalence relation. Several answers were proposed over time, and I > too [have offered a variant][2]. [I also wrote a benchmark.][3] > > I propose that the function be added to the standard libraries. > > [1]: https://stackoverflow.com/q/8262179 > [2]: https://stackoverflow.com/a/57761458 > [3]: https://github.com/kindaro/classify-benchmark.git > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > From andreas.abel at ifi.lmu.de Mon Aug 24 07:44:33 2020 From: andreas.abel at ifi.lmu.de (Andreas Abel) Date: Mon, 24 Aug 2020 09:44:33 +0200 Subject: Consider adding `converge` and friends. In-Reply-To: References: Message-ID: <6907f281-7648-7afb-5b63-337b5dde9b9e@ifi.lmu.de> Here are some related functions for fixpoint computation: http://hackage.haskell.org/package/Agda-2.6.1/docs/Agda-Utils-Function.html On 2020-08-20 18:36, Ignat Insarov wrote: > Hello… > > This function first appeared _(to my knowledge)_ in [a Stack Overflow > answer][1]. 
I found it useful several times, and eventually [I extended it to a > family of 4 derived functions][2]: `converge`, `convergeBy`, `fixp` and > `fixpBy`. > > * `convergeBy` is like `takeWhile` but with a binary predicate. > * `converge` cuts a list at a point where it starts to repeat itself. > * `fixp` takes the last element. > > These operations are useful in many practical cases. For example, the > Newton-Raphson approximation method: > > λ r a = \x -> (x + a/x) / 2 > λ fixp (r 2) 1 > 1.414213562373095 > > Or let us compute the alternating group A₄: > > λ import qualified Data.List as List > λ xs */ ys = fmap (xs !!) ys > λ generate1 gens elems = List.union elems [ elem */ gen | elem <- > elems, gen <- gens ] > λ ε = [0.. 3] > λ rota3 = [1, 2, 0, 3] > λ rota3' = [0, 2, 3, 1] > λ length $ fixp (generate1 [rota3, rota3']) [ε] > 12 > > I propose adding these functions to `Data.List`. > > [1]: https://stackoverflow.com/a/7443379 > [2]: https://stackoverflow.com/q/48353457 > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > From kindaro at gmail.com Mon Aug 24 12:48:44 2020 From: kindaro at gmail.com (Ignat Insarov) Date: Mon, 24 Aug 2020 17:48:44 +0500 Subject: Consider adding `classify`. In-Reply-To: <8cee1eb1-9fc6-4000-59f2-d89d123d03a7@ifi.lmu.de> References: <8cee1eb1-9fc6-4000-59f2-d89d123d03a7@ifi.lmu.de> Message-ID: Andreas, then how do you feel about adding that one? classifyByProjection ∷ Ord π ⇒ (a → π) → [a] → [[a]] classifyByProjection f = List.groupBy ((==) `on` f) . List.sortBy (compare `on` f) Having thought about it, I have come to agree that it is usually not hard to define a suitable projection function. On Mon, 24 Aug 2020 at 12:39, Andreas Abel wrote: > > This cannot be done efficiently. I'd extend the equivalence relation to > a total order and then use sort and group. > > -1 to adding to the standard libraries. 
How about publishing this as a > (well discoverable) package on hackage and see how popular it gets? > > On 2020-08-20 19:59, Ignat Insarov wrote: > > Hello. > > > > There has been [a question on Stack Overflow][1] asking for a way to group a > > list by an equivalence relation. Several answers were proposed over time, and I > > too [have offered a variant][2]. [I also wrote a benchmark.][3] > > > > I propose that the function be added to the standard libraries. > > > > [1]: https://stackoverflow.com/q/8262179 > > [2]: https://stackoverflow.com/a/57761458 > > [3]: https://github.com/kindaro/classify-benchmark.git > > _______________________________________________ > > Libraries mailing list > > Libraries at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries From vandijk.roel at gmail.com Mon Aug 24 13:20:10 2020 From: vandijk.roel at gmail.com (Roel van Dijk) Date: Mon, 24 Aug 2020 15:20:10 +0200 Subject: Consider adding `classify`. In-Reply-To: References: <8cee1eb1-9fc6-4000-59f2-d89d123d03a7@ifi.lmu.de> Message-ID: I think your function is a possible candidate for the decorate-sort-undecorate paradigm, like sortOn in base [1]. classifyByProjection ∷ Ord π ⇒ (a → π) → [a] → [[a]] classifyByProjection f = (map . map) snd . List.groupBy ((==) `on` fst) . List.sortBy (comparing fst) . map (\x -> let y = f x in y `seq` (y, x)) That way the function f is only evaluated once for each element in the input list. I have not benchmarked this. 1 - https://hackage.haskell.org/package/base-4.14.0.0/docs/Data-List.html#v:sortOn On Mon, 24 Aug 2020 at 14:49, Ignat Insarov wrote: > Andreas, then how do you feel about adding that one? > > classifyByProjection ∷ Ord π ⇒ (a → π) → [a] → [[a]] > classifyByProjection f = List.groupBy ((==) `on` f) .
List.sortBy > (compare `on` f) > > Having thought about it, I have come to agree that it is usually not > hard to define a suitable projection function. > > On Mon, 24 Aug 2020 at 12:39, Andreas Abel > wrote: > > > > This cannot be done efficiently. I'd extend the equivalence relation to > > a total order and then use sort and group. > > > > -1 to adding to the standard libraries. How about publishing this as a > > (well discoverable) package on hackage and see how popular it gets? > > > > On 2020-08-20 19:59, Ignat Insarov wrote: > > > Hello. > > > > > > There has been [a question on Stack Overflow][1] asking for a way to > group a > > > list by an equivalence relation. Several answers were proposed over > time, and I > > > too [have offered a variant][2]. [I also wrote a benchmark.][3] > > > > > > I propose that the function be added to the standard libraries. > > > > > > [1]: https://stackoverflow.com/q/8262179 > > > [2]: https://stackoverflow.com/a/57761458 > > > [3]: https://github.com/kindaro/classify-benchmark.git > > > _______________________________________________ > > > Libraries mailing list > > > Libraries at haskell.org > > > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > > > > _______________________________________________ > > Libraries mailing list > > Libraries at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From evincarofautumn at gmail.com Thu Aug 27 00:37:08 2020 From: evincarofautumn at gmail.com (Jon Purdy) Date: Wed, 26 Aug 2020 17:37:08 -0700 Subject: clamp function in base In-Reply-To: References: Message-ID: I’m also strongly for ‘clamp :: (Ord a) => (a, a) -> a -> a’. 
Even if we don’t resolve it now, I do want to mention that, in discussing this with some acquaintances recently, we agreed that one-sided clamps likely warrant a home in ‘Data.Ord’ as well: atLeast :: (Ord a) => a -> a -> a atLeast = max {-# INLINE atLeast #-} atMost :: (Ord a) => a -> a -> a atMost = min {-# INLINE atMost #-} clamp :: (Ord a) => (a, a) -> a -> a clamp (lower, upper) = atLeast lower . atMost upper While their implementations are identical to ‘max’ and ‘min’, semantically they privilege their arguments differently, serving as documentation of intent in code like ‘nonnegative = fmap (atLeast 0)’. The hope is that this may help reduce bugs caused by the common error of mixing up ‘min’ and ‘max’, owing to the unfortunate false friendship between “at least/most” and “the least/most”. On Sun, Aug 16, 2020 at 2:43 AM Henning Thielemann < lemming at henning-thielemann.de> wrote: > > On Fri, 14 Aug 2020, Sandy Maguire wrote: > > > It seems to me that base is missing the very standard function `clamp :: > Ord a => a -> a -> a -> a`: > > > > ```haskell > > clamp :: Ord a => a -> a -> a -> a > > clamp low high = min high .max low > > ``` > > > > https://hackage.haskell.org/package/utility-ht-0.0.15/docs/Data-Ord-HT.html#v:limit > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Thu Aug 27 16:05:05 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 27 Aug 2020 12:05:05 -0400 Subject: clamp function in base In-Reply-To: References: Message-ID: actually, if you look at the associated ticket, we have a version of clamp that gives the right way to derive the onesided behaviors for even floating point! (and has the correct / desirable behavior in the presence of NANs! 
) On Wed, Aug 26, 2020 at 8:38 PM Jon Purdy wrote: > I’m also strongly for ‘clamp :: (Ord a) => (a, a) -> a -> a’. > > Even if we don’t resolve it now, I do want to mention that, in discussing > this with some acquaintances recently, we agreed that one-sided clamps > likely warrant a home in ‘Data.Ord’ as well: > > atLeast :: (Ord a) => a -> a -> a > atLeast = max > {-# INLINE atLeast #-} > > atMost :: (Ord a) => a -> a -> a > atMost = min > {-# INLINE atMost #-} > > clamp :: (Ord a) => (a, a) -> a -> a > clamp (lower, upper) = atLeast lower . atMost upper > > While their implementations are identical to ‘max’ and ‘min’, semantically > they privilege their arguments differently, serving as documentation of > intent in code like ‘nonnegative = fmap (atLeast 0)’. The hope is that this > may help reduce bugs caused by the common error of mixing up ‘min’ and > ‘max’, owing to the unfortunate false friendship between “at least/most” > and “the least/most”. > > > On Sun, Aug 16, 2020 at 2:43 AM Henning Thielemann < > lemming at henning-thielemann.de> wrote: > >> >> On Fri, 14 Aug 2020, Sandy Maguire wrote: >> >> > It seems to me that base is missing the very standard function `clamp >> :: Ord a => a -> a -> a -> a`: >> > >> > ```haskell >> > clamp :: Ord a => a -> a -> a -> a >> > clamp low high = min high .max low >> > ``` >> >> >> >> https://hackage.haskell.org/package/utility-ht-0.0.15/docs/Data-Ord-HT.html#v:limit >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From evincarofautumn at gmail.com Thu Aug 27 18:34:05 2020 From: evincarofautumn at gmail.com (Jon Purdy) Date: Thu, 27 Aug 2020 11:34:05 -0700 Subject: clamp function in base In-Reply-To: References: Message-ID: I see a discussion of how the one-sided clamps can be implemented only for floating-point using NaN, but my concern is the API for Ord types generally. ‘clamp (lo, hi) x = min hi (max x lo)’ allows ‘atLeast lo = clamp (lo, 0/0)’ and ‘atMost hi = clamp (0/0, hi)’, which is excellent for providing reasonable NaN handling, but doesn’t say anything about e.g. integers. It does point to the fact that the correct implementations of ‘atLeast’ and ‘atMost’, for consistency with the correct ‘clamp’ for floats, are actually these: atMost = min atLeast = flip max -- rather than just ‘max’ To me, that’s another argument that these should be considered, since it’s the kind of subtle distinction that libraries should be handling for users, along the same lines as the stability of ‘min’ & ‘max’ (namely: ‘a == b ==> (min a b, max a b) == (a, b)’). Again, this doesn’t necessarily need to go in the same MR, but they are closely related. If they were included, it would be necessary to include a note in the documentation that the correct order is ‘atMost hi . atLeast lo’ if someone is applying them separately, but that ‘clamp’ should be preferred for automatically doing this. I don’t know offhand how this would be disrupted down the line if we changed the ‘Ord’ instance for floats to use the IEEE-754 total ordering, but that should also be considered for all these functions. On Thu, Aug 27, 2020 at 9:05 AM Carter Schonwald wrote: > actually, if you look at the associated ticket, we have a version of clamp > that gives the right way to derive the onesided behaviors for even floating > point! (and has the correct / desirable behavior in the presence of NANs! 
) > > On Wed, Aug 26, 2020 at 8:38 PM Jon Purdy > wrote: > >> I’m also strongly for ‘clamp :: (Ord a) => (a, a) -> a -> a’. >> >> Even if we don’t resolve it now, I do want to mention that, in discussing >> this with some acquaintances recently, we agreed that one-sided clamps >> likely warrant a home in ‘Data.Ord’ as well: >> >> atLeast :: (Ord a) => a -> a -> a >> atLeast = max >> {-# INLINE atLeast #-} >> >> atMost :: (Ord a) => a -> a -> a >> atMost = min >> {-# INLINE atMost #-} >> >> clamp :: (Ord a) => (a, a) -> a -> a >> clamp (lower, upper) = atLeast lower . atMost upper >> >> While their implementations are identical to ‘max’ and ‘min’, >> semantically they privilege their arguments differently, serving as >> documentation of intent in code like ‘nonnegative = fmap (atLeast 0)’. The >> hope is that this may help reduce bugs caused by the common error of mixing >> up ‘min’ and ‘max’, owing to the unfortunate false friendship between “at >> least/most” and “the least/most”. >> >> >> On Sun, Aug 16, 2020 at 2:43 AM Henning Thielemann < >> lemming at henning-thielemann.de> wrote: >> >>> >>> On Fri, 14 Aug 2020, Sandy Maguire wrote: >>> >>> > It seems to me that base is missing the very standard function `clamp >>> :: Ord a => a -> a -> a -> a`: >>> > >>> > ```haskell >>> > clamp :: Ord a => a -> a -> a -> a >>> > clamp low high = min high .max low >>> > ``` >>> >>> >>> >>> https://hackage.haskell.org/package/utility-ht-0.0.15/docs/Data-Ord-HT.html#v:limit >>> _______________________________________________ >>> Libraries mailing list >>> Libraries at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>> >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From emilypi at cohomolo.gy Fri Aug 28 00:03:47 2020 From: emilypi at cohomolo.gy (Emily Pillmore) Date: Fri, 28 Aug 2020 00:03:47 +0000 Subject: [core libraries] Fwd: The Char Kind: proposal In-Reply-To: References: Message-ID: CC'ing @core-libraries-committee so we don't lose this. It's important work!

On Fri, Jul 10, 2020 at 12:02 PM, Daniel Rogozin <
daniel.rogozin at serokell.io > wrote:

> Greetings,
>
> I would like to propose and discuss several changes related to the
> character kind. Some of these changes were implemented jointly with Rinat
> Stryungis, my Serokell teammate.
>
> The purpose of this patch is to make it possible to analyse type-level
> strings (symbols) as term-level ones. This feature allows users to
> implement programs such as type-level parsers. Type-level characters need
> the same full-fledged support that strings and numbers already have. In
> addition, it makes sense to introduce a set of type families, counterparts
> of the functions defined in the Data.Char module, so that type-level
> strings and characters can be worked with in the usual way (more or less).
>
> For convenience, it is worth having some of the character-related type
> families as built-ins and generating the rest as type synonyms.
>
> The patch fixes #11342, an issue opened by Alexander Vieth several years
> ago. In this patch, we introduce the Char kind, a kind of type-level
> characters, together with additional type families: type-level
> counterparts of functions from the `Data.Char` module.
> In contrast to Vieth’s approach, we reuse the same Char type rather than
> introducing a separate `Character` kind, and we provide slightly more
> helpers for working with the Char kind; see below.
> You may take a look at the merge request with the proposed updates:
> https://gitlab.haskell.org/ghc/ghc/-/merge_requests/3598
>
> First, an overview of the additional type families implemented in this
> patch.
>
> type family CmpChar (a :: Char) (b :: Char) :: Ordering
>
> Comparison of type-level characters, as a type family. A type-level
> analogue of the function `compare` specialised to characters.
>
> type family LeqChar (a :: Char) (b :: Char) :: Bool
>
> This is a type-level comparison of characters as well. `LeqChar` yields a
> Boolean value and corresponds to `(<=)`.
>
> type family ConsSymbol (a :: Char) (b :: Symbol) :: Symbol
>
> This extends a type-level symbol with a type-level character.
>
> type family UnconsSymbol (a :: Symbol) :: Maybe (Char, Symbol)
>
> This type family yields a type-level `Just` storing the first character
> of a symbol and its tail if the symbol is nonempty, and `Nothing`
> otherwise.
>
> Type-level counterparts of the functions `toUpper`, `toLower`, and
> `toTitle` from `Data.Char`:
>
> type family ToUpper (a :: Char) :: Char
>
> type family ToLower (a :: Char) :: Char
>
> type family ToTitle (a :: Char) :: Char
>
> These type families are type-level analogues of the functions `ord` and
> `chr` from `Data.Char`, respectively:
>
> type family CharToNat (a :: Char) :: Nat
>
> type family NatToChar (a :: Nat) :: Char
>
> A type-level analogue of the function `generalCategory` from `Data.Char`:
>
> type family GeneralCharCategory (a :: Char) :: GeneralCategory
>
> The second group of type families consists of built-in unary predicates.
> All of them are based on their corresponding term-level analogues from
> `Data.Char`. 
The precise list is the following:
>
> type family IsAlpha (a :: Char) :: Bool
> type family IsAlphaNum (a :: Char) :: Bool
> type family IsControl (a :: Char) :: Bool
> type family IsPrint (a :: Char) :: Bool
> type family IsUpper (a :: Char) :: Bool
> type family IsLower (a :: Char) :: Bool
> type family IsSpace (a :: Char) :: Bool
> type family IsDigit (a :: Char) :: Bool
> type family IsOctDigit (a :: Char) :: Bool
> type family IsHexDigit (a :: Char) :: Bool
> type family IsLetter (a :: Char) :: Bool
>
> We also provide several type-level predicates implemented via the
> `GeneralCharCategory` type family:
>
> type IsMark a = IsMarkCategory (GeneralCharCategory a)
> type IsNumber a = IsNumberCategory (GeneralCharCategory a)
> type IsPunctuation a = IsPunctuationCategory (GeneralCharCategory a)
> type IsSymbol a = IsSymbolCategory (GeneralCharCategory a)
> type IsSeparator a = IsSeparatorCategory (GeneralCharCategory a)
>
> Here is an example of such an implementation:
>
> type IsMark a = IsMarkCategory (GeneralCharCategory a)
>
> type family IsMarkCategory (c :: GeneralCategory) :: Bool where
>   IsMarkCategory 'NonSpacingMark       = 'True
>   IsMarkCategory 'SpacingCombiningMark = 'True
>   IsMarkCategory 'EnclosingMark        = 'True
>   IsMarkCategory _                     = 'False
>
> The built-in type families described above are supported by the
> corresponding definitions and functions in
> `compiler/GHC/Builtin/Names.hs`, `compiler/GHC/Builtin/Types.hs`, and
> `compiler/GHC/Builtin/Types/Literals.hs`. 
>
> In addition to type families, our patch contains the following updates:
>
> * parsing the 'x' syntax
> * type-checking 'x' :: Char
> * type-checking Refl :: 'x' :~: 'x'
> * Typeable / TypeRep support
> * template-haskell support
> * Haddock-related updates
> * tests
>
> At the moment, the merge request has some minor imperfections left to
> polish and improve, but we have a prototype of a possible implementation.
> The aim of my email is to receive your feedback on this patch.
>
> Kind regards,
> Danya Rogozin.
>
> --
> You received this message because you are subscribed to the Google Groups
> "haskell-core-libraries" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to haskell-core-libraries+unsubscribe at googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/haskell-core-libraries/CAD_SdCWxBvh%2BU5k87CxZ3SOxqXC%2BmyvE9oh%3DzhMqV2HWe-vzag%40mail.gmail.com.
-------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Mon Aug 31 20:50:05 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 31 Aug 2020 16:50:05 -0400 Subject: The Char Kind: proposal In-Reply-To: References: Message-ID: This is cool! I do think that as GHC works today, these would definitely have to be wired into GHC, and there’s of course the philosophical issue of Unicode changing over time. But this would be a good thing to execute on and add to GHC. We already have some baby stuff around strings/symbols at the type level, and work to support more human-language type-level computation sounds legit! 
On Thu, Aug 27, 2020 at 8:03 PM Emily Pillmore wrote:

> CC'ing @core-libraries-committee so we don't lose this. It's important
> work!
>
> [full text of the proposal quoted above; snipped]
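[Editor's note: a subset of the families proposed in this thread — `CmpChar`, `ConsSymbol`, `UnconsSymbol`, `CharToNat`, `NatToChar` — eventually shipped in `GHC.TypeLits` with GHC 9.2, while the `Data.Char`-style predicates (`IsAlpha`, `ToUpper`, `GeneralCharCategory`, ...) remained proposal-only. A minimal sketch of the core pieces, assuming GHC >= 9.2:]

```haskell
{-# LANGUAGE DataKinds #-}
-- Sketch assuming GHC >= 9.2, where the core families from this
-- proposal (CharToNat, NatToChar, ConsSymbol, UnconsSymbol, CmpChar)
-- landed in GHC.TypeLits.
import Data.Proxy (Proxy (..))
import GHC.TypeLits (CharToNat, ConsSymbol, natVal, symbolVal)

-- Reflect a type-level character's code point back to the term level:
codeOfA :: Integer
codeOfA = natVal (Proxy :: Proxy (CharToNat 'a'))  -- 97

-- Build a Symbol one character at a time, then reflect it:
greeting :: String
greeting = symbolVal (Proxy :: Proxy (ConsSymbol 'h' (ConsSymbol 'i' "")))  -- "hi"
```

This is the term/type round-trip that makes type-level parsers writable: `UnconsSymbol` takes a `Symbol` apart into `Maybe (Char, Symbol)`, and `ConsSymbol` puts it back together.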