From twanvl at gmail.com Fri May 1 16:44:20 2015 From: twanvl at gmail.com (Twan van Laarhoven) Date: Fri, 01 May 2015 18:44:20 +0200 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: Message-ID: <5543AD64.6020500@gmail.com> I agree that Num is the place to put this function, with a default implementation. In my mind it is a special combination of (+) and (*), which both live in Num as well. I dislike the name fma, as that is a three letter acronym with no meaning to people who don't do numeric programming. And by putting the function in Num the name would end up in the Prelude. For further bikeshedding: my proposal for a name would be mulAdd. But fusedMulAdd or fusedMultiplyAdd would also be fine. Twan On 2015-04-30 00:19, Ken T Takusagawa wrote: > On Wed, 29 Apr 2015, Edward Kmett wrote: > >> Good point. If we wanted to we could push this all the way up to Num given the operations >> involved, and I could see that you could benefit from it there for types that have nothing >> to do with floating point, e.g. modular arithmetic could get away with using a single 'mod'. > > I too advocate this go in Num. The place I anticipate > seeing fma being used is in some polymorphic linear algebra > library, and it is not uncommon (having recently done this > myself) to do linear algebra on things that aren't > RealFloat, e.g., Rational, Complex, or number-theoretic > fields. > > --ken > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > From petr.mvd at gmail.com Fri May 1 17:00:55 2015 From: petr.mvd at gmail.com (Petr Pudlák) Date: Fri, 01 May 2015 17:00:55 +0000 Subject: Proposal: liftData for Template Haskell In-Reply-To: <1429269330-sup-7487@sabre> References: <1429269330-sup-7487@sabre> Message-ID: +1 On Fri, 17 Apr 2015 at 13:21, Edward Z.
Yang wrote: > I propose adding the following function to Language.Haskell.TH: > > -- | 'liftData' is a variant of 'lift' in the 'Lift' type class which > -- works for any type with a 'Data' instance. > liftData :: Data a => a -> Q Exp > liftData = dataToExpQ (const Nothing) > > I don't really know which submodule this should come from; > since it uses 'dataToExpQ', you might put it in Language.Haskell.TH.Quote > but arguably 'dataToExpQ' doesn't belong in this module either, > and it only lives there because it is a useful function for defining > quasiquoters and it was described in the quasiquoting paper. > > I might propose getting rid of the 'Lift' class entirely, but you > might prefer that class since it doesn't go through SYB (and have > the attendant slowdown). > > This mode of use of 'dataToExpQ' deserves more attention. > > Discussion period: 1 month > > Cheers, > Edward > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amindfv at gmail.com Fri May 1 17:07:35 2015 From: amindfv at gmail.com (amindfv at gmail.com) Date: Fri, 1 May 2015 13:07:35 -0400 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: <5543AD64.6020500@gmail.com> References: <5543AD64.6020500@gmail.com> Message-ID: <43ACFF03-4B39-4FB8-9CB3-69607746C382@gmail.com> +1 for "mulAdd". The "fused" would be a misnomer if there's a default implementation. Tom On May 1, 2015, at 12:44, Twan van Laarhoven wrote: > I agree that Num is the place to put this function, with a default implementation. In my mind it is a special combination of (+) and (*), which both live in Num as well. > > I dislike the name fma, as that is a three letter acronym with no meaning to people who don't do numeric programming. And by putting the function in Num the name would end up in the Prelude.
> > For further bikeshedding: my proposal for a name would be mulAdd. But fusedMulAdd or fusedMultiplyAdd would also be fine. > > > Twan > > On 2015-04-30 00:19, Ken T Takusagawa wrote: >> On Wed, 29 Apr 2015, Edward Kmett wrote: >> >>> Good point. If we wanted to we could push this all the way up to Num given the operations >>> involved, and I could see that you could benefit from it there for types that have nothing >>> to do with floating point, e.g. modular arithmetic could get away with using a single 'mod'. >> >> I too advocate this go in Num. The place I anticipate >> seeing fma being used is in some polymorphic linear algebra >> library, and it is not uncommon (having recently done this >> myself) to do linear algebra on things that aren't >> RealFloat, e.g., Rational, Complex, or number-theoretic >> fields. >> >> --ken >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries From ezyang at mit.edu Fri May 1 17:09:20 2015 From: ezyang at mit.edu (Edward Z. Yang) Date: Fri, 01 May 2015 10:09:20 -0700 Subject: Template Haskell changes to names and package keys Message-ID: <1430500118-sup-420@sabre> In GHC 7.10, we changed the internal representation of names to be based on package keys (base_XXXXXX) rather than package IDs (base-4.7.0.1), however, we forgot to update the Template Haskell API to track these changes. This led to some bugs in TH code which was synthesizing names by using package name and version directly, e.g. https://ghc.haskell.org/trac/ghc/ticket/10279 We now propose the following changes to the TH API in order to track these changes: 1.
Currently, a TH NameG contains a PkgName, defined as: newtype PkgName = PkgName String This is badly misleading, even in the old world order, since these needed version numbers as well. We propose that this be renamed to PkgKey: newtype PkgKey = PkgKey String mkPackageKey :: String -> PkgKey mkPackageKey = PkgKey 2. Package keys are somewhat hard to synthesize, so we also offer an API for querying the package database of the GHC which is compiling your code for information about packages. So, we introduce a new abstract data type: data Package packageKey :: Package -> PkgKey and some functions for getting packages: searchPackage :: String -- Package name -> String -- Version -> Q [Package] reifyPackage :: PkgKey -> Q Package We could add other functions (e.g., return all packages with a package name). 3. Commonly, a user wants to get the package key of the current package. Following Simon's suggestion, this will be done by augmenting ModuleInfo: data ModuleInfo = ModuleInfo { mi_this_mod :: Module -- new , mi_imports :: [Module] } We'll also add a function for accessing the module package key: modulePackageKey :: Module -> PkgKey And a convenience function for accessing the current package: thisPackageKey :: Q PkgKey thisPackageKey = fmap (modulePackageKey . mi_this_mod) qReifyModule thisPackage :: Q Package thisPackage = reifyPackage =<< thisPackageKey Discussion period: 1 month Thanks, Edward (apologies to cc'd folks, I sent from my wrong email address) From vogt.adam at gmail.com Fri May 1 17:35:34 2015 From: vogt.adam at gmail.com (adam vogt) Date: Fri, 1 May 2015 13:35:34 -0400 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: <5543AD64.6020500@gmail.com> References: <5543AD64.6020500@gmail.com> Message-ID: The Num class is defined in GHC.Num, so Prelude could import GHC.Num hiding (fma) to avoid having another round of prelude changes breaking code.
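[Editor's sketch of the Num proposal being discussed. All names here are invented stand-ins: a `NumLike` class and `Mod7` type rather than the real Num class in GHC.Num. It shows the shape of the idea — a class method with a default, which types like Edward Kmett's modular-arithmetic example can override to use a single `mod`.]

```haskell
-- Stand-in for Num, to illustrate 'mulAdd' with a default implementation.
class NumLike a where
  add, mul :: a -> a -> a
  mulAdd   :: a -> a -> a -> a
  mulAdd x y z = add (mul x y) z   -- default: plain multiply, then add

-- Integers modulo 7, as in the modular-arithmetic example up-thread.
newtype Mod7 = Mod7 Int deriving (Eq, Show)

instance NumLike Mod7 where
  add (Mod7 a) (Mod7 b) = Mod7 ((a + b) `mod` 7)
  mul (Mod7 a) (Mod7 b) = Mod7 ((a * b) `mod` 7)
  -- Override: reduce once at the end, one 'mod' instead of two.
  mulAdd (Mod7 a) (Mod7 b) (Mod7 c) = Mod7 ((a * b + c) `mod` 7)

main :: IO ()
main = print (mulAdd (Mod7 3) (Mod7 4) (Mod7 6))  -- Mod7 4, since 18 `mod` 7 == 4
```

The default keeps every existing instance valid unchanged; only types that can do better need to override.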
On Fri, May 1, 2015 at 12:44 PM, Twan van Laarhoven wrote: > I agree that Num is the place to put this function, with a default > implementation. In my mind it is a special combination of (+) and (*), > which both live in Num as well. > > I dislike the name fma, as that is a three letter acronym with no meaning > to people who don't do numeric programming. And by putting the function in > Num the name would end up in the Prelude. > > For further bikeshedding: my proposal for a name would mulAdd. But > fusedMulAdd or fusedMultiplyAdd would also be fine. > > > Twan > > > On 2015-04-30 00:19, Ken T Takusagawa wrote: > >> On Wed, 29 Apr 2015, Edward Kmett wrote: >> >> Good point. If we wanted to we could push this all the way up to Num >>> given the operations >>> involved, and I could see that you could benefit from it there for types >>> that have nothing >>> to do with floating point, e.g. modular arithmetic could get away with >>> using a single 'mod'. >>> >> >> I too advocate this go in Num. The place I anticipate >> seeing fma being used is in some polymorphic linear algebra >> library, and it is not uncommon (having recently done this >> myself) to do linear algebra on things that aren't >> RealFloat, e.g., Rational, Complex, or number-theoretic >> fields. >> >> --ken >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> >> _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From david.feuer at gmail.com Fri May 1 17:52:36 2015 From: david.feuer at gmail.com (David Feuer) Date: Fri, 1 May 2015 13:52:36 -0400 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: <5543AD64.6020500@gmail.com> Message-ID: I'm somewhat opposed to the Num class in general, and very much opposed to calling floating point representations "numbers" in particular. How are they numbers when they don't obey associative or distributive laws, let alone cancellation, commutativity, ....? I know Carter disagrees with me, but I'll stand my ground, resolute! I suppose adding some more nonsense into the trash heap won't do too much more harm, but I'd much rather see some deeper thought about how we want to deal with floating point. On May 1, 2015 1:35 PM, "adam vogt" wrote: > The Num class is defined in GHC.Num, so Prelude could import GHC.Num > hiding (fma) to avoid having another round of prelude changes breaking code. > > > > On Fri, May 1, 2015 at 12:44 PM, Twan van Laarhoven > wrote: > >> I agree that Num is the place to put this function, with a default >> implementation. In my mind it is a special combination of (+) and (*), >> which both live in Num as well. >> >> I dislike the name fma, as that is a three letter acronym with no meaning >> to people who don't do numeric programming. And by putting the function in >> Num the name would end up in the Prelude. >> >> For further bikeshedding: my proposal for a name would mulAdd. But >> fusedMulAdd or fusedMultiplyAdd would also be fine. >> >> >> Twan >> >> >> On 2015-04-30 00:19, Ken T Takusagawa wrote: >> >>> On Wed, 29 Apr 2015, Edward Kmett wrote: >>> >>> Good point. If we wanted to we could push this all the way up to Num >>>> given the operations >>>> involved, and I could see that you could benefit from it there for >>>> types that have nothing >>>> to do with floating point, e.g. modular arithmetic could get away with >>>> using a single 'mod'. 
>>>> >>> I too advocate this go in Num. The place I anticipate >>> seeing fma being used is in some polymorphic linear algebra >>> library, and it is not uncommon (having recently done this >>> myself) to do linear algebra on things that aren't >>> RealFloat, e.g., Rational, Complex, or number-theoretic >>> fields. >>> >>> --ken >>> _______________________________________________ >>> Libraries mailing list >>> Libraries at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>> >>> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tikhon at jelv.is Fri May 1 18:00:44 2015 From: tikhon at jelv.is (Tikhon Jelvis) Date: Fri, 1 May 2015 11:00:44 -0700 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: <5543AD64.6020500@gmail.com> Message-ID: Would it make sense to create a new class for operations like fma that has accuracy guarantees as part of its typeclass laws? Or would managing a bunch of typeclasses like that create too much syntactic, conceptual or performance overhead for actual use? To me, that seems like it could be better than polluting Num (which, after all, features prominently in the Prelude) but it might make for worse discoverability. If we do add it to Num, I strongly support having a default implementation. We don't want to make implementing a custom numeric type any more difficult than it has to be, and somebody unfamiliar with fma would just manually implement it without any optimizations anyhow or just leave it out, incomplete instantiation warnings notwithstanding.
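[Editor's sketch of the separate-class alternative raised here. The class name `Fused`, its law, and the instances are invented for illustration: the intended law would be that `fma x y z` equals x*y + z computed with a single rounding, which the default satisfies trivially for exact types.]

```haskell
-- A Num subclass whose (hypothetical) law is single-rounding accuracy.
class Num a => Fused a where
  fma :: a -> a -> a -> a
  fma x y z = x * y + z   -- default: two roundings; exact for exact types

-- Exact types can take the default as-is; a Double instance would
-- instead call out to the hardware fma instruction.
instance Fused Integer
instance Fused Rational

main :: IO ()
main = do
  print (fma (2 :: Integer) 3 4)             -- 10
  print (fma (1/2 :: Rational) (1/3) (1/6))  -- 1 % 3
```

With this split, the class membership carries the accuracy guarantee, and Num itself stays untouched.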
Num is already a bit too big for casual use (I rarely care about signum and abs myself), so making it *bigger* is not appealing. Personally, I'm a bit torn on the naming. Something like mulAdd or fusedMultiplyAdd is great for non-experts, but it feels like fma is something that we only expect experts to care about, so perhaps it's better to name it in line with their expectations. On Fri, May 1, 2015 at 10:52 AM, David Feuer wrote: > I'm somewhat opposed to the Num class in general, and very much opposed to > calling floating point representations "numbers" in particular. How are > they numbers when they don't obey associative or distributive laws, let > alone cancellation, commutativity, ....? I know Carter disagrees with me, > but I'll stand my ground, resolute! I suppose adding some more nonsense > into the trash heap won't do too much more harm, but I'd much rather see > some deeper thought about how we want to deal with floating point. > On May 1, 2015 1:35 PM, "adam vogt" wrote: > >> The Num class is defined in GHC.Num, so Prelude could import GHC.Num >> hiding (fma) to avoid having another round of prelude changes breaking code. >> >> >> >> On Fri, May 1, 2015 at 12:44 PM, Twan van Laarhoven >> wrote: >> >>> I agree that Num is the place to put this function, with a default >>> implementation. In my mind it is a special combination of (+) and (*), >>> which both live in Num as well. >>> >>> I dislike the name fma, as that is a three letter acronym with no >>> meaning to people who don't do numeric programming. And by putting the >>> function in Num the name would end up in the Prelude. >>> >>> For further bikeshedding: my proposal for a name would be mulAdd. But >>> fusedMulAdd or fusedMultiplyAdd would also be fine. >>> >>> >>> Twan >>> >>> >>> On 2015-04-30 00:19, Ken T Takusagawa wrote: >>> >>>> On Wed, 29 Apr 2015, Edward Kmett wrote: >>>> >>>> Good point.
If we wanted to we could push this all the way up to Num >>>>> given the operations >>>>> involved, and I could see that you could benefit from it there for >>>>> types that have nothing >>>>> to do with floating point, e.g. modular arithmetic could get away with >>>>> using a single 'mod'. >>>>> >>>> >>>> I too advocate this go in Num. The place I anticipate >>>> seeing fma being used is in some polymorphic linear algebra >>>> library, and it is not uncommon (having recently done this >>>> myself) to do linear algebra on things that aren't >>>> RealFloat, e.g., Rational, Complex, or number-theoretic >>>> fields. >>>> >>>> --ken >>>> _______________________________________________ >>>> Libraries mailing list >>>> Libraries at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>>> >>>> _______________________________________________ >>> Libraries mailing list >>> Libraries at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>> >> >> >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> >> > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Fri May 1 18:11:52 2015 From: allbery.b at gmail.com (Brandon Allbery) Date: Fri, 1 May 2015 18:11:52 +0000 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: <5543AD64.6020500@gmail.com> Message-ID: On Fri, May 1, 2015 at 5:52 PM, David Feuer wrote: > I'm somewhat opposed to the Num class in general, and very much opposed to > calling floating point representations "numbers" in particular. How are > they numbers when they don't obey associative or distributive laws, let > alone cancellation, commutativity, ....? 
I know Carter > TBH I think Num is a lost cause. If you want mathematical numbers, set up a parallel class instead of trying to force a class designed for numbers "in the wild" to be a pure theory class. This operation in particular is *all about* numbers in the wild --- it has no place in theory, it's an optimization for hardware implementations. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From danburton.email at gmail.com Fri May 1 18:54:05 2015 From: danburton.email at gmail.com (Dan Burton) Date: Fri, 1 May 2015 11:54:05 -0700 Subject: Template Haskell changes to names and package keys In-Reply-To: <1430500118-sup-420@sabre> References: <1430500118-sup-420@sabre> Message-ID: Can you give (or link to) the rationale for the internal representation change? What use cases (or new corner cases for ghc-7.10) justify the added complexity here? -- Dan Burton On Fri, May 1, 2015 at 10:09 AM, Edward Z. Yang wrote: > In GHC 7.10, we changed the internal representation of names to > be based on package keys (base_XXXXXX) rather than package IDs > (base-4.7.0.1), however, we forgot to update the Template Haskell > API to track these changes. This lead to some bugs in TH > code which was synthesizing names by using package name and > version directly, e.g. https://ghc.haskell.org/trac/ghc/ticket/10279 > > We now propose the following changes to the TH API in order to > track these changes: > > 1. Currently, a TH NameG contains a PkgName, defined as: > > newtype PkgName = PkgName String > > This is badly misleading, even in the old world order, since > these needed version numbers as well. We propose that this be > renamed to PkgKey: > > newtype PkgKey = PkgKey String > mkPackageKey :: String -> PackageKey > mkPackageKey = PkgKey > > 2. 
Package keys are somewhat hard to synthesize, so we also > offer an API for querying the package database of the GHC which > is compiling your code for information about packages. So, > we introduce a new abstract data type: > > data Package > packageKey :: Package -> PkgKey > > and some functions for getting packages: > > searchPackage :: String -- Package name > -> String -- Version > -> Q [Package] > > reifyPackage :: PkgKey -> Q Package > > We could add other functions (e.g., return all packages with a > package name). > > 3. Commonly, a user wants to get the package key of the current > package. Following Simon's suggestion, this will be done by > augmenting ModuleInfo: > > data ModuleInfo = > ModuleInfo { mi_this_mod :: Module -- new > , mi_imports :: [Module] } > > We'll also add a function for accessing the module package key: > > modulePackageKey :: Module -> PkgKey > > And a convenience function for accessing the current module: > > thisPackageKey :: Q PkgKey > thisPackageKey = fmap (modulePackageKey . mi_this_mod) qReifyModule > > thisPackage :: Q Package > thisPackage = reifyPackage =<< thisPackageKey > > Discussion period: 1 month > > Thanks, > Edward > > (apologies to cc'd folks, I sent from my wrong email address) > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Fri May 1 20:08:33 2015 From: ezyang at mit.edu (Edward Z. Yang) Date: Fri, 01 May 2015 13:08:33 -0700 Subject: Template Haskell changes to names and package keys In-Reply-To: References: <1430500118-sup-420@sabre> Message-ID: <1430510852-sup-8622@sabre> Hello Dan, https://ghc.haskell.org/trac/ghc/ticket/9265 I think summarizes the reason why we shifted to this. It's also important infrastructural groundwork for a more sophisticated module system in Haskell. 
Edward Excerpts from Dan Burton's message of 2015-05-01 11:54:05 -0700: > Can you give (or link to) the rationale for the internal representation > change? What use cases (or new corner cases for ghc-7.10) justify the added > complexity here? > > -- Dan Burton > > On Fri, May 1, 2015 at 10:09 AM, Edward Z. Yang wrote: > > > In GHC 7.10, we changed the internal representation of names to > > be based on package keys (base_XXXXXX) rather than package IDs > > (base-4.7.0.1), however, we forgot to update the Template Haskell > > API to track these changes. This lead to some bugs in TH > > code which was synthesizing names by using package name and > > version directly, e.g. https://ghc.haskell.org/trac/ghc/ticket/10279 > > > > We now propose the following changes to the TH API in order to > > track these changes: > > > > 1. Currently, a TH NameG contains a PkgName, defined as: > > > > newtype PkgName = PkgName String > > > > This is badly misleading, even in the old world order, since > > these needed version numbers as well. We propose that this be > > renamed to PkgKey: > > > > newtype PkgKey = PkgKey String > > mkPackageKey :: String -> PackageKey > > mkPackageKey = PkgKey > > > > 2. Package keys are somewhat hard to synthesize, so we also > > offer an API for querying the package database of the GHC which > > is compiling your code for information about packages. So, > > we introduce a new abstract data type: > > > > data Package > > packageKey :: Package -> PkgKey > > > > and some functions for getting packages: > > > > searchPackage :: String -- Package name > > -> String -- Version > > -> Q [Package] > > > > reifyPackage :: PkgKey -> Q Package > > > > We could add other functions (e.g., return all packages with a > > package name). > > > > 3. Commonly, a user wants to get the package key of the current > > package. 
Following Simon's suggestion, this will be done by > > augmenting ModuleInfo: > > > > data ModuleInfo = > > ModuleInfo { mi_this_mod :: Module -- new > > , mi_imports :: [Module] } > > > > We'll also add a function for accessing the module package key: > > > > modulePackageKey :: Module -> PkgKey > > > > And a convenience function for accessing the current module: > > > > thisPackageKey :: Q PkgKey > > thisPackageKey = fmap (modulePackageKey . mi_this_mod) qReifyModule > > > > thisPackage :: Q Package > > thisPackage = reifyPackage =<< thisPackageKey > > > > Discussion period: 1 month > > > > Thanks, > > Edward > > > > (apologies to cc'd folks, I sent from my wrong email address) > > _______________________________________________ > > Libraries mailing list > > Libraries at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > From danburton.email at gmail.com Fri May 1 22:05:18 2015 From: danburton.email at gmail.com (Dan Burton) Date: Fri, 1 May 2015 15:05:18 -0700 Subject: Template Haskell changes to names and package keys In-Reply-To: <1430510852-sup-8622@sabre> References: <1430500118-sup-420@sabre> <1430510852-sup-8622@sabre> Message-ID: Thanks Edward. #3 looks like an obvious win. #1 looks like harmless bikeshedding. (Keep the old mkPkgName and pkgString and `type PkgName = PkgKey` as synonyms for a deprecation cycle or two?). As for #2, searchPackage struck me as odd at first. What could a TH user do with a list of packages that are all "the same version"? As specified, it's rather opaque. However, it occurs to me that if the Package data type has some information about, for example, the [Package] that it was built against, then perhaps some useful things could be done via TH with this information. So upon second look, it seems to me that the proposed points are really not all that complex. +1 from me, fwiw -- Dan Burton On Fri, May 1, 2015 at 1:08 PM, Edward Z. 
Yang wrote: > Hello Dan, > > https://ghc.haskell.org/trac/ghc/ticket/9265 I think summarizes > the reason why we shifted to this. It's also important infrastructural > groundwork for a more sophisticated module system in Haskell. > > Edward > > Excerpts from Dan Burton's message of 2015-05-01 11:54:05 -0700: > > Can you give (or link to) the rationale for the internal representation > > change? What use cases (or new corner cases for ghc-7.10) justify the > added > > complexity here? > > > > -- Dan Burton > > > > On Fri, May 1, 2015 at 10:09 AM, Edward Z. Yang wrote: > > > > > In GHC 7.10, we changed the internal representation of names to > > > be based on package keys (base_XXXXXX) rather than package IDs > > > (base-4.7.0.1), however, we forgot to update the Template Haskell > > > API to track these changes. This lead to some bugs in TH > > > code which was synthesizing names by using package name and > > > version directly, e.g. https://ghc.haskell.org/trac/ghc/ticket/10279 > > > > > > We now propose the following changes to the TH API in order to > > > track these changes: > > > > > > 1. Currently, a TH NameG contains a PkgName, defined as: > > > > > > newtype PkgName = PkgName String > > > > > > This is badly misleading, even in the old world order, since > > > these needed version numbers as well. We propose that this be > > > renamed to PkgKey: > > > > > > newtype PkgKey = PkgKey String > > > mkPackageKey :: String -> PackageKey > > > mkPackageKey = PkgKey > > > > > > 2. Package keys are somewhat hard to synthesize, so we also > > > offer an API for querying the package database of the GHC which > > > is compiling your code for information about packages. 
So, > > > we introduce a new abstract data type: > > > > > > data Package > > > packageKey :: Package -> PkgKey > > > > > > and some functions for getting packages: > > > > > > searchPackage :: String -- Package name > > > -> String -- Version > > > -> Q [Package] > > > > > > reifyPackage :: PkgKey -> Q Package > > > > > > We could add other functions (e.g., return all packages with a > > > package name). > > > > > > 3. Commonly, a user wants to get the package key of the current > > > package. Following Simon's suggestion, this will be done by > > > augmenting ModuleInfo: > > > > > > data ModuleInfo = > > > ModuleInfo { mi_this_mod :: Module -- new > > > , mi_imports :: [Module] } > > > > > > We'll also add a function for accessing the module package key: > > > > > > modulePackageKey :: Module -> PkgKey > > > > > > And a convenience function for accessing the current module: > > > > > > thisPackageKey :: Q PkgKey > > > thisPackageKey = fmap (modulePackageKey . mi_this_mod) > qReifyModule > > > > > > thisPackage :: Q Package > > > thisPackage = reifyPackage =<< thisPackageKey > > > > > > Discussion period: 1 month > > > > > > Thanks, > > > Edward > > > > > > (apologies to cc'd folks, I sent from my wrong email address) > > > _______________________________________________ > > > Libraries mailing list > > > Libraries at haskell.org > > > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mgsloan at gmail.com Fri May 1 23:12:30 2015 From: mgsloan at gmail.com (Michael Sloan) Date: Fri, 1 May 2015 16:12:30 -0700 Subject: Template Haskell changes to names and package keys In-Reply-To: <1430500118-sup-420@sabre> References: <1430500118-sup-420@sabre> Message-ID: +1 on #2 and #3. Being able to get all the packages that have a particular name sounds good! Note that if "-hide-package" / "-hide-all-packages" is used, I wouldn't want those packages in the results. 
Considering that we also have the Module / ModuleInfo datatypes, to me it makes sense that we'd also be able to get a [Module] from a Package. Like Dan, I'm a little less keen on #1. Here's why: * PkgName is in the internal "Language.Haskell.TH.Syntax" module * It isn't actually documented what it means, so its meaning can be freely bent. There's no guarantee that you should be able to generate them. * I couldn't find any examples of its usage that would be broken by this new semantics. I've done a search on github[1] to find usages of PkgName, and I only found one use[2] that uses PkgName which would be affected by this change. This example is in the TH lib itself, and so would hopefully be fixed by this change. On the other hand, since it's rarely used, such an API breakage wouldn't be that impactful, so it's not that big of a deal. [1] https://github.com/search?p=1&q=language%3Ahaskell+%22Language.Haskell.TH%22+%22PkgName%22+&ref=searchresults&type=Code&utf8=%E2%9C%93 [2] https://github.com/phischu/fragnix/blob/fc98a1c6c486440ed047c8b630eb4e08041f52e4/tests/packages/scotty/Language.Haskell.TH.Quote.hs#L36 On Fri, May 1, 2015 at 10:09 AM, Edward Z. Yang wrote: > In GHC 7.10, we changed the internal representation of names to > be based on package keys (base_XXXXXX) rather than package IDs > (base-4.7.0.1), however, we forgot to update the Template Haskell > API to track these changes. This lead to some bugs in TH > code which was synthesizing names by using package name and > version directly, e.g. https://ghc.haskell.org/trac/ghc/ticket/10279 > > We now propose the following changes to the TH API in order to > track these changes: > > 1. Currently, a TH NameG contains a PkgName, defined as: > > newtype PkgName = PkgName String > > This is badly misleading, even in the old world order, since > these needed version numbers as well. We propose that this be
We propose that this be > renamed to PkgKey: > > newtype PkgKey = PkgKey String > mkPackageKey :: String -> PackageKey > mkPackageKey = PkgKey > > 2. Package keys are somewhat hard to synthesize, so we also > offer an API for querying the package database of the GHC which > is compiling your code for information about packages. So, > we introduce a new abstract data type: > > data Package > packageKey :: Package -> PkgKey > > and some functions for getting packages: > > searchPackage :: String -- Package name > -> String -- Version > -> Q [Package] > > reifyPackage :: PkgKey -> Q Package > > We could add other functions (e.g., return all packages with a > package name). > > 3. Commonly, a user wants to get the package key of the current > package. Following Simon's suggestion, this will be done by > augmenting ModuleInfo: > > data ModuleInfo = > ModuleInfo { mi_this_mod :: Module -- new > , mi_imports :: [Module] } > > We'll also add a function for accessing the module package key: > > modulePackageKey :: Module -> PkgKey > > And a convenience function for accessing the current module: > > thisPackageKey :: Q PkgKey > thisPackageKey = fmap (modulePackageKey . mi_this_mod) qReifyModule > > thisPackage :: Q Package > thisPackage = reifyPackage =<< thisPackageKey > > Discussion period: 1 month > > Thanks, > Edward > > (apologies to cc'd folks, I sent from my wrong email address) > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Sat May 2 00:45:24 2015 From: ezyang at mit.edu (Edward Z. 
Yang) Date: Fri, 01 May 2015 17:45:24 -0700 Subject: Template Haskell changes to names and package keys In-Reply-To: References: <1430500118-sup-420@sabre> <1430510852-sup-8622@sabre> Message-ID: <1430527268-sup-8867@sabre> Excerpts from Dan Burton's message of 2015-05-01 15:05:18 -0700: > #1 looks like harmless bikeshedding. (Keep the old mkPkgName and pkgString > and `type PkgName = PkgKey` as synonyms for a deprecation cycle or two?). It's not that harmless: there really was a semantic change, and most users of mkPkgName did, in fact, break when GHC 7.10 was released (this change was prompted by two separate reports of TH code which was synthesizing these IDs breaking.) > As for #2, searchPackage struck me as odd at first. What could a TH user do > with a list of packages that are all "the same version"? As specified, it's > rather opaque. However, it occurs to me that if the Package data type has > some information about, for example, the [Package] that it was built > against, then perhaps some useful things could be done via TH with this > information. Yes. Admittedly, I have a hard time thinking of uses for this function, but it doesn't cost us too much to add. Edward From ezyang at mit.edu Sat May 2 00:48:43 2015 From: ezyang at mit.edu (Edward Z. Yang) Date: Fri, 01 May 2015 17:48:43 -0700 Subject: Template Haskell changes to names and package keys In-Reply-To: References: <1430500118-sup-420@sabre> Message-ID: <1430527532-sup-7880@sabre> > Being able to get all the packages that have a particular name sounds > good! Note that if "-hide-package" / "-hide-all-packages" is used, I > wouldn't want those packages in the results. This is not entirely clear; you could imagine a package also coming with an 'exposed' bit, so you could filter for exposed/not exposed packages. > Like Dan, I'm a little less keen on #1.
Here's why: > > * PkgName is in the internal "Language.Haskell.TH.Syntax" module > > * It isn't actually documented what it means, so it's meaning can be freely > bent. There's no guarantee that you should be able to generate them. > > * I couldn't find any examples of its usage that would be broken by this > new semantics. I've done a search on github[1] to find usages of PkgName, > and I only found one use[2] that uses PkgName which would be affected by > this change. This example is in the TH lib itself, and so would hopefully > be fixed by this change. > > On the other hand, since it's rarely used, such an API breakage wouldn't be > that impactful, so it's not that big of a deal. Actually, this proposal was prompted by two separate reports of breakage: https://github.com/ekmett/lens/issues/496 https://ghc.haskell.org/trac/ghc/ticket/10279 So, like it or not, people seem to be depending on this API, which means it's worth at least discussing a little when we change it :) Cheers, Edward From mgsloan at gmail.com Sat May 2 01:05:49 2015 From: mgsloan at gmail.com (Michael Sloan) Date: Fri, 1 May 2015 18:05:49 -0700 Subject: Template Haskell changes to names and package keys In-Reply-To: <1430527532-sup-7880@sabre> References: <1430500118-sup-420@sabre> <1430527532-sup-7880@sabre> Message-ID: On Fri, May 1, 2015 at 5:48 PM, Edward Z. Yang wrote: > > Being able to get all the packages that have a particular name sounds > > good! Note that if "-hide-package" / "-hide-all-packages" is used, I > > wouldn't want those packages in the results. > > This is not entirely clear; you could imagine a package also coming with > an 'exposed' bit, so you could filter for exposed/not exposed pacakges. > > > Like Dan, I'm a little less keen on #1. Here's why: > > > > * PkgName is in the internal "Language.Haskell.TH.Syntax" module > > > > * It isn't actually documented what it means, so it's meaning can be > freely > > bent. 
There's no guarantee that you should be able to generate them. > > > > * I couldn't find any examples of its usage that would be broken by this > > new semantics. I've done a search on github[1] to find usages of > PkgName, > > and I only found one use[2] that uses PkgName which would be affected by > > this change. This example is in the TH lib itself, and so would > hopefully > > be fixed by this change. > > > > On the other hand, since it's rarely used, such an API breakage wouldn't > be > > that impactful, so it's not that big of a deal. > > Actually, this proposal was prompted by two separate reports of > breakage: > > https://github.com/ekmett/lens/issues/496 > https://ghc.haskell.org/trac/ghc/ticket/10279 > > So, like it or not, people seem to be depending on this API, which means > it's worth at least discussing a little when we change it :) > Hmm, while that is a rather exotic circumstance (building a stage1 cross compiler), it is legitimate. Also, it occurs to me that one benefit of breaking compatibility is not only for old code, but also future code. This will ensure that authors of new TH code are aware of this distinction if they attempt to build on older GHCs. So, I don't really feel strongly about it, and changing to PkgKey seems fine. TH doesn't need to be the stablest of APIs. > Cheers, > Edward > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Sat May 2 01:24:09 2015 From: ezyang at mit.edu (Edward Z. Yang) Date: Fri, 01 May 2015 18:24:09 -0700 Subject: Proposal: NF newtype In-Reply-To: <5538045A.6020602@ro-che.info> References: <1423557418-sup-9734@sabre> <1429645452-sup-3608@sabre> <1429689481-sup-2618@sabre> <1429728818-sup-3383@sabre> <5538045A.6020602@ro-che.info> Message-ID: <1430529581-sup-2908@sabre> I just realized that the thread of this conversation was not fully finished. For Show/Read, there are two flex points: (1) Should `show (mkNF 2)` output "mkNF 2" or "UnsafeNF 2"?
I have a preference for the former but then we need to handwrite the instance. We could also change the definition of NF to just name its constructor NF and export a helper function unsafeMkNF. (2) Should `read "mkNF 2"` execute rnf on the result of the inner Read instance? If the answer is no, we can use the default; if the answer is yes, we want an instance like: instance (NFData a, Read a) => Read (NF a) where readPrec = parens . prec 10 $ do Ident "makeNF" <- lexP m <- step readPrec return (m `deepseq` UnsafeNF m) I lean towards a safe-by-default API, with an unsafeReadNF, but maybe if default Read instances are not lazy, we should be OK (e.g. (read $ show (repeat 1)) :: [Int] hangs; it isn't identity). Edward Excerpts from Roman Cheplyaka's message of 2015-04-22 13:28:10 -0700: > On 22/04/15 21:54, Edward Z. Yang wrote: > > But it is an interesting question whether or not 'UnsafeNF' should be > > used, since the value read in is known to be in normal form. > > Is it? > > newtype X = X Int > deriving Show > instance Read X where > readsPrec n = map (first $ X . trace "eval") . readsPrec n > > > length (read "[1,2,3]" :: [X]) > 3 > > read "[1,2,3]" :: [X] > [X eval > 1,X eval > 2,X eval > 3] > > Or did you mean something else? > > > Excerpts from Henning Thielemann's message of 2015-04-22 19:49:56 +0100: > >> > >> On Wed, 22 Apr 2015, Dan Burton wrote: > >> > >>> A hand-written read makes more sense to me in this case: > >>> read = makeNF . read > >>> show = show . getNF > >> > >> Show and Read instances should process Strings representing Haskell code, > >> and I guess, Haskell code with the same type as the represented value. > >> Thus the NF should be part of the formatted value.
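For readers following along, here is a self-contained sketch of the newtype and the safe-by-default instances being discussed. The names NF, makeNF, getNF and UnsafeNF are the proposed API, not an existing library, so everything below is illustrative rather than the actual implementation:

```haskell
import Control.DeepSeq (NFData, deepseq)

-- Sketch of the proposed NF newtype; the real proposal would hide the
-- UnsafeNF constructor and export only makeNF and getNF.
newtype NF a = UnsafeNF a

-- Force the payload to normal form before wrapping it.
makeNF :: NFData a => a -> NF a
makeNF x = x `deepseq` UnsafeNF x

getNF :: NF a -> a
getNF (UnsafeNF x) = x

-- Show in terms of the public smart constructor, so the output is a
-- valid Haskell expression of the same type.
instance Show a => Show (NF a) where
  showsPrec d (UnsafeNF x) =
    showParen (d > 10) $ showString "makeNF " . showsPrec 11 x

-- Read goes through makeNF, so the parsed payload is re-normalised:
-- the safe-by-default option.
instance (NFData a, Read a) => Read (NF a) where
  readsPrec d = readParen (d > 10) $ \s ->
    [ (makeNF x, rest') | ("makeNF", rest) <- lex s
                        , (x, rest')      <- readsPrec 11 rest ]
```

With these instances, `read (show (makeNF xs))` round-trips, and reading forces the value to normal form as a side effect of `makeNF`.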
> > _______________________________________________ > > Libraries mailing list > > Libraries at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > From ivan.miljenovic at gmail.com Sat May 2 02:01:22 2015 From: ivan.miljenovic at gmail.com (Ivan Lazar Miljenovic) Date: Sat, 2 May 2015 12:01:22 +1000 Subject: Proposal: NF newtype In-Reply-To: <1430529581-sup-2908@sabre> References: <1423557418-sup-9734@sabre> <1429645452-sup-3608@sabre> <1429689481-sup-2618@sabre> <1429728818-sup-3383@sabre> <5538045A.6020602@ro-che.info> <1430529581-sup-2908@sabre> Message-ID: On 2 May 2015 at 11:24, Edward Z. Yang wrote: > I just realized that the thread of this conversation was not fully > finished. > > For Show/Read, there are two flex points: > > (1) Should `show (mkNF 2)` output "mkNF 2" or "UnsafeNF 2"? > I have a preference for the former but then we need to handwrite > the instance. We could also change the definition of NF to just > name its constructor NF and export a helper function unsafeMkNF I think it should output "mkNF 2", unless you're going to be exporting the constructor for people to use. > > (2) Should `read "mkNF 2"` execute rnf on the result of the inner > Read instance? If the answer is no, we can use the default; if > the answer is yes, we want an instance like: > > instance (NFData a, Read a) => Read (NF a) where > readPrec = parens . prec 10 $ do > Ident "makeNF" <- lexP > m <- step readPrec > return (m `deepseq` UnsafeNF m) I think this is the right approach, though maybe you should just use "makeNF m" internally rather than copying its definition in the last line. > > I lean towards a safe by default API, with an unsafeReadNF, but > maybe if default Read instances are not lazy, we should be OK > (e.g. (read $ show (repeat 1)) :: [Int] hangs; it isn't identity). > > Edward > > Excerpts from Roman Cheplyaka's message of 2015-04-22 13:28:10 -0700: >> On 22/04/15 21:54, Edward Z.
Yang wrote: >> > But it is an interesting question whether or not 'UnsafeNF' should be >> > used, since the value read in is known to be in normal form. >> >> Is it? >> >> newtype X = X Int >> deriving Show >> instance Read X where >> readsPrec n = map (first $ X . trace "eval") . readsPrec n >> >> > length (read "[1,2,3]" :: [X]) >> 3 >> > read "[1,2,3]" :: [X] >> [X eval >> 1,X eval >> 2,X eval >> 3] >> >> Or did you mean something else? >> >> > Excerpts from Henning Thielemann's message of 2015-04-22 19:49:56 +0100: >> >> >> >> On Wed, 22 Apr 2015, Dan Burton wrote: >> >> >> >>> A hand-written read makes more sense to me in this case: >> >>> read = makeNF . read >> >>> show = show . getNF >> >> >> >> Show and Read instances should process Strings representing Haskell code, >> >> and I guess, Haskell code with the same type as the represented value. >> >> Thus the NF should be part of the formatted value. >> > _______________________________________________ >> > Libraries mailing list >> > Libraries at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries -- Ivan Lazar Miljenovic Ivan.Miljenovic at gmail.com http://IvanMiljenovic.wordpress.com From carter.schonwald at gmail.com Sat May 2 02:47:06 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Fri, 1 May 2015 22:47:06 -0400 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: <5543AD64.6020500@gmail.com> Message-ID: Well said, Brandon. FMA support is absolutely a mathematical accuracy and performance engineering thing (except when it hinders performance). It is worth noting that most modern CPUs support several *different* versions of the FMA operation, but that's beyond the scope / goal of this proposal, I think.
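For readers outside numeric programming, the operation being debated is just multiply-then-add with a single rounding. The naive default being floated for Num (the name mulAdd is one of the bikeshed candidates, not a settled API) can be sketched as:

```haskell
-- Naive default: semantically x*y + z, but for floating point this
-- rounds twice, so it is exactly the definition a hardware-FMA
-- instance (e.g. for Double) would override with a single-rounding
-- primop.
mulAdd :: Num a => a -> a -> a -> a
mulAdd x y z = x * y + z
```

For exact types such as Integer or Rational this default is already optimal; the single-rounding guarantee only matters for Float and Double.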
but yeah, for all of Num's warts, probably the right place to put it, with a default implementation in terms of * and + (and compiler supplied primops for applicable prelude types) On Fri, May 1, 2015 at 2:11 PM, Brandon Allbery wrote: > On Fri, May 1, 2015 at 5:52 PM, David Feuer wrote: > >> I'm somewhat opposed to the Num class in general, and very much opposed >> to calling floating point representations "numbers" in particular. How are >> they numbers when they don't obey associative or distributive laws, let >> alone cancellation, commutativity, ....? I know Carter >> > TBH I think Num is a lost cause. If you want mathematical numbers, set up > a parallel class instead of trying to force a class designed for numbers > "in the wild" to be a pure theory class. > > This operation in particular is *all about* numbers in the wild --- it has > no place in theory, it's an optimization for hardware implementations. > > -- > brandon s allbery kf8nh sine nomine > associates > allbery.b at gmail.com > ballbery at sinenomine.net > unix, openafs, kerberos, infrastructure, xmonad > http://sinenomine.net > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From watashi at watashi.ws Sat May 2 04:02:46 2015 From: watashi at watashi.ws (watashi) Date: Fri, 1 May 2015 21:02:46 -0700 Subject: Better implementation of Show instance for Data.Scientific Message-ID: Hi there, Data.Scientific just introduced a new implementation of Show instance to display "integers" without radix point or fractional part and display "real numbers" more nicely. 
While I do like the new formatted output, the implementation seems overkill and scared me, so I commented my thoughts on github: https://github.com/basvandijk/scientific/commit/9f6cbe9192d88becb7dcf3dbce3b6018ba21d9ca Now, I post here to ask for people's opinion on this as suggested by Bas van Dijk. Thanks. -- Sincerely, Zejun Wu (watashi) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekmett at gmail.com Sat May 2 06:18:43 2015 From: ekmett at gmail.com (Edward Kmett) Date: Sat, 2 May 2015 02:18:43 -0400 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: <5543AD64.6020500@gmail.com> Message-ID: The main problem that I find in practice with the 'just exile it to another class' argument is that it creates a pain point. Do you implement against the worse implementations of exp, or do you use the specialized class that provides harder guarantees for expm1 to avoid destroying all precision very near 1? It means that anything that builds on top of the abstraction you provide gets built two ways at least. I wound up with a lot of code that was written against Monad and Functor separately and spent much of my time dealing with nonsensical "made up" organization issues like "is this version the liftM-like one or the fmap-like one?" If it is in the class then folks can just reach out and use it. (<$) being directly in Functor means you can just reach for it and get better sharing when you 'refill' a functor with a constant. If it were exiled to some other place, there'd always be the worry about whether you should implement for portability or precision, and you'll never get to stop thinking about it. -Edward On Fri, May 1, 2015 at 2:00 PM, Tikhon Jelvis wrote: > Would it make sense to create a new class for operations like fma that has > accuracy guarantees as part of its typeclass laws?
Or would managing a > bunch of typeclasses like that create too much syntactic, conceptual or > performance overhead for actual use? > > To me, that seems like it could be better than polluting Num, which, after > all, features prominently in the Prelude, but it might make for worse > discoverability. > > If we do add it to Num, I strongly support having a default > implementation. We don't want to make implementing a custom numeric type > any more difficult than it has to be, and somebody unfamiliar with fma > would just manually implement it without any optimizations anyhow or just > leave it out, incomplete instantiation warnings notwithstanding. Num is > already a bit too big for casual use (I rarely care about signum and abs > myself), so making it *bigger* is not appealing. > > Personally, I'm a bit torn on the naming. Something like mulAdd or > fusedMultiplyAdd is great for non-experts, but it feels like fma is > something that we only expect experts to care about, so perhaps it's better > to name it in line with their expectations. > > On Fri, May 1, 2015 at 10:52 AM, David Feuer > wrote: >> I'm somewhat opposed to the Num class in general, and very much opposed >> to calling floating point representations "numbers" in particular. How are >> they numbers when they don't obey associative or distributive laws, let >> alone cancellation, commutativity, ....? I know Carter disagrees with me, >> but I'll stand my ground, resolute! I suppose adding some more nonsense >> into the trash heap won't do too much more harm, but I'd much rather see >> some deeper thought about how we want to deal with floating point. >> On May 1, 2015 1:35 PM, "adam vogt" wrote: >> >>> The Num class is defined in GHC.Num, so Prelude could import GHC.Num >>> hiding (fma) to avoid having another round of prelude changes breaking code.
>>> >>> >>> >>> On Fri, May 1, 2015 at 12:44 PM, Twan van Laarhoven >>> wrote: >>> >>>> I agree that Num is the place to put this function, with a default >>>> implementation. In my mind it is a special combination of (+) and (*), >>>> which both live in Num as well. >>>> >>>> I dislike the name fma, as that is a three letter acronym with no >>>> meaning to people who don't do numeric programming. And by putting the >>>> function in Num the name would end up in the Prelude. >>>> >>>> For further bikeshedding: my proposal for a name would mulAdd. But >>>> fusedMulAdd or fusedMultiplyAdd would also be fine. >>>> >>>> >>>> Twan >>>> >>>> >>>> On 2015-04-30 00:19, Ken T Takusagawa wrote: >>>> >>>>> On Wed, 29 Apr 2015, Edward Kmett wrote: >>>>> >>>>> Good point. If we wanted to we could push this all the way up to Num >>>>>> given the operations >>>>>> involved, and I could see that you could benefit from it there for >>>>>> types that have nothing >>>>>> to do with floating point, e.g. modular arithmetic could get away >>>>>> with using a single 'mod'. >>>>>> >>>>> >>>>> I too advocate this go in Num. The place I anticipate >>>>> seeing fma being used is in some polymorphic linear algebra >>>>> library, and it is not uncommon (having recently done this >>>>> myself) to do linear algebra on things that aren't >>>>> RealFloat, e.g., Rational, Complex, or number-theoretic >>>>> fields. 
>>>>> >>>>> --ken >>>>> _______________________________________________ >>>>> Libraries mailing list >>>>> Libraries at haskell.org >>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>>>> >>>>> _______________________________________________ >>>> Libraries mailing list >>>> Libraries at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>>> >>> >>> >>> _______________________________________________ >>> Libraries mailing list >>> Libraries at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>> >>> >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> >> > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bertram.felgenhauer at googlemail.com Sat May 2 09:38:14 2015 From: bertram.felgenhauer at googlemail.com (Bertram Felgenhauer) Date: Sat, 2 May 2015 11:38:14 +0200 Subject: Template Haskell changes to names and package keys In-Reply-To: References: <1430500118-sup-420@sabre> Message-ID: <20150502093814.GA1771@24f89f8c-e6a1-4e75-85ee-bb8a3743bb9f> Hi Michael, > Being able to get all the packages that have a particular name sounds > good! Note that if "-hide-package" / "-hide-all-packages" is used, I > wouldn't want those packages in the results. Same here. > Like Dan, I'm a little less keen on #1. Here's why: > [...] > > * I couldn't find any examples of its usage that would be broken by this > new semantics. I've done a search on github[1] to find usages of PkgName, > and I only found one use[2] that uses PkgName which would be affected by > this change. This example is in the TH lib itself, and so would hopefully > be fixed by this change. 
Did you find any uses where an existing PkgName is reused to construct a new name? I believe this would be an important data point for deciding whether renaming the constructor is a good idea or not. Basically I'm asking whether the majority of uses of PkgName "out in the wild" right now is correct or incorrect. Cheers, Bertram From rf at rufflewind.com Sat May 2 14:16:57 2015 From: rf at rufflewind.com (Phil Ruffwind) Date: Sat, 2 May 2015 10:16:57 -0400 Subject: Better implementation of Show instance for Data.Scientific In-Reply-To: References: Message-ID: I think it's strange to build the display mode into the data type Scientific, unless Scientific's primary purpose is for *display* rather than representation. For one thing, it mixes two separate roles into one. For another, it would violate the usual expectations for Eq: there would be x and y such that x == y but show x /= show y I prefer the approach of using a separate function (e.g. 'pretty'), as it gives the user more control in situations where Show's default format does not satisfy the needs. If Aeson is not formatting numbers in the way the user expects, then that's really a problem with Aeson. OTOH, I think it makes sense to automatically drop the '.0' when the number is integral, and that could be built into the Show instance. Of course, there's the question of whether such a change would break other packages... From lemming at henning-thielemann.de Sat May 2 14:23:25 2015 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Sat, 2 May 2015 16:23:25 +0200 (CEST) Subject: Better implementation of Show instance for Data.Scientific In-Reply-To: References: Message-ID: On Sat, 2 May 2015, Phil Ruffwind wrote: > OTOH, I think it makes sense to automatically drop the '.0' when the > number is integral, and that could be built into the Show instance. Of > course, there's the question of whether such a change would break other > packages... 
I think if a program relies on a certain formatting in the result of 'show', it is just a bug. Btw. Show should always show valid Haskell expressions as "deriving Show"-generated implementations do. From ekmett at gmail.com Sat May 2 17:25:07 2015 From: ekmett at gmail.com (Edward Kmett) Date: Sat, 2 May 2015 13:25:07 -0400 Subject: Template Haskell changes to names and package keys In-Reply-To: <1430496545-sup-7252@sabre> References: <1430496545-sup-7252@sabre> Message-ID: I'm fully on board with cleaning up the story around package keys, so +1 on the mission, but I have very few preferences on the particulars. -Edward On Fri, May 1, 2015 at 1:06 PM, Edward Z. Yang wrote: > In GHC 7.10, we changed the internal representation of names to > be based on package keys (base_XXXXXX) rather than package IDs > (base-4.7.0.1), however, we forgot to update the Template Haskell > API to track these changes. This lead to some bugs in TH > code which was synthesizing names by using package name and > version directly, e.g. https://ghc.haskell.org/trac/ghc/ticket/10279 > > We now propose the following changes to the TH API in order to > track these changes: > > 1. Currently, a TH NameG contains a PkgName, defined as: > > newtype PkgName = PkgName String > > This is badly misleading, even in the old world order, since > these needed version numbers as well. We propose that this be > renamed to PkgKey: > > newtype PkgKey = PkgKey String > mkPackageKey :: String -> PackageKey > mkPackageKey = PkgKey > > 2. Package keys are somewhat hard to synthesize, so we also > offer an API for querying the package database of the GHC which > is compiling your code for information about packages. 
So, > we introduce a new abstract data type: > > data Package > packageKey :: Package -> PkgKey > > and some functions for getting packages: > > searchPackage :: String -- Package name > -> String -- Version > -> Q [Package] > > reifyPackage :: PkgKey -> Q Package > > We could add other functions (e.g., return all packages with a > package name). > > 3. Commonly, a user wants to get the package key of the current > package. Following Simon's suggestion, this will be done by > augmenting ModuleInfo: > > data ModuleInfo = > ModuleInfo { mi_this_mod :: Module -- new > , mi_imports :: [Module] } > > We'll also add a function for accessing the module package key: > > modulePackageKey :: Module -> PkgKey > > And a convenience function for accessing the current module: > > thisPackageKey :: Q PkgKey > thisPackageKey = fmap (modulePackageKey . mi_this_mod) qReifyModule > > thisPackage :: Q Package > thisPackage = reifyPackage =<< thisPackageKey > > Discussion period: 1 month > > Thanks, > Edward > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mgsloan at gmail.com Sat May 2 22:46:05 2015 From: mgsloan at gmail.com (Michael Sloan) Date: Sat, 2 May 2015 15:46:05 -0700 Subject: Template Haskell changes to names and package keys In-Reply-To: <20150502093814.GA1771@24f89f8c-e6a1-4e75-85ee-bb8a3743bb9f> References: <1430500118-sup-420@sabre> <20150502093814.GA1771@24f89f8c-e6a1-4e75-85ee-bb8a3743bb9f> Message-ID: On Sat, May 2, 2015 at 2:38 AM, Bertram Felgenhauer < bertram.felgenhauer at googlemail.com> wrote: > Hi Michael, > > > Being able to get all the packages that have a particular name sounds > > good! Note that if "-hide-package" / "-hide-all-packages" is used, I > > wouldn't want those packages in the results. > > Same here. > > > Like Dan, I'm a little less keen on #1. Here's why: > > > [...] > > > > * I couldn't find any examples of its usage that would be broken by this > > new semantics. 
I've done a search on github[1] to find usages of > PkgName, > > and I only found one use[2] that uses PkgName which would be affected by > > this change. This example is in the TH lib itself, and so would > hopefully > > be fixed by this change. > > Did you find any uses where an existing PkgName is reused to construct > a new name? I believe this would be an important data point for deciding > whether renaming the constructor is a good idea or not. Basically I'm > asking whether the majority of uses of PkgName "out in the wild" right > now is correct or incorrect. > I didn't search for that, since I was searching for explicit uses of PkgName. The results for NameG are pretty interesting, though: https://github.com/search?l=haskell&q=NameG&type=Code&utf8=%E2%9C%93 It seems like most of the results are either `Lift`ing a PkgName or re-using a PkgName from another name. However, there are a fair number of uses which explicitly construct the name, though. In particular, there are a fair number of usages of this to generate references to the tuple constructors in GHC.Prim. Interestingly, these usages of (mkPkgName "ghc-prim") don't have version numbers at all. This implies that the behavior is to select the version of the package which is currently being used. Considering how much more convenient this is than manually looking up package names, I'm strongly in favor of keeping PkgName and its current behavior, and extending it to handle package keys. Ideally, also documenting the different formats it expects. In particular, "name", "name-version", and "name-version-key" (I'm guessing). The issue here is that we may be linking against two different packages which are named the same thing, right? This was always a possibility, and having package keys just makes it so that we need the ability to disambiguate packages beyond version numbers. 
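The "re-using a PkgName from another name" pattern mentioned above can be sketched with the real template-haskell API: namePackage extracts whatever package string GHC itself attached to an already-resolved name, which sidesteps guessing the key format entirely (the choice of ''Bool here is just an example):

```haskell
{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH.Syntax (namePackage)

-- Borrow the package string from a name the compiler already resolved,
-- instead of hard-coding "base-4.7.0.1" or a package key by hand.
-- The exact string returned depends on the GHC version (plain name,
-- name-version, or a mangled key), which is precisely why reuse beats
-- synthesis here.
boolPackage :: Maybe String
boolPackage = namePackage ''Bool
```

A TH splice can then thread that string into any new names it builds, staying correct across the 7.10 representation change.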
So, how about for this rare-ish case where we're linking against two different versions of the foo package, and the TH manually generates a (PkgName "foo"), we generate an ambiguity error. If we force users to search for and specify fully unambiguous package names then that pushes the boilerplate of package selection out to TH usage. It seems like pretty much everyone will want "select the version of the package that I'm using." That said, I'm also in favor of adding the function mentioned in #2, for those cases where the default isn't sufficient (I'm anticipating very few of these). -Michael Cheers, > > Bertram > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From takenobu.hs at gmail.com Sun May 3 07:42:12 2015 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Sun, 3 May 2015 16:42:12 +0900 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: Message-ID: Hi, a little information. General CPUs use the term "FMA" for the "Mul + Add" operation and implement special instructions. x86 (AMD64, Intel64) has FMA instructions: FMADD132PD, ... ARM has FMA instructions: VMLA, ... In DSP culture, it's called "MAC (multiply-accumulate)". Traditional DSPs have MAC instructions: TI's C67 has MAC instructions: MAC, ... If you map the "fma" function to a CPU's raw instruction, be careful about rounding and saturation modes. BTW, the "FMA" operation is defined in the IEEE 754-2008 standard. Regards, Takenobu 2015-04-29 18:19 GMT+09:00 Henning Thielemann : > > On Wed, 29 Apr 2015, Levent Erkok wrote: > > This proposal is very much in the spirit of the earlier proposal on >> adding new float/double functions; for >> instance see here: >> https://mail.haskell.org/pipermail/libraries/2014-April/022667.html >> > > Btw.
what was the final decision with respect to log1p and expm1? > > I suggest that the decision for 'fma' will be made consistently with > 'log1p' and 'expm1'. > > "fma" (a.k.a. fused-multiply-add) is one of those functions; which is the >> workhorse in many HPC applications. >> The idea is to multiply two floats and add a third with just one >> rounding, and thus preserving more precision. >> There are a multitude of applications for this operation in engineering >> data-analysis, and modern processors >> come with custom implementations and a lot of hardware to support it >> natively. >> > > Ok, the proposal is about increasing precision. One could also hope that a > single fma operation is faster than separate addition and multiplication > but as far as I know, fma can even be slower since it has more data > dependencies. > > I think the proposal is rather straightforward, and should be >> noncontroversial. To wit, we shall add a new >> method to the RealFloat class: >> >> class (RealFrac a, Floating a) => RealFloat a where >> ... >> fma :: a -> a -> a -> a >> > > > RealFloat excludes Complex. > > > There should be no default definitions; as an incorrect (two-rounding >> version) would essentially beat the purpose of having fma in the first >> place. >> > > I just read again the whole expm1 thread and default implementations with > possible loss of precision seem to be the best option. This way, one can > mechanically replace all occurrences of (x*y+z) by (fma x y z) and will not > make anything worse. Types with a guaranteed high precision should be put > in a Fused class. > > > While the name "fma" is well-established in the arithmetic/hardware >> community and in the C-library, we can also go with "fusedMultiplyAdd," if >> that is deemed more clear. >> > > Although I like descriptive names, the numeric classes already contain > mostly abbreviations (abs, exp, sin, tanh, ...) Thus I would prefer the > abbreviation for consistency. Btw. 
in DSP 56002 the same operation is > called MAC (multiply-accumulate). > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Sun May 3 14:27:19 2015 From: david.feuer at gmail.com (David Feuer) Date: Sun, 3 May 2015 10:27:19 -0400 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: Message-ID: We have (almost) no tradition of using CPU instruction names for our own function, and I don't see why now is the time to start. To take a recent example, we have countLeadingZeros and countTrailingZeros rather than clz, ctz, ctlz, cttz, bsf, bsr, etc. We also have popCount instead of popcnt, and use shiftR and shiftL instead of things like shl, shr, sla, sal, sra, sar, etc. Thus I am -1 on calling this thing fma. multiplyAdd seems more reasonable to me. On Sun, May 3, 2015 at 3:42 AM, Takenobu Tani wrote: > Hi, > > little information. > > General CPUs use term of "FMA" for "Mul + Add" operation > and implement special instructions. > > x86(AMD64, Intel64) has FMA instructions: > FMADD132PD, ... > > ARM has FMA instructions: > VMLA, ... > > > In DSP culture, it's called "MAC(Multiply and Accumulator)". > Traditional DSPs have MAC(Multiply and Accumulator) instructions: > > TI's C67 has MAC instructions: > MAC, ... > > > If you map "fma" function to cpu's raw instruction, > be careful for rounding and saturation mode. > > > BTW, "FMA" operation is defined in IEEE754-2008 standard. 
> > > Regards, > Takenobu > > 2015-04-29 18:19 GMT+09:00 Henning Thielemann < > lemming at henning-thielemann.de>: > >> >> On Wed, 29 Apr 2015, Levent Erkok wrote: >> >> This proposal is very much in the spirit of the earlier proposal on >>> adding new float/double functions; for >>> instance see here: >>> https://mail.haskell.org/pipermail/libraries/2014-April/022667.html >>> >> >> Btw. what was the final decision with respect to log1p and expm1? >> >> I suggest that the decision for 'fma' will be made consistently with >> 'log1p' and 'expm1'. >> >> "fma" (a.k.a. fused-multiply-add) is one of those functions; which is >>> the workhorse in many HPC applications. >>> The idea is to multiply two floats and add a third with just one >>> rounding, and thus preserving more precision. >>> There are a multitude of applications for this operation in engineering >>> data-analysis, and modern processors >>> come with custom implementations and a lot of hardware to support it >>> natively. >>> >> >> Ok, the proposal is about increasing precision. One could also hope that >> a single fma operation is faster than separate addition and multiplication >> but as far as I know, fma can even be slower since it has more data >> dependencies. >> >> I think the proposal is rather straightforward, and should be >>> noncontroversial. To wit, we shall add a new >>> method to the RealFloat class: >>> >>> class (RealFrac a, Floating a) => RealFloat a where >>> ... >>> fma :: a -> a -> a -> a >>> >> >> >> RealFloat excludes Complex. >> >> >> There should be no default definitions; as an incorrect (two-rounding >>> version) would essentially beat the purpose of having fma in the first >>> place. >>> >> >> I just read again the whole expm1 thread and default implementations with >> possible loss of precision seem to be the best option. This way, one can >> mechanically replace all occurrences of (x*y+z) by (fma x y z) and will not >> make anything worse. 
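A minimal Haskell sketch of the defaulted approach just described. The class name FusedMultiplyAdd is hypothetical — the thread has not settled on a name (fma, mulAdd, multiplyAdd) or on whether the method belongs in Num, RealFloat, or a new class — and the default is deliberately the lossy two-rounding form:

```haskell
-- Sketch only: the defaulted method discussed in this thread.
-- Hardware-backed instances would override the default with a
-- genuine single-rounding implementation.
class Num a => FusedMultiplyAdd a where
  -- | @fma x y z@ computes @x*y + z@, ideally with a single rounding.
  fma :: a -> a -> a -> a
  fma x y z = x * y + z  -- default: two roundings, may lose precision

instance FusedMultiplyAdd Double  -- uses the lossy default
```

With such a default, any occurrence of `x*y + z` can be mechanically rewritten to `fma x y z` without making any result worse, which is the point made above.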
Types with a guaranteed high precision should be put >> in a Fused class. >> >> >> While the name "fma" is well-established in the arithmetic/hardware >>> community and in the C-library, we can also go with "fusedMultiplyAdd," if >>> that is deemed more clear. >>> >> >> Although I like descriptive names, the numeric classes already contain >> mostly abbreviations (abs, exp, sin, tanh, ...) Thus I would prefer the >> abbreviation for consistency. Btw. in DSP 56002 the same operation is >> called MAC (multiply-accumulate). >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> >> > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Sun May 3 20:22:00 2015 From: ezyang at mit.edu (Edward Z. Yang) Date: Sun, 03 May 2015 13:22:00 -0700 Subject: Template Haskell changes to names and package keys In-Reply-To: References: <1430500118-sup-420@sabre> <20150502093814.GA1771@24f89f8c-e6a1-4e75-85ee-bb8a3743bb9f> Message-ID: <1430681340-sup-8728@sabre> > I didn't search for that, since I was searching for explicit uses of > PkgName. The results for NameG are pretty interesting, though: > > https://github.com/search?l=haskell&q=NameG&type=Code&utf8=%E2%9C%93 > > It seems like most of the results are either `Lift`ing a PkgName or > re-using a PkgName from another name. However, there are a fair number of > uses which explicitly construct the name, though. In particular, there are > a fair number of usages of this to generate references to the tuple > constructors in GHC.Prim. > > Interestingly, these usages of (mkPkgName "ghc-prim") don't have version > numbers at all. 
This implies that the behavior is to select the version of > the package which is currently being used. Well, the reason they don't have version numbers is because ghc-prim (and a few other packages) are internally considered wired-in packages, so their package keys are always some specific, hard-coded value. There is no "implied behavior", it's just an implementation detail that this TH API happens to expose. (There is one concrete consequence of this, which is that it's not possible to have two versions of ghc-prim simultaneously in the same executable.) Prior to 7.10, this manifested as ghc-prim having PkgName "ghc-prim", but lens having PkgName "lens-0.3.2". > So, how about for this rare-ish case where there we're linking against two > different versions of the foo package, and the TH manually generates a > (PkgName "foo"), we generate an ambiguity error. > > If we force users to search for and specify fully unambiguous package names > then that pushes the boilerplate of package selection out to TH usage. It > seems like pretty much everyone will want "select the version of the > package that I'm using." That said, I'm also in favor of adding the > function mentioned in #2, for those cases where the default isn't > sufficient (I'm anticipating very few of these). I looked at the results. https://github.com/bitemyapp/hackage-packages/blob/fd9649f426254c0581bd976789a1c384eda0e3c9/lighttpd-conf-0.4/src/Lighttpd/Conf/Instances/Lift.hs This is just very wrong, they should clearly switch to thisPackageKey when we have it https://github.com/s9gf4ult/letqq/blob/472d665cf64ec60b5f02f200c2b4c54fc6580f3f/src/THTH.hs I'm not really sure what this is supposed to be, besides some meta-meta-programming library. It's clearly prototype code and I don't think they will mind if it stops working. 
https://github.com/suhailshergill/liboleg/blob/57673d01c66ab9f284579a40aed059ed4617ce6c/Data/Symbolic/TypedCodeAux.hs https://github.com/bitemyapp/hackage-packages/blob/fd9649f426254c0581bd976789a1c384eda0e3c9/liboleg-2010.1.10.0/Data/Symbolic/TypedCodeAux.hs So, this piece of code is something that really ought to be in the standard library, if it isn't already. Basically, the problem is that when you quote an identifier, e.g. [| foo |], you get the *un-renamed* syntax (a variable foo) rather than the renamed syntax (somepkg:Data.Foo.foo). This is very useful, too useful to be in another library. https://github.com/andrep/hoc/blob/bfd0391bf0dab4bda5d6a5f7845fab19f8e4b9a9/hoc/HOC/HOC/TH.hs https://github.com/mokus0/hoc/blob/b6fa3906b8e1e61bed0435a8d497a132e5905227/hoc/HOC/HOC/TH.hs Wouldn't be broken. https://github.com/DavidAlphaFox/ghc/blob/6a5d9fa147b8abc370159d87e9c3dac87171cbd5/libraries/template-haskell/Language/Haskell/TH/Quote.hs This (and the next page of results) are from the template-haskell library. We can fix these ourselves when we make these changes. To sum up, I still think we should rename, and I think we should also add some functionality to the standard library for getting the renamed version of a TH syntax tree. Edward From erkokl at gmail.com Sun May 3 21:11:26 2015 From: erkokl at gmail.com (Levent Erkok) Date: Sun, 3 May 2015 14:11:26 -0700 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: Message-ID: Thank you for all the feedback on this proposal. Based on the feedback, I came to conclude that the original idea did not really capture what I really was after, and hence I think this proposal needs to be shelved for the time being. I want to summarize the points made so far: * Almost everyone agrees that we should have this functionality available. (But see below for the direction I want to take it in.) * There's some disagreement on the name chosen, but I think this is less important for the time being. 
* The biggest gripe is where does "fma" really belong. Original suggestion was 'RealFloat', but people pointed out that 'Num' is just as good a place. * Most folks want a default definition, and see "fma" as an optimization. It is these last two points actually that convinced me this proposal is not really what I want to have. I do not see "fma" as an optimization. In particular, I'd be very concerned if the compiler substituted "fma x y z" for "x*y+z". The entire reason why IEEE754 has an fma operation is because those two expressions have different values in general. By the same token, I'm also against providing a default implementation. I see this not as an increased-precision issue, but rather a semantic one; where "x*y+z" and "fma x y z" *should* produce two different values, per the IEEE754 spec. It's not really an optimization, but how floating-point values work. In that sense "fma" is a separate operation that's related to multiplication and addition, but is not definable in those terms alone. Having said that, it was also pointed out that for non-float values this can act as an optimization. (Modular arithmetic was given as an example.) I'd think that functionality is quite different than the original proposal, and perhaps should be tackled separately. My original proposal was not aiming for that particular use case. My original motivation was to give Haskell access to the floating-point circuitry that hardware-manufacturers are putting a lot of effort and energy into. It's a shame that modern processors provide a ton of instructions around floating-point operations, but such operations are simply very hard to use from many high-level languages, including Haskell. Two other points were raised, that also convinced me to seek an alternative solution: * Tikhon Jelvis suggested these functions should be put in a different class, which suggests that we're following IEEE754, and not some idealized model of numbers.
I think this suggestion is spot on, and is very much in line with what I wanted to have. * Takenobu Tani kindly pointed out that a discussion of floats in the absence of rounding-modes is a moot one, as the entire semantics is based on rounding. Haskell simply picks "RoundNearestTiesToEven," but there are 4 other rounding modes defined by IEEE754, and I think we need a way to access those from Haskell in a convenient way. Based on this analysis, I'm withdrawing the original proposal. I think fma and other floating-point arithmetic operations are very important to support properly, but it should not be done by tacking them on to Num or RealFloat; but rather in a new class that also considers rounding-mode properly. The advantage of the "separate" class approach is, of course, I (or someone else) can create such a class and push it on to hackage, using FFI to delegate the task of implementation to the land-of-C, by supporting rounding modes and other floating-point weirdness appropriately. Once that class stabilizes and its details are ironed out, then we can imagine cooperating with GHC folks to actually bypass the FFI and directly generate native code whenever possible. This is the direction I intend to move on. Please drop me a line if you'd like to help out and/or have any feedback. Thanks! -Levent. On Sun, May 3, 2015 at 7:27 AM, David Feuer wrote: > We have (almost) no tradition of using CPU instruction names for our own > function, and I don't see why now is the time to start. To take a recent > example, we have countLeadingZeros and countTrailingZeros rather than clz, > ctz, ctlz, cttz, bsf, bsr, etc. We also have popCount instead of popcnt, > and use shiftR and shiftL instead of things like shl, shr, sla, sal, sra, > sar, etc. Thus I am -1 on calling this thing fma. multiplyAdd seems more > reasonable to me. > > On Sun, May 3, 2015 at 3:42 AM, Takenobu Tani > wrote: >> Hi, >> little information.
>> >> General CPUs use term of "FMA" for "Mul + Add" operation >> and implement special instructions. >> >> x86(AMD64, Intel64) has FMA instructions: >> FMADD132PD, ... >> >> ARM has FMA instructions: >> VMLA, ... >> >> >> In DSP culture, it's called "MAC(Multiply and Accumulator)". >> Traditional DSPs have MAC(Multiply and Accumulator) instructions: >> >> TI's C67 has MAC instructions: >> MAC, ... >> >> >> If you map "fma" function to cpu's raw instruction, >> be careful for rounding and saturation mode. >> >> >> BTW, "FMA" operation is defined in IEEE754-2008 standard. >> >> >> Regards, >> Takenobu >> >> 2015-04-29 18:19 GMT+09:00 Henning Thielemann < >> lemming at henning-thielemann.de>: >> >>> >>> On Wed, 29 Apr 2015, Levent Erkok wrote: >>> >>> This proposal is very much in the spirit of the earlier proposal on >>>> adding new float/double functions; for >>>> instance see here: >>>> https://mail.haskell.org/pipermail/libraries/2014-April/022667.html >>>> >>> >>> Btw. what was the final decision with respect to log1p and expm1? >>> >>> I suggest that the decision for 'fma' will be made consistently with >>> 'log1p' and 'expm1'. >>> >>> "fma" (a.k.a. fused-multiply-add) is one of those functions; which is >>>> the workhorse in many HPC applications. >>>> The idea is to multiply two floats and add a third with just one >>>> rounding, and thus preserving more precision. >>>> There are a multitude of applications for this operation in engineering >>>> data-analysis, and modern processors >>>> come with custom implementations and a lot of hardware to support it >>>> natively. >>>> >>> >>> Ok, the proposal is about increasing precision. One could also hope that >>> a single fma operation is faster than separate addition and multiplication >>> but as far as I know, fma can even be slower since it has more data >>> dependencies. >>> >>> I think the proposal is rather straightforward, and should be >>>> noncontroversial. 
To wit, we shall add a new >>>> method to the RealFloat class: >>>> >>>> class (RealFrac a, Floating a) => RealFloat a where >>>> ... >>>> fma :: a -> a -> a -> a >>>> >>> >>> >>> RealFloat excludes Complex. >>> >>> >>> There should be no default definitions; as an incorrect (two-rounding >>>> version) would essentially beat the purpose of having fma in the first >>>> place. >>>> >>> >>> I just read again the whole expm1 thread and default implementations >>> with possible loss of precision seem to be the best option. This way, one >>> can mechanically replace all occurrences of (x*y+z) by (fma x y z) and will >>> not make anything worse. Types with a guaranteed high precision should be >>> put in a Fused class. >>> >>> >>> While the name "fma" is well-established in the arithmetic/hardware >>>> community and in the C-library, we can also go with "fusedMultiplyAdd," if >>>> that is deemed more clear. >>>> >>> >>> Although I like descriptive names, the numeric classes already contain >>> mostly abbreviations (abs, exp, sin, tanh, ...) Thus I would prefer the >>> abbreviation for consistency. Btw. in DSP 56002 the same operation is >>> called MAC (multiply-accumulate). >>> _______________________________________________ >>> Libraries mailing list >>> Libraries at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>> >>> >> >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> >> > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From roma at ro-che.info Sun May 3 22:03:26 2015 From: roma at ro-che.info (Roman Cheplyaka) Date: Mon, 04 May 2015 01:03:26 +0300 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: Message-ID: <55469B2E.2000501@ro-che.info> Thanks for taking time to write this, Levent. Now that you explain this in such detail, it's clear why implementing fma in terms of add and multiply is wrong. I also have to admit that upon the first reading of your proposal, I confused RealFloat with RealFrac. Since RealFloat should only be implemented by actual floating-point types, I retract my earlier objection. And the idea of putting the IEEE754-specific functions in a separate class (or even module) sounds reasonable, too. On 04/05/15 00:11, Levent Erkok wrote: > Thank you for all the feedback on this proposal. Based on the feedback, > I came to conclude that the original idea did not really capture what I > really was after, and hence I think this proposal needs to be shelved > for the time being. > > I want to summarize the points made so far: > > * Almost everyone agrees that we should have this functionality > available. (But see below for the direction I want to take it in.) > * There's some disagreement on the name chosen, but I think this is > less important for the time being. > * The biggest gripe is where does "fma" really belong. Original > suggestion was 'RealFloat', but people pointed 'Num' is just a good > place as well. > * Most folks want a default definition, and see "fma" as an > optimization. > > It is these last two points actually that convinced me this proposal is > not really what I want to have. I do not see "fma" as an optimization. > In particular, I'd be very concerned if the compiler substituted "fma x > y z" for "x*y+z". The entire reason why IEEE754 has an fma operation is > because those two expressions have different values in general. By the > same token, I'm also against providing a default implementation. 
I see > this not as an increased-precision issue, but rather a semantic one; > where "x*y+z" and "fma x y z" *should* produce two different values, per > the IEEE754 spec. It's not really an optimization, but how > floating-point values work. In that sense "fma" is a separate operation > that's related to multiplication and addition, but is not definable in > those terms alone. > > Having said that, it was also pointed out that for non-float values this > can act as an optimization. (Modular arithmetic was given as an > example.) I'd think that functionality is quite different than the > original proposal, and perhaps should be tackled separately. My original > proposal was not aiming for that particular use case. > > My original motivation was to give Haskell access to the floating-point > circuitry that hardware-manufacturers are putting a lot of effort and > energy into. It's a shame that modern processors provide a ton of > instructions around floating-point operations, but such operations are > simply very hard to use from many high-level languages, including Haskell. > > Two other points were raised, that also convinced me to seek an > alternative solution: > > * Tikhon Jelvis suggested these functions should be put in a > different class, which suggests that we're following IEEE754, and not > some idealized model of numbers. I think this suggestion is spot on, and > is very much in line with what I wanted to have. > * Takebonu Tani kindly pointed that a discussion of floats in the > absence of rounding-modes is a moot one, as the entire semantics is > based on rounding. Haskell simply picks "RoundNearestTiesToEven," but > there are 4 other rounding modes defined by IEEE754, and I think we need > a way to access those from Haskell in a convenient way. > > Based on this analysis, I'm withdrawing the original proposal. 
I think > fma and other floating-point arithmetic operations are very important to > support properly, but it should not be done by tacking them on to Num or > RealFloat; but rather in a new class that also considers rounding-mode > properly. > > The advantage of the "separate" class approach is, of course, I (or > someone else) can create such a class and push it on to hackage, using > FFI to delegate the task of implementation to the land-of-C, by > supporting rounding modes and other floating-point weirdness > appropriately. Once that class stabilizes and its details are ironed > out, then we can imagine cooperating with GHC folks to actually bypass > the FFI and directly generate native code whenever possible. > > This is the direction I intend to move on. Please drop me a line if > you'd like to help out and/or have any feedback. > > Thanks! > > -Levent. > > > On Sun, May 3, 2015 at 7:27 AM, David Feuer > wrote: > > We have (almost) no tradition of using CPU instruction names for our > own function, and I don't see why now is the time to start. To take > a recent example, we have countLeadingZeros and countTrailingZeros > rather than clz, ctz, ctlz, cttz, bsf, bsr, etc. We also have > popCount instead of popcnt, and use shiftR and shiftL instead of > things like shl, shr, sla, sal, sra, sar, etc. Thus I am -1 on > calling this thing fma. multiplyAdd seems more reasonable to me. > > On Sun, May 3, 2015 at 3:42 AM, Takenobu Tani > wrote: > > Hi, > > little information. > > General CPUs use term of "FMA" for "Mul + Add" operation > and implement special instructions. > > x86(AMD64, Intel64) has FMA instructions: > FMADD132PD, ... > > ARM has FMA instructions: > VMLA, ... > > > In DSP culture, it's called "MAC(Multiply and Accumulator)". > Traditional DSPs have MAC(Multiply and Accumulator) instructions: > > TI's C67 has MAC instructions: > MAC, ... > > > If you map "fma" function to cpu's raw instruction, > be careful for rounding and saturation mode. 
> > > BTW, "FMA" operation is defined in IEEE754-2008 standard. > > > Regards, > Takenobu > > 2015-04-29 18:19 GMT+09:00 Henning Thielemann > >: > > > On Wed, 29 Apr 2015, Levent Erkok wrote: > > This proposal is very much in the spirit of the earlier > proposal on adding new float/double functions; for > instance see > here: https://mail.haskell.org/pipermail/libraries/2014-April/022667.html > > > Btw. what was the final decision with respect to log1p and > expm1? > > I suggest that the decision for 'fma' will be made > consistently with 'log1p' and 'expm1'. > > "fma" (a.k.a. fused-multiply-add) is one of those > functions; which is the workhorse in many HPC applications. > The idea is to multiply two floats and add a third with > just one rounding, and thus preserving more precision. > There are a multitude of applications for this operation > in engineering data-analysis, and modern processors > come with custom implementations and a lot of hardware > to support it natively. > > > Ok, the proposal is about increasing precision. One could > also hope that a single fma operation is faster than > separate addition and multiplication but as far as I know, > fma can even be slower since it has more data dependencies. > > I think the proposal is rather straightforward, and > should be noncontroversial. To wit, we shall add a new > method to the RealFloat class: > > class (RealFrac a, Floating a) => RealFloat a where > ... > fma :: a -> a -> a -> a > > > > RealFloat excludes Complex. > > > There should be no default definitions; as an incorrect > (two-rounding version) would essentially beat the > purpose of having fma in the first place. > > > I just read again the whole expm1 thread and default > implementations with possible loss of precision seem to be > the best option. This way, one can mechanically replace all > occurrences of (x*y+z) by (fma x y z) and will not make > anything worse. Types with a guaranteed high precision > should be put in a Fused class. 
> > > While the name "fma" is well-established in the > arithmetic/hardware community and in the C-library, we > can also go with "fusedMultiplyAdd," if that is deemed > more clear. > > > Although I like descriptive names, the numeric classes > already contain mostly abbreviations (abs, exp, sin, tanh, > ...) Thus I would prefer the abbreviation for consistency. > Btw. in DSP 56002 the same operation is called MAC > (multiply-accumulate). -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From mwm at mired.org Sun May 3 23:05:06 2015 From: mwm at mired.org (Mike Meyer) Date: Sun, 3 May 2015 18:05:06 -0500 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: Message-ID: On Sun, May 3, 2015 at 4:11 PM, Levent Erkok wrote: * Tikhon Jelvis suggested these functions should be put in a different class, which suggests that we're following IEEE754, and not some idealized model of numbers. I think this suggestion is spot on, and is very much in line with what I wanted to have. This is very much in line with a suggestion I've been toying with for a long time. Basically, we have three different ideas for how floats should behave, and the current implementation isn't any of them. So I've been thinking that we ought to deal with this by moving Float out of Prelude - or at least large chunks of it. The three different models are: 1) Real numbers. We aren't going to get those. 2) IEEE Floats. This is what we've got, except as noted, there are lots of things that come with this that we don't provide. 3) Floats that obey the laws of Num. We don't get that, mostly because getting #2 breaks things. The breakage of #3 creates behavior that's surprising - at least to people who aren't familiar with IEEE Floats.
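The failure of model #3 is easy to see concretely; a quick check with plain Double (nothing thread-specific assumed):

```haskell
-- Associativity of (+) already fails for Double, which is why
-- "floats that obey the laws of Num" is not what we have today.
main :: IO ()
main = do
  print ((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3 :: Double))  -- False
  print ((0.1 + 0.2) + 0.3 :: Double)                       -- 0.6000000000000001
  print (0.1 + (0.2 + 0.3) :: Double)                       -- 0.6
```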
So the proposal I've been toying with was something along the lines of breaking RealFloat up along class lines. Those classes where RealFloat obeyed the class laws and IEEE Float behavior would stay in RealFloat. The rest would move out, and could be gotten by importing either Data.Float.IEEE or Data.Float.Num (or some such). Ideally, this will leave enough floating point behavior in the Prelude that doing simple calculations would just work - at least as well as it ever did, anyway. When you start doing things that can currently generate surprising results, you will need to import one of the two options. Figuring out which one means there's a chance you'll also figure out why you sometimes get those surprising results. From carter.schonwald at gmail.com Sun May 3 23:50:08 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sun, 3 May 2015 19:50:08 -0400 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: Message-ID: How would you have an implementation of finite precision floating point that has the "expected" exact algebraic laws for * and +? I would argue that Float and Double do satisfy a form of the standard algebraic laws where equality is approximate, e.g. |(a+(b+c)) - ((a+b)+c)| <= \epsilon, where epsilon is some constant multiple of max(ulp(a),ulp(b),ulp(c)). (A similar idea applies to pretty much any other algebraic law you can state, such as distributivity, etc.) I do think that it'd be useful if the RealFloat class provided an ulp function (unit of least precision), which is available as part of any IEEE-compliant C float library. There are MANY computable number representations where the *exact* algebraic laws don't hold, but where this *approximate* form does hold, providing some notion of bounded forwards/backwards relative/absolute error guarantee in a particularly strong way. I think we should figure out how to articulate laws that play nice for both the *exact* and *approximate* universes.
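The approximate associativity law above can be written down directly. Since base provides no ulp function, the tolerance below is an illustrative stand-in, not the ulp-based bound being proposed:

```haskell
-- Approximate associativity for Double, in the spirit of the law above.
-- The epsilon is a crude relative tolerance standing in for a
-- max(ulp a, ulp b, ulp c) bound, which base cannot express today.
approxAssoc :: Double -> Double -> Double -> Bool
approxAssoc a b c =
  abs ((a + (b + c)) - ((a + b) + c)) <= eps
  where eps = 1e-12 * maximum [1, abs a, abs b, abs c]
```

A property tester like QuickCheck could then check `approxAssoc` over random inputs, where the exact law `(a + (b + c)) == ((a + b) + c)` would fail immediately.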
On Sun, May 3, 2015 at 7:05 PM, Mike Meyer wrote: > On Sun, May 3, 2015 at 4:11 PM, Levent Erkok wrote: > * Tikhon Jelvis suggested these functions should be put in a different > class, which suggests that we're following IEEE754, and not some idealized > model of numbers. I think this suggestion is spot on, and is very much in > line with what I wanted to have. > > This is very much in line with a suggestion I've been toying with for > a long time. Basically, we have three different ideas for how floats > should behave, and the current implementation isn't any of them. So > I've been thinking that we ought to deal with this by moving Float out > of Prelude - or at least large chunks of it. > > The three different models are: > > 1) Real numbers. We aren't going to get those. > > 2) IEEE Floats. This is what we've got, except as noted, there are > lots of things that come with this that we don't provide.
> > > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mwm at mired.org Mon May 4 00:13:42 2015 From: mwm at mired.org (Mike Meyer) Date: Sun, 3 May 2015 19:13:42 -0500 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: Message-ID: On Sun, May 3, 2015 at 6:50 PM, Carter Schonwald wrote: > .... how would you have an implementation of finite precision floating > point that has the "expected" exact algebraic laws for * and +? > That's model #1 that we can't have. So you don't. > I would argue that Float and Double do satisfy a form of the standard > algebric laws where equality is approximate. > > eg (a+(b+c)) - ((a+b)+c) <= \epsilon, where epsilon is some constant > multiple of max(ulp(a),ulp(b),ulp(c)). > (a similar idea applies to pretty much any other algebraic law you can > state, such as distributivity etc) > So how do you fix the fact that any comparison with a NaN and a non-NaN is false? Among other IEEE oddities. > I do think that it'd be useful if the RealFloat class provided an ulp > function (unit of least precision), which is available as part of any IEEE > compliant c float library. > > there are MANY computable number represntations where the *exact* > algebraic laws dont hold, but this *approximate* form which provides some > notion of bounded forwards/backwards relative/absolute error bound > guarantee in a particularly strong way. > True. That's the root of the problem the proposal is trying to solve. > i think we should figure out articulating laws that play nice for both the > *exact* and *approximate* universes. > We also need laws that play nice for the IEEE universe, because people doing serious numerical work want that one. 
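Mike's NaN point is easy to verify with plain Double; every ordering comparison involving NaN is False, which breaks reflexivity of (==) and totality of (<):

```haskell
-- NaN makes Double's Eq and Ord instances violate their usual laws.
main :: IO ()
main = do
  let nan = 0 / 0 :: Double  -- IEEE754 quiet NaN
  print (nan == nan)  -- False: (==) is not reflexive
  print (nan < 1)     -- False
  print (nan > 1)     -- False: neither (<) nor (>) holds, yet nan /= 1
```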
I believe you will wind up with two different sets of laws, which is why I proposed taking the parts that don't agree out of the Prelude, and letting users import the ones they want to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Mon May 4 08:14:23 2015 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 04 May 2015 10:14:23 +0200 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: Message-ID: <1430727263.1930.9.camel@joachim-breitner.de> Hi, On Sunday, 03.05.2015, at 14:11 -0700, Levent Erkok wrote: > Based on this analysis, I'm withdrawing the original proposal. I think > fma and other floating-point arithmetic operations are very important > to support properly, but it should not be done by tacking them on to > Num or RealFloat; but rather in a new class that also considers > rounding-mode properly. > does it really have to be a class? How much genuinely polymorphic code is there out there that actually requires this precise handling of precision? Have you considered adding it as monomorphic functions fmaDouble, fmaFloat etc. on hackage, using FFI? Then those who need these functions can start to use them. Furthermore you can start getting the necessary primops supported in GHC, and have your library transparently use them when available. And only then, when we have the implementation in place and actual users, we can evaluate whether we need an abstract class for this. Greetings, Joachim -- Joachim "nomeata" Breitner mail at joachim-breitner.de - http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de - GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From gale at sefer.org Mon May 4 10:00:09 2015 From: gale at sefer.org (Yitzchak Gale) Date: Mon, 4 May 2015 13:00:09 +0300 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: Message-ID: Levent Erkok wrote: > ...I think this proposal needs to be shelved for the time being. Nevertheless, I vote for doing it now. A better, more featureful, and more principled approach to FP is definitely needed. It would be great if we could tackle that and finally solve it - and I think we can. But that's a huge issue which has been discussed extensively in the past, and orthogonal to Levent's proposal. In the meantime, adding new functions that provide access to more FP functionality without adding any significant new weirdness are welcome, and will naturally flow into whatever future solution to the broader FP issue we implement. It makes little difference whether or not we provide a bad but working default implementation; my vote is to provide it. It will prevent breakage in case someone happens to have implemented a manual RealFloat instance out there somewhere, and it won't affect the standard instances because we'll provide implementations for those. Obviously a clear explanatory Haddock comment would be required. Even better, trigger a warning if an instance does not provide an explicit implementation, but I'm not sure if that's possible. I'm still in favor of doing Levent's proposal now even if the consensus is to omit the default. I vote for the usual practice of a human-readable name, but don't let bikeshedding hold this back.
Thanks, Yitz From merijn at inconsistent.nl Mon May 4 11:49:21 2015 From: merijn at inconsistent.nl (Merijn Verstraaten) Date: Mon, 4 May 2015 13:49:21 +0200 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: Message-ID: <00052C91-86EE-43F7-87A5-66B57F1AC8CB@inconsistent.nl> I would suggest adding the relevant high-precision versions as direct functions on Float/Double and then add the "better" versions as part of Num as was suggested. Anyone who *needs* the precision can then get it by using the functions directly and forcing a specific type (since I don't think polymorphic code and this sort of precision demands fit well together). This way it's *possible* to write code with the required precision for Float/Double and anyone using Num gets an optional precision boost. Cheers, Merijn > On 4 May 2015, at 12:00, Yitzchak Gale wrote: > > Levent Erkok wrote: >> ...I think this proposal needs to be shelved for the time being. > > Nevertheless, I vote for doing it now. > > A better, more featureful, and more principled approach to > FP is definitely needed. It would be great if we could tackle > that and finally solve it - and I think we can. But that's a > huge issue which has been discussed extensively in the > past, and orthogonal to Levant's proposal. > > In the meantime, adding new functions that provide access > to more FP functionality without adding any significant > new weirdness are welcome, and will naturally flow into > whatever future solution to the broader FP issue we > implement. > > It makes little difference whether or not we provide a bad > but working default implementation; my vote is to > provide it. It will prevent breakage in case someone > happens to have implemented a manual RealFloat instance > out there somewhere, and it won't affect the standard > instances because we'll provide implementations for > those. Obviously a clear explanatory Haddock comment > would be required. 
Even better, trigger a warning if an > instance does not provide an explicit implementation, but > I'm not sure if that's possible. I'm still in favor of doing > Levant's proposal now even if the consensus is to omit > the default. > > I vote for the usual practice of a human-readable > name, but don't let bikeshedding hold this back. > > Thanks, > Yitz > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From eir at cis.upenn.edu Mon May 4 11:54:46 2015 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Mon, 4 May 2015 07:54:46 -0400 Subject: Template Haskell changes to names and package keys In-Reply-To: <1430500118-sup-420@sabre> References: <1430500118-sup-420@sabre> Message-ID: <5DCCF1B4-D0C6-4041-86D0-DD898D7EA0FD@cis.upenn.edu> Thanks for spearheading this update. I'm +1 on #1, in particular because I don't think TH should try too hard to have a stable API. I'm +0.5 on #2, because the interface as designed below is too weak. I'm +1 for a stronger interface. For example: packageVersionString :: Package -> String -- maybe use a newtype here? or the proper Cabal datatype? packageName :: Package -> String -- use the old PkgName here? or is that confusing? packageDependencies :: Package -> Q [Package] -- could be very useful in printing debugging information. Then, if you're writing a library that -- is sensitive to dependency versions, you can print this info out when you panic I'm -1 on #3 if it will break code. You can already get current package information through the `qLocation` function. One need I've had a few times is to be able to get a version number for the current package, just for UI nicety. 
It would be great if this change could support such an operation (like my packageVersionString, above). Thanks! Richard PS: I wonder if this proposal and debate wouldn't be easier to follow in wiki format. That way, the design could evolve over time... On May 1, 2015, at 1:09 PM, Edward Z. Yang wrote: > In GHC 7.10, we changed the internal representation of names to > be based on package keys (base_XXXXXX) rather than package IDs > (base-4.7.0.1), however, we forgot to update the Template Haskell > API to track these changes. This lead to some bugs in TH > code which was synthesizing names by using package name and > version directly, e.g. https://ghc.haskell.org/trac/ghc/ticket/10279 > > We now propose the following changes to the TH API in order to > track these changes: > > 1. Currently, a TH NameG contains a PkgName, defined as: > > newtype PkgName = PkgName String > > This is badly misleading, even in the old world order, since > these needed version numbers as well. We propose that this be > renamed to PkgKey: > > newtype PkgKey = PkgKey String > mkPackageKey :: String -> PackageKey > mkPackageKey = PkgKey > > 2. Package keys are somewhat hard to synthesize, so we also > offer an API for querying the package database of the GHC which > is compiling your code for information about packages. So, > we introduce a new abstract data type: > > data Package > packageKey :: Package -> PkgKey > > and some functions for getting packages: > > searchPackage :: String -- Package name > -> String -- Version > -> Q [Package] > > reifyPackage :: PkgKey -> Q Package > > We could add other functions (e.g., return all packages with a > package name). > > 3. Commonly, a user wants to get the package key of the current > package. 
Following Simon's suggestion, this will be done by > augmenting ModuleInfo: > > data ModuleInfo = > ModuleInfo { mi_this_mod :: Module -- new > , mi_imports :: [Module] } > > We'll also add a function for accessing the module package key: > > modulePackageKey :: Module -> PkgKey > > And a convenience function for accessing the current module: > > thisPackageKey :: Q PkgKey > thisPackageKey = fmap (modulePackageKey . mi_this_mod) qReifyModule > > thisPackage :: Q Package > thisPackage = reifyPackage =<< thisPackageKey > > Discussion period: 1 month > > Thanks, > Edward > > (apologies to cc'd folks, I sent from my wrong email address) > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries From carter.schonwald at gmail.com Mon May 4 13:07:33 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 4 May 2015 09:07:33 -0400 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: <00052C91-86EE-43F7-87A5-66B57F1AC8CB@inconsistent.nl> References: <00052C91-86EE-43F7-87A5-66B57F1AC8CB@inconsistent.nl> Message-ID: Agreed. It will be a boon for dot product powered algorithms every where. There's a valid argument for in parallel exploration of systematically better abstractions for the future, but that shouldn't preclude making core tooling and primops a bit better in time for 7.12 I'll start investigating adding the applicable primops to ghc on all supported platforms. Most of the widely used ones have direct instruction support, but some may have to call out to the c fma, eg unregisterized builds and perhaps x86_32 unless I'm mistaken on the latter. On Monday, May 4, 2015, Merijn Verstraaten wrote: > I would suggest adding the relevant high-precision versions as direct > functions on Float/Double and then add the "better" versions as part of Num > as was suggested. 
Anyone who *needs* the precision can then get it by using > the functions directly and forcing a specific type (since I don't think > polymorphic code and this sort of precision demands fit well together). > This way it's *possible* to write code with the required precision for > Float/Double and anyone using Num gets an optional precision boost. > > Cheers, > Merijn > > > On 4 May 2015, at 12:00, Yitzchak Gale > > wrote: > > > > Levent Erkok wrote: > >> ...I think this proposal needs to be shelved for the time being. > > > > Nevertheless, I vote for doing it now. > > > > A better, more featureful, and more principled approach to > > FP is definitely needed. It would be great if we could tackle > > that and finally solve it - and I think we can. But that's a > > huge issue which has been discussed extensively in the > > past, and orthogonal to Levant's proposal. > > > > In the meantime, adding new functions that provide access > > to more FP functionality without adding any significant > > new weirdness are welcome, and will naturally flow into > > whatever future solution to the broader FP issue we > > implement. > > > > It makes little difference whether or not we provide a bad > > but working default implementation; my vote is to > > provide it. It will prevent breakage in case someone > > happens to have implemented a manual RealFloat instance > > out there somewhere, and it won't affect the standard > > instances because we'll provide implementations for > > those. Obviously a clear explanatory Haddock comment > > would be required. Even better, trigger a warning if an > > instance does not provide an explicit implementation, but > > I'm not sure if that's possible. I'm still in favor of doing > > Levant's proposal now even if the consensus is to omit > > the default. > > > > I vote for the usual practice of a human-readable > > name, but don't let bikeshedding hold this back. 
> > > > Thanks, > > Yitz > > _______________________________________________ > > Libraries mailing list > > Libraries at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rwbarton at gmail.com Mon May 4 14:19:30 2015 From: rwbarton at gmail.com (Reid Barton) Date: Mon, 4 May 2015 10:19:30 -0400 Subject: Proposal: Make Semigroup as a superclass of Monoid In-Reply-To: <1427631633145-5767835.post@n5.nabble.com> References: <1427631633145-5767835.post@n5.nabble.com> Message-ID: On Sun, Mar 29, 2015 at 8:20 AM, Jeremy wrote: > The proposal to make Semigroup a superclass of Monoid was discussed a while > ago [1], and the conclusion was to "put this off until the dust has settled > from the AMP and FT changes". > > Now that 7.10 is out, I would like to re-propose. The proposed plan is > similar to AMP, but less invasive, as (in my subjective experience) > user-defined Monoids are much less common than user-defined Monads. > > 1. GHC 7.12 will include Semigroup and NonEmpty in base. All Monoid > instances, and anything else which forms a Semigroup, will have a Semigroup > instance. GHC will issue a warning when it encounters an instance of Monoid > which is not an instance of Semigroup. > Strongly opposed to adding a NonEmpty type to base. It's a step in the wrong direction: the problem it clumsily tries to address is solved much better by refinement types ? la LiquidHaskell, which handles this and other whole classes of problems at once. Now, we don't have LiquidHaskell in GHC yet; but let's not settle for adding a NonEmpty type that we know is an inferior approach to base now, when it will likely be very hard to remove it in the future. I know there are some who use NonEmpty types currently, but I think their needs are just as well (if not better) met by putting the type in a small package outside of base with few dependencies. > > 2. 
GHC >7.12 will define Monoid as a subclass of Semigroup. > While it frustrates me to repeatedly see so much time spent by both GHC developers and Haskell library and application programmers on changes like this with fairly small upside, I don't have any fundamental objection to ending up in a state with Semigroup as a superclass of Monoid. Regards, Reid Barton -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekmett at gmail.com Mon May 4 14:20:46 2015 From: ekmett at gmail.com (Edward Kmett) Date: Mon, 4 May 2015 10:20:46 -0400 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: <1430727263.1930.9.camel@joachim-breitner.de> References: <1430727263.1930.9.camel@joachim-breitner.de> Message-ID: Quite a bit actually. Consider something like: http://hackage.haskell.org/package/ad-4.2.1.1/docs/src/Numeric-AD-Rank1-Newton.html#gradientDescent The step function in there could be trivially adapted to using fused multiplyAdd and precision would just improve. If such a member _were_ in Num, I'd use it in a heartbeat there. If it were in an extra class? I'd have to make a second copy of the function to even try to see the precision win. Most of my numeric code is generic in some fashion, working over vector spaces or simpler number types just as easily. As this proposal has been withdrawn, the point is more or less moot for now. -Edward On Mon, May 4, 2015 at 4:14 AM, Joachim Breitner wrote: > Hi, > > > Am Sonntag, den 03.05.2015, 14:11 -0700 schrieb Levent Erkok: > > > Based on this analysis, I'm withdrawing the original proposal. I think > > fma and other floating-point arithmetic operations are very important > > to support properly, but it should not be done by tacking them on to > > Num or RealFloat; but rather in a new class that also considers > > rounding-mode properly. > > > does it really have to be a class? How much genuinely polymorphic code > is there out there that yet requires this precise handling of precision? 
> > Have you considered adding it as monomorphic functions fmaDouble, > fmaFloat etc. on hackage, using FFI? Then those who need these functions > can start to use them. > > Furthermore you can start getting the necessary primops supported in > GHC, and have your library transparently use them when available. > > And only then, when we have the implementation in place and actual > users, we can evaluate whether we need an abstract class for this. > > > Greetings, > Joachim > > > -- > Joachim ?nomeata? Breitner > mail at joachim-breitner.de ? http://www.joachim-breitner.de/ > Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F > Debian Developer: nomeata at debian.org > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rwbarton at gmail.com Mon May 4 15:04:29 2015 From: rwbarton at gmail.com (Reid Barton) Date: Mon, 4 May 2015 11:04:29 -0400 Subject: Ord for partially ordered sets In-Reply-To: References: Message-ID: On Fri, Apr 24, 2015 at 10:23 AM, Ivan Lazar Miljenovic < ivan.miljenovic at gmail.com> wrote: > On 25 April 2015 at 00:01, Henning Thielemann > wrote: > > > > On Fri, 24 Apr 2015, Ivan Lazar Miljenovic wrote: > > > >> Specifically, I have a pull request for fgl [1] to add Ord instances for > >> the graph types (based upon the Ord instances for Data.Map and > Data.IntMap, > >> which I believe are themselves partially ordered), and I'm torn as to > the > >> soundness of adding these instances. > > > > > > In an application we needed to do some combinatorics of graphs and thus > > needed Set Graph. > > > > Nonetheless, I think that graph0 < graph1 should be a type error. We can > > still have a set of Graphs using a newtype. 
> > This could work; the possible problem would be one of efficiency: if > it's done directly on the graph datatypes they can use the underlying > (ordered) data structure; going purely by the Graph API, there are no > guarantees of ordering and thus it would be needed to call sort, even > though in practice it's redundant. > Not to endorse any particular decision on the topic of the thread, but I'd point out that there is no problem here from a technical point of view. The Graph API can export a function `compareGraphs :: Graph -> Graph -> Ordering` which uses the underlying representation, but not provide an Ord instance which uses it. Then a user of the library can use `compareGraphs` in an Ord instance for a newtype wrapper, or as an argument to functions like `sortBy`. Regards, Reid Barton -------------- next part -------------- An HTML attachment was scrubbed... URL: From gale at sefer.org Mon May 4 16:11:29 2015 From: gale at sefer.org (Yitzchak Gale) Date: Mon, 4 May 2015 19:11:29 +0300 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: <1430727263.1930.9.camel@joachim-breitner.de> Message-ID: Levent Erkok wrote: >> ...I think this proposal needs to be shelved for the time being. I wrote: > Nevertheless, I vote for doing it now. Edward Kmett wrote: > As this proposal has been withdrawn, the point is more or less > moot for now. OK, let me make myself more clear. I hereby propose the exact same proposal that Levent originally proposed in this thread and then withdrew, with the caveat that the scope of the proposal is explicitly orthogonal to any large scale change to the way we do floating point. Discussion period: 2 weeks, minus time spent so far in this thread since Levent's original proposal.
Thanks, Yitz From david.feuer at gmail.com Mon May 4 16:58:46 2015 From: david.feuer at gmail.com (David Feuer) Date: Mon, 4 May 2015 12:58:46 -0400 Subject: Proposal: Make Semigroup as a superclass of Monoid In-Reply-To: References: <1427631633145-5767835.post@n5.nabble.com> Message-ID: Wouldn't your concerns about NonEmpty be addressed by keeping its type abstract? Then something like Liquid Haskell could be used to define it better. On May 4, 2015 10:19 AM, "Reid Barton" wrote: > On Sun, Mar 29, 2015 at 8:20 AM, Jeremy wrote: > >> The proposal to make Semigroup a superclass of Monoid was discussed a >> while >> ago [1], and the conclusion was to "put this off until the dust has >> settled >> from the AMP and FT changes". >> >> Now that 7.10 is out, I would like to re-propose. The proposed plan is >> similar to AMP, but less invasive, as (in my subjective experience) >> user-defined Monoids are much less common than user-defined Monads. >> >> 1. GHC 7.12 will include Semigroup and NonEmpty in base. All Monoid >> instances, and anything else which forms a Semigroup, will have a >> Semigroup >> instance. GHC will issue a warning when it encounters an instance of >> Monoid >> which is not an instance of Semigroup. >> > > Strongly opposed to adding a NonEmpty type to base. It's a step in the > wrong direction: > the problem it clumsily tries to address is solved much better by > refinement types ? la > LiquidHaskell, which handles this and other whole classes of problems at > once. > > Now, we don't have LiquidHaskell in GHC yet; but let's not settle for > adding a NonEmpty > type that we know is an inferior approach to base now, when it will likely > be very hard > to remove it in the future. > > I know there are some who use NonEmpty types currently, but I think their > needs are > just as well (if not better) met by putting the type in a small package > outside of base > with few dependencies. > > >> >> 2. GHC >7.12 will define Monoid as a subclass of Semigroup. 
>> > > While it frustrates me to repeatedly see so much time spent by both GHC > developers > and Haskell library and application programmers on changes like this with > fairly small > upside, I don't have any fundamental objection to ending up in a state > with Semigroup > as a superclass of Monoid. > > Regards, > Reid Barton > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From erkokl at gmail.com Mon May 4 17:36:42 2015 From: erkokl at gmail.com (Levent Erkok) Date: Mon, 4 May 2015 10:36:42 -0700 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: <1430727263.1930.9.camel@joachim-breitner.de> Message-ID: Yitz: Thanks for taking over. I do agree that "fma" can just be added to the Num class with all the ramifications and treated as an "optimization." But that's a different proposal than what I had in mind, so I'm perfectly happy you pursuing this version. Just one comment: The name "FMA" is quite overloaded, and perhaps it should be reserved to the true IEEE754 version. I think someone suggested 'mulAccum' as an alternative, which does make sense if one thinks about the dot-product operation. Please be absolutely clear in the documentation that this is not the IEEE754-fma; but rather a fused-multiply-add operation that is used for the Num class, following some idealized notion of numbers. In particular, the compiler should be free to substitute "a*b+c" with "mulAccum a b c". The latter (i.e., the IEEE754 variant) should be addressed in a different proposal that I intend to work on separately. -Levent. On Mon, May 4, 2015 at 9:11 AM, Yitzchak Gale wrote: > Levent Erkok wrote: > >> ...I think this proposal needs to be shelved for the time being. > > I wrote: > > Nevertheless, I vote for doing it now. 
> > Edward Kmett wrote: > > As this proposal has been withdrawn, the point is more or less > > moot for now. > > OK, let me make myself more clear. > > I hereby propose the exact same proposal that Levent originally > proposed in this thread and then withdrew, with the caveat that > the scope of the proposal is explicitly orthogonal to any large > scale change to the way we do floating point. > > Discussion period: 2 weeks, minus time spent so far in this > thread since Levent's original proposal. > > Thanks, > Yitz > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yom at artyom.me Mon May 4 17:40:27 2015 From: yom at artyom.me (Artyom) Date: Mon, 04 May 2015 20:40:27 +0300 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: <1430727263.1930.9.camel@joachim-breitner.de> Message-ID: <5547AF0B.2040201@artyom.me> On 05/04/2015 08:36 PM, Levent Erkok wrote: > In particular, the compiler should be free to substitute "a*b+c" with > "mulAccum a b c". But isn't it unacceptable in some cases? For instance, in this case (taken from Wikipedia): > If x^2 - y^2 is evaluated as ((x*x) - y*y) using fused > multiply-add, then the result may be negative even when x = y due > to the first multiplication discarding low significance bits. This > could then lead to an error if, for instance, the square root of the > result is then evaluated. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From erkokl at gmail.com Mon May 4 17:48:21 2015 From: erkokl at gmail.com (Levent Erkok) Date: Mon, 4 May 2015 10:48:21 -0700 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: <1430727263.1930.9.camel@joachim-breitner.de> References: <1430727263.1930.9.camel@joachim-breitner.de> Message-ID: Joachim: I do think that a class is needed. The IEEE754 is actually quite agnostic about floating-point types. What IEEE754 says about floats are the sizes of the exponent and the mantissa; let's call them E and M for short. Then, one can define a floating-point type for each combination of E and M, both of which are at least 2. The resulting type will fit into E+M+1 bits. We have: - "Float" is E=8, M=23. (And thus fits into a 32 bit machine word with the sign bit.) - "Double" is E=11, M=52. (And thus fits into a 64 bit machine word with the sign bit.) (In fact IEEE754 defines single/double precision to have at least those E/M values, but allows for larger. But let's ignore that for a moment.) You can see that the next thing in line is going to be something that fits into 128 bits, also known as quad-precision. (Where E=15, M=112, plus 1 for the sign-bit.) If we get type-literals into Haskell proper, then these types can all be nicely represented as "FP e m" for numbers e, m >= 2. It just happens that Float/Double are what most hardware implementations support "naturally," but all IEEE-754 operations are defined for all precisions, and I think it would make sense to capture this nicely in Haskell, much like we have Int8, Int16, Int32 etc, and have them instances of this new class. So, I'm quite against creating "fmaFloat"/"fmaDouble" etc.; but rather collect all these in a true IEEE754 arithmetic class. Float and Double will be the two instances for today, but one can easily see the extension to other variants in the future. (C already supports long-double to an extent, so that's missing in Haskell; as one sticking point.) 
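(With today's type-level naturals, the "FP e m" idea above can at least be sketched; this is purely illustrative: the payload is a raw bit pattern standing in for a real encoding, not a working float type.)

```haskell
{-# LANGUAGE DataKinds, KindSignatures, ScopedTypeVariables #-}

import Data.Proxy (Proxy (..))
import GHC.TypeLits (KnownNat, Nat, natVal)

-- A float type indexed by exponent and mantissa widths.
newtype FP (e :: Nat) (m :: Nat) = FP Integer

type Float'  = FP 8 23    -- single precision
type Double' = FP 11 52   -- double precision
type Quad    = FP 15 112  -- quad precision

-- Total storage width: exponent bits + mantissa bits + 1 sign bit.
width :: forall e m. (KnownNat e, KnownNat m) => FP e m -> Integer
width _ = natVal (Proxy :: Proxy e) + natVal (Proxy :: Proxy m) + 1
```

Here width on Float', Double', and Quad works out to 32, 64, and 128 bits respectively, and an IEEE754 class could then be given one instance per precision.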
This class should also address rounding-modes, as almost all float-operations only make sense in the context of a rounding mode. The design space there is also large, but that's a different discussion. -Levent. On Mon, May 4, 2015 at 1:14 AM, Joachim Breitner wrote: > Hi, > > > Am Sonntag, den 03.05.2015, 14:11 -0700 schrieb Levent Erkok: > > > Based on this analysis, I'm withdrawing the original proposal. I think > > fma and other floating-point arithmetic operations are very important > > to support properly, but it should not be done by tacking them on to > > Num or RealFloat; but rather in a new class that also considers > > rounding-mode properly. > > > does it really have to be a class? How much genuinely polymorphic code > is there out there that yet requires this precise handling of precision? > > Have you considered adding it as monomorphic functions fmaDouble, > fmaFloat etc. on hackage, using FFI? Then those who need these functions > can start to use them. > > Furthermore you can start getting the necessary primops supported in > GHC, and have your library transparently use them when available. > > And only then, when we have the implementation in place and actual > users, we can evaluate whether we need an abstract class for this. > > > Greetings, > Joachim > > > -- > Joachim ?nomeata? Breitner > mail at joachim-breitner.de ? http://www.joachim-breitner.de/ > Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F > Debian Developer: nomeata at debian.org > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From erkokl at gmail.com Mon May 4 17:49:52 2015 From: erkokl at gmail.com (Levent Erkok) Date: Mon, 4 May 2015 10:49:52 -0700 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: <5547AF0B.2040201@artyom.me> References: <1430727263.1930.9.camel@joachim-breitner.de> <5547AF0B.2040201@artyom.me> Message-ID: Artyom: That's precisely the point. The true IEEE754 variants where precision does matter should be part of a different class. What Edward and Yitz want is an "optimized" multiply-add where the semantics is the same but one that goes faster. On Mon, May 4, 2015 at 10:40 AM, Artyom wrote: > On 05/04/2015 08:36 PM, Levent Erkok wrote: > > In particular, the compiler should be free to substitute "a*b+c" with > "mulAccum a b c". > > But isn't it unacceptable in some cases? For instance, in this case (taken > from Wikipedia): > > If x^2 - y^2 is evaluated as ((x*x) - y*y) using fused > multiply-add, then the result may be negative even when x = y due to > the first multiplication discarding low significance bits. This could then > lead to an error if, for instance, the square root of the result is then > evaluated. > > > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yom at artyom.me Mon May 4 17:58:21 2015 From: yom at artyom.me (Artyom) Date: Mon, 04 May 2015 20:58:21 +0300 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: <1430727263.1930.9.camel@joachim-breitner.de> <5547AF0B.2040201@artyom.me> Message-ID: <5547B33D.4000205@artyom.me> On 05/04/2015 08:49 PM, Levent Erkok wrote: > Artyom: That's precisely the point. The true IEEE754 variants where > precision does matter should be part of a different class.
What Edward > and Yitz want is an "optimized" multiply-add where the semantics is > the same but one that goes faster. No, it looks to me that Edward wants to have a more precise operation in Num: > I'd have to make a second copy of the function to even try to see the > precision win. Unless I'm wrong, you can't have the following things simultaneously: 1. the compiler is free to substitute /a+b*c/ with /mulAdd a b c/ 2. /mulAdd a b c/ is implemented as /fma/ for Doubles (and is more precise) 3. Num operations for Double (addition and multiplication) always conform to IEEE754 > The true IEEE754 variants where precision does matter should be part > of a different class. So, does it mean that you're fine with not having point #3 because people who need it would be able to use a separate class for IEEE754 floats? -------------- next part -------------- An HTML attachment was scrubbed... URL: From erkokl at gmail.com Mon May 4 18:22:53 2015 From: erkokl at gmail.com (Levent Erkok) Date: Mon, 4 May 2015 11:22:53 -0700 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: <5547B33D.4000205@artyom.me> References: <1430727263.1930.9.camel@joachim-breitner.de> <5547AF0B.2040201@artyom.me> <5547B33D.4000205@artyom.me> Message-ID: I think `mulAdd a b c` should be implemented as `a*b+c` even for Double/Float. It should only be an "optmization" (as in modular arithmetic), not a semantic changing operation. Thus justifying the optimization. "fma" should be the "more-precise" version available for Float/Double. I don't think it makes sense to have "fma" for other types. That's why I'm advocating "mulAdd" to be part of "Num" for optimization purposes; and "fma" reserved for true IEEE754 types and semantics. I understand that Edward doesn't like this as this requires a different class; but really, that's the price to pay if we claim Haskell has proper support for IEEE754 semantics. (Which I think it should.) The operation is just different. 
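(The distinction can be made concrete. A sketch: mulAdd below is a free function standing in for the hypothetical optimization-only Num method, and the binding assumes the platform libm provides C99 fma.)

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

-- Optimization-only version: semantically always a*b + c, so a
-- compiler may freely rewrite a*b + c into mulAdd a b c.
mulAdd :: Num a => a -> a -> a -> a
mulAdd a b c = a * b + c

-- IEEE754 version: a single rounding of the exact a*b + c.
foreign import ccall unsafe "math.h fma"
  c_fma :: Double -> Double -> Double -> Double

-- The two genuinely differ: x*x - x*x is exactly 0 as Doubles, while
-- fma keeps the full-width product, exposing the rounding error of x*x.
residual :: Double -> Double
residual x = c_fma x x (negate (x * x))
```

For x = 1 + 3*2^^(-28) the exact square lands just above a rounding boundary, so residual x should be a small negative number, which is the sign surprise in the x^2 - y^2 example quoted from Wikipedia.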
It also should account for the rounding-modes properly. I think we can pull this off just fine; and Haskell can really lead the pack here. The situation with floats is even worse in other languages. This is our chance to make a proper implementation, and we have the right tools to do so. -Levent. On Mon, May 4, 2015 at 10:58 AM, Artyom wrote: > On 05/04/2015 08:49 PM, Levent Erkok wrote: > > Artyom: That's precisely the point. The true IEEE754 variants where > precision does matter should be part of a different class. What Edward and > Yitz want is an "optimized" multiply-add where the semantics is the same > but one that goes faster. > > No, it looks to me that Edward wants to have a more precise operation in > Num: > > I'd have to make a second copy of the function to even try to see the > precision win. > > Unless I'm wrong, you can't have the following things simultaneously: > > 1. the compiler is free to substitute *a+b*c* with *mulAdd a b c* > 2. *mulAdd a b c* is implemented as *fma* for Doubles (and is more > precise) > 3. Num operations for Double (addition and multiplication) always > conform to IEEE754 > > The true IEEE754 variants where precision does matter should be part of > a different class. > > So, does it mean that you're fine with not having point #3 because people > who need it would be able to use a separate class for IEEE754 floats? > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rwbarton at gmail.com Mon May 4 18:37:10 2015 From: rwbarton at gmail.com (Reid Barton) Date: Mon, 4 May 2015 14:37:10 -0400 Subject: Proposal: Make Semigroup as a superclass of Monoid In-Reply-To: References: <1427631633145-5767835.post@n5.nabble.com> Message-ID: On Mon, May 4, 2015 at 12:58 PM, David Feuer wrote: > Wouldn't your concerns about NonEmpty be addressed by keeping its type > abstract? Then something like Liquid Haskell could be used to define it > better. 
> There are (at least) two possible designs for a "non-empty list type". 1. A refinement type (as in LiquidHaskell) of [t] whose values include only the non-empty lists. Call it NonEmptyLiquid t. You can pass a NonEmptyLiquid t to a function that expects a [t], and you can pass a [t] to a function that expects a NonEmptyLiquid [t] if the compiler can prove that your [t] is nonempty. If it can't then you can add a runtime test for emptiness and in the non-empty case, the compiler will know the list is non-empty. 2. A new type NonEmptySolid t that is totally unrelated to [t] like you can define in Haskell today. The advantage is that NonEmptySolid is a full-fledged type constructor that can have instances, be passed to other type constructors and so on. The disadvantage is that you need to explicitly convert in your program (and possibly do a runtime conversion also) in either direction between [t] and NonEmptySolid t. I think most people who want a "non-empty list type" want 1, not 2. Option 2 is bad for API design because it forces the users of your library to care exactly as much as you do about non-emptiness. If you imagine adding other sorts of lists like infinite lists, fixed-length or bounded-length lists, lists of even length etc. then it quickly becomes clear that having such an array of incompatible list types is not the way to go. We just want lists to be lists, but we also want the compiler to make sure we don't try to take the head of an empty list. Those who use a NonEmpty type prefer option 2 over option 0 "type NonEmptyGas t = [t] -- and just hope"; but that doesn't mean they prefer option 2 over option 1. Those who really want option 2 can also define it as a newtype wrapper on option 1, as you noted. So, to answer your question, no, it wouldn't really make a difference if the NonEmpty type was abstract. That would just smooth the transition to a design that I think people don't really want. 
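For concreteness, option 2 is definable in a few lines of Haskell today — this is essentially the shape of `Data.List.NonEmpty` from the `semigroups` package (helper names here are illustrative):

```haskell
-- A list with no empty constructor.
data NonEmpty a = a :| [a]
  deriving Show

-- head is total here: there is no empty case to rule out.
neHead :: NonEmpty a -> a
neHead (x :| _) = x

toList :: NonEmpty a -> [a]
toList (x :| xs) = x : xs

-- The explicit (and possibly failing) runtime conversion from [t]
-- that option 2 forces on its users.
fromList :: [a] -> Maybe (NonEmpty a)
fromList []       = Nothing
fromList (x : xs) = Just (x :| xs)

main :: IO ()
main = do
  print (neHead (1 :| [2, 3 :: Int]))          -- 1
  print (fmap toList (fromList [4, 5 :: Int])) -- Just [4,5]
  print (fmap toList (fromList ([] :: [Int]))) -- Nothing
```

The `fromList`/`toList` round trips are exactly the conversion overhead the message above is pointing at.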
Finally, let me reiterate that there seem to be no advantages to moving a NonEmpty type into base rather than into its own small package. We don't need base to swallow up every small popular package. Regards, Reid Barton -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekmett at gmail.com Mon May 4 20:17:43 2015 From: ekmett at gmail.com (Edward Kmett) Date: Mon, 4 May 2015 16:17:43 -0400 Subject: Proposal: Make Semigroup as a superclass of Monoid In-Reply-To: References: <1427631633145-5767835.post@n5.nabble.com> Message-ID: The issue with a LiquidHaskell solution in this space, aside from the fact that it is an experiment that isn't part of the compiler or the language that we have, and uses assumptions about the way numbers work that don't hold for the ones we have, is that it is terribly invasive. In order to use it everything that ever wants to work with your shiny new non-empty list type needs to be written in LiquidHaskell to prove the non-empty invariant still holds. Refinement types are notoriously hard to use. On the other-hand a type like NonEmpty satisfies that invariant trivially: There is no empty constructor. The price of this is that it is a different data type, with different operations. I love LiquidHaskell, but I'd be very hesitant to do anything or rather to not do anything predicated on its existence. -Edward On Mon, May 4, 2015 at 2:37 PM, Reid Barton wrote: > On Mon, May 4, 2015 at 12:58 PM, David Feuer > wrote: > >> Wouldn't your concerns about NonEmpty be addressed by keeping its type >> abstract? Then something like Liquid Haskell could be used to define it >> better. >> > There are (at least) two possible designs for a "non-empty list type". > > 1. A refinement type (as in LiquidHaskell) of [t] whose values include > only the non-empty lists. Call it NonEmptyLiquid t. 
You can pass a > NonEmptyLiquid t to a function that expects a [t], and you can pass a [t] > to a function that expects a NonEmptyLiquid [t] if the compiler can prove > that your [t] is nonempty. If it can't then you can add a runtime test for > emptiness and in the non-empty case, the compiler will know the list is > non-empty. > > 2. A new type NonEmptySolid t that is totally unrelated to [t] like you > can define in Haskell today. The advantage is that NonEmptySolid is a > full-fledged type constructor that can have instances, be passed to other > type constructors and so on. The disadvantage is that you need to > explicitly convert in your program (and possibly do a runtime conversion > also) in either direction between [t] and NonEmptySolid t. > > I think most people who want a "non-empty list type" want 1, not 2. Option > 2 is bad for API design because it forces the users of your library to care > exactly as much as you do about non-emptiness. If you imagine adding other > sorts of lists like infinite lists, fixed-length or bounded-length lists, > lists of even length etc. then it quickly becomes clear that having such an > array of incompatible list types is not the way to go. We just want lists > to be lists, but we also want the compiler to make sure we don't try to > take the head of an empty list. > > Those who use a NonEmpty type prefer option 2 over option 0 "type > NonEmptyGas t = [t] -- and just hope"; but that doesn't mean they prefer > option 2 over option 1. Those who really want option 2 can also define it > as a newtype wrapper on option 1, as you noted. > > So, to answer your question, no, it wouldn't really make a difference if > the NonEmpty type was abstract. That would just smooth the transition to a > design that I think people don't really want. > > Finally, let me reiterate that there seem to be no advantages to moving a > NonEmpty type into base rather than into its own small package. 
We don't > need base to swallow up every small popular package. > > Regards, > Reid Barton > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcin.jan.mrotek at gmail.com Mon May 4 20:23:22 2015 From: marcin.jan.mrotek at gmail.com (Marcin Mrotek) Date: Mon, 4 May 2015 22:23:22 +0200 Subject: Proposal: Make Semigroup as a superclass of Monoid In-Reply-To: References: <1427631633145-5767835.post@n5.nabble.com> Message-ID: >aside from the fact that it is an experiment that isn't part of the compiler or the language that we have, and uses assumptions about the way numbers work that don't hold for the ones we have Just to throw in my two cents: LiquidHaskell seems to like something that could be eventually wired into GHC with compiler plugins. Best regards, Marcin Mrotek From ezyang at mit.edu Mon May 4 20:38:08 2015 From: ezyang at mit.edu (Edward Z. Yang) Date: Mon, 04 May 2015 13:38:08 -0700 Subject: Template Haskell changes to names and package keys In-Reply-To: <1430681340-sup-8728@sabre> References: <1430500118-sup-420@sabre> <20150502093814.GA1771@24f89f8c-e6a1-4e75-85ee-bb8a3743bb9f> <1430681340-sup-8728@sabre> Message-ID: <1430771744-sup-3500@sabre> Excerpts from Edward Z. Yang's message of 2015-05-03 13:22:00 -0700: > https://github.com/suhailshergill/liboleg/blob/57673d01c66ab9f284579a40aed059ed4617ce6c/Data/Symbolic/TypedCodeAux.hs > https://github.com/bitemyapp/hackage-packages/blob/fd9649f426254c0581bd976789a1c384eda0e3c9/liboleg-2010.1.10.0/Data/Symbolic/TypedCodeAux.hs > So, this piece of code is something that really ought to be in the > standard library, if it isn't already. Basically, the problem is > that when you quote an identifier, e.g. 
[| foo |], you get the > *un-renamed* syntax (a variable foo) rather than the renamed syntax > (somepkg:Data.Foo.foo). This is very useful, too useful to be in > another library. I have to backtrack on this statement; I misinterpreted what this code is doing; the problem here is we need a Lift instance for the Exp data type. We should just add this instance (or use the Data generated one). Edward From lemming at henning-thielemann.de Mon May 4 20:50:05 2015 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Mon, 4 May 2015 22:50:05 +0200 (CEST) Subject: Proposal: Make Semigroup as a superclass of Monoid In-Reply-To: References: <1427631633145-5767835.post@n5.nabble.com> Message-ID: On Mon, 4 May 2015, Edward Kmett wrote: > The issue with a LiquidHaskell solution in this space, aside from the fact that it is an experiment that isn't > part of the compiler or the language that we have, and uses assumptions about the way numbers work that don't > hold for the ones we have, is that it is terribly invasive. > In order to use it everything that ever wants to work with your shiny new non-empty list type needs to be > written in LiquidHaskell to prove the non-empty invariant still holds. Refinement types are notoriously hard > to use. On the other-hand a type like NonEmpty satisfies that invariant trivially: There is no empty > constructor. > > The price of this is that it is a different data type, with different operations. In my experience with my non-empty package I found that I can lift that restriction by using type classes like Functor, Traversable etc. > I love LiquidHaskell, but I'd be very hesitant to do anything or rather > to not do anything predicated on its existence. Although LiquidHaskell is cool, I also prefer to solve simple problems with simple tools. Non-empty lists do not need a sophisticated prover framework, it can be solved with simple Haskell 98. From ezyang at mit.edu Mon May 4 20:58:52 2015 From: ezyang at mit.edu (Edward Z. 
Yang) Date: Mon, 04 May 2015 13:58:52 -0700 Subject: Template Haskell changes to names and package keys In-Reply-To: <5DCCF1B4-D0C6-4041-86D0-DD898D7EA0FD@cis.upenn.edu> References: <1430500118-sup-420@sabre> <5DCCF1B4-D0C6-4041-86D0-DD898D7EA0FD@cis.upenn.edu> Message-ID: <1430772891-sup-1980@sabre> Hello Richard, Thanks for the comments. I've created a wiki page here: https://ghc.haskell.org/trac/ghc/wiki/TemplateHaskell/PackageKeyChanges > packageVersionString :: Package -> String -- maybe use a newtype here? or the proper Cabal datatype? > packageName :: Package -> String -- use the old PkgName here? or is that confusing? > > packageDependencies :: Package -> Q [Package] > -- could be very useful in printing debugging information. Then, if you're writing a library that > -- is sensitive to dependency versions, you can print this info out when you panic OK, we can plan on adding this information. > I'm -1 on #3 if it will break code. You can already get current > package information through the `qLocation` function. This is true. However, qLocation returns a String. Should we change it to return a PkgKey? (Could break users of the information.) I don't know if the name and version should get newtyped. If we do newtype them, should qLocation also be updated accordingly? Thanks, Edward From carter.schonwald at gmail.com Tue May 5 02:54:03 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 4 May 2015 22:54:03 -0400 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: <1430727263.1930.9.camel@joachim-breitner.de> <5547AF0B.2040201@artyom.me> <5547B33D.4000205@artyom.me> Message-ID: pardon the wall of text everyone, but I really want some FMA tooling :) I am going to spend some time later this week and next adding FMA primops to GHC and playing around with different ways to add it to Num (which seems pretty straightforward, though I think we'd all agree it shouldn't be exported by Prelude).
And then depending on how Yitzchak's reproposal of that exactly goes (or some iteration thereof) we can get something useful/usable into 7.12 i have codes (ie *dotproducts*!!!!!) where a faster direct FMA for *exact numbers*, and a higher precision FMA for *approximate numbers *(*ie floating point*), and where I cant sanely use FMA if it lives anywhere but Num unless I rub typeable everywhere and do runtime type checks for applicable floating point types, which kinda destroys parametricity in engineering nice things. @levent: ghc doesn't do any optimization for floating point arithmetic (aside from 1-2 very simple things that are possibly questionable), and until ghc has support for precisely emulating high precision floating point computation in a portable way, probably wont have any interesting floating point computation. Mandating that fma a b c === a*b+c for inexact number datatypes doesn't quite make sense to me. Relatedly, its a GOOD thing ghc is conservative about optimizing floating point, because it makes doing correct stability analyses tractable! I look forward to the day that GHC gets a bit more sophisticated about optimizing floating point computation, but that day is still a ways off. relatedly: FMA for float and double are not generally going to be faster than the individual primitive operations, merely more accurate when used carefully. point being*, i'm +1 on adding some manner of FMA operations to Num* (only sane place to put it where i can actually use it for a general use library) and i dont really care if we name it fusedMultiplyAdd, multiplyAndAdd accursedFusionOfSemiRingOperations, or fma. i'd favor "fusedMultiplyAdd" if we want a descriptive name that will be familiar to experts yet easy to google for the curious.
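As a sketch, the shape under discussion — written as a standalone class here only because the real change would patch `Num` in base — together with the kind of polymorphic dot product that motivates putting it there (`FusedMultiplyAdd` and `dot` are illustrative names):

```haskell
-- Default makes adoption free; instances backed by a real fused
-- primitive (or a cheaper exact equivalent) would override it.
class Num a => FusedMultiplyAdd a where
  fusedMultiplyAdd :: a -> a -> a -> a
  fusedMultiplyAdd a b c = a * b + c

instance FusedMultiplyAdd Integer
instance FusedMultiplyAdd Double

-- Polymorphic code like this can only benefit if the method is
-- reachable from a Num-style constraint, without Typeable tricks.
dot :: FusedMultiplyAdd a => [a] -> [a] -> a
dot xs ys = foldr step 0 (zip xs ys)
  where step (x, y) acc = fusedMultiplyAdd x y acc

main :: IO ()
main = do
  print (dot [1, 2, 3] [4, 5, 6 :: Integer])  -- 32
  print (dot [1.5, 2.5] [2, 4 :: Double])     -- 13.0
```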
to repeat: i'm going to do some leg work so that the double and float prims are portably exposed by ghc-prims (i've spoken with several ghc devs about that, and they agree to its value, and thats a decision outside of scope of the libraries purview), and I do hope we can to a consensus about putting it in Num so that expert library authors can upgrade the guarantees that they can provide end users without imposing any breaking changes to end users. A number of folks have brought up "but Num is broken" as a counter argument to adding FMA support to Num. I emphatically agree num is borken :), BUT! I do also believe that fixing up Num prelude has the burden of providing a whole cloth design for an alternative design that we can get broad consensus/adoption with. That will happen by dint of actually experimentation and usage. Point being, adding FMA doesn't further entrench current Num any more than it already is, it just provides expert library authors with a transparent way of improving the experience of their users with a free upgrade in answer accuracy if used carefully. Additionally, when Num's "semiring ish equational laws" are framed with respect to approximate forwards/backwards stability, there is a perfectly reasonable law for FMA. I am happy to spend some time trying to write that up more precisely IFF that will tilt those in opposition to being in favor. I dont need FMA to be exposed by *prelude/base*, merely by *GHC.Num* as a method therein for Num. If that constitutes a different and *more palatable proposal* than what people have articulated so far (by discouraging casual use by dint of hiding) then I am happy to kick off a new thread with that concrete design choice. If theres a counter argument thats a bit more substantive than "Num is for exact arithmetic" or "Num is wrong" that will sway me to the other side, i'm all ears, but i'm skeptical of that. 
I emphatically support those who are displeased with Num to prototype some alternative designs in userland, I do think it'd be great to figure out a new Num prelude we can migrate Haskell / GHC to over the next 2-5 years, but again any such proposal really needs to be realized whole cloth before it makes its way to being a libraries list proposal. again, pardon the wall of text, i just really want to have nice things :) -Carter On Mon, May 4, 2015 at 2:22 PM, Levent Erkok wrote: > I think `mulAdd a b c` should be implemented as `a*b+c` even for > Double/Float. It should only be an "optmization" (as in modular > arithmetic), not a semantic changing operation. Thus justifying the > optimization. > > "fma" should be the "more-precise" version available for Float/Double. I > don't think it makes sense to have "fma" for other types. That's why I'm > advocating "mulAdd" to be part of "Num" for optimization purposes; and > "fma" reserved for true IEEE754 types and semantics. > > I understand that Edward doesn't like this as this requires a different > class; but really, that's the price to pay if we claim Haskell has proper > support for IEEE754 semantics. (Which I think it should.) The operation is > just different. It also should account for the rounding-modes properly. > > I think we can pull this off just fine; and Haskell can really lead the > pack here. The situation with floats is even worse in other languages. This > is our chance to make a proper implementation, and we have the right tools > to do so. > > -Levent. > > On Mon, May 4, 2015 at 10:58 AM, Artyom wrote: > >> On 05/04/2015 08:49 PM, Levent Erkok wrote: >> >> Artyom: That's precisely the point. The true IEEE754 variants where >> precision does matter should be part of a different class. What Edward and >> Yitz want is an "optimized" multiply-add where the semantics is the same >> but one that goes faster. 
>> >> No, it looks to me that Edward wants to have a more precise operation in >> Num: >> >> I'd have to make a second copy of the function to even try to see the >> precision win. >> >> Unless I'm wrong, you can't have the following things simultaneously: >> >> 1. the compiler is free to substitute *a+b*c* with *mulAdd a b c* >> 2. *mulAdd a b c* is implemented as *fma* for Doubles (and is more >> precise) >> 3. Num operations for Double (addition and multiplication) always >> conform to IEEE754 >> >> The true IEEE754 variants where precision does matter should be part of >> a different class. >> >> So, does it mean that you're fine with not having point #3 because people >> who need it would be able to use a separate class for IEEE754 floats? >> >> > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Tue May 5 03:36:55 2015 From: ezyang at mit.edu (Edward Z. Yang) Date: Mon, 04 May 2015 20:36:55 -0700 Subject: Proposal: liftData for Template Haskell In-Reply-To: <1429269330-sup-7487@sabre> References: <1429269330-sup-7487@sabre> Message-ID: <1430796774-sup-3968@sabre> Hello all, It looks like people are opposed to doing with the lift type-class. So here is a counterproposal: mark the Lift type class as overlappable, and define an instance: instance Data a => Lift a where ... This is fairly desirable, since GHC will sometimes generate a call to 'lift', in which case liftData can't be manually filled in. People can still define efficient versions of lift. Edward Excerpts from Edward Z. Yang's message of 2015-04-17 04:21:16 -0700: > I propose adding the following function to Language.Haskell.TH: > > -- | 'liftData' is a variant of 'lift' in the 'Lift' type class which > -- works for any type with a 'Data' instance. 
> liftData :: Data a => a -> Q Exp > liftData = dataToExpQ (const Nothing) > > I don't really know which submodule this should come from; > since it uses 'dataToExpQ', you might put it in Language.Haskell.TH.Quote > but arguably 'dataToExpQ' doesn't belong in this module either, > and it only lives there because it is a useful function for defining > quasiquoters and it was described in the quasiquoting paper. > > I might propose getting rid of the 'Lift' class entirely, but you > might prefer that class since it doesn't go through SYB (and have > the attendant slowdown). > > This mode of use of 'dataToExpQ' deserves more attention. > > Discussion period: 1 month > > Cheers, > Edward From erkokl at gmail.com Tue May 5 04:54:23 2015 From: erkokl at gmail.com (Levent Erkok) Date: Mon, 4 May 2015 21:54:23 -0700 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: <1430727263.1930.9.camel@joachim-breitner.de> <5547AF0B.2040201@artyom.me> <5547B33D.4000205@artyom.me> Message-ID: Carter: Wall of text is just fine! I'm personally happy to see the results of your experiment. In particular, the better "code-generation" facilities you add around floats/doubles that map to the underlying hardware's native instructions, the better. When we do have proper IEEE floats, we shall surely need all that functionality. While you're working on this, if you can also watch out for how rounding modes can be integrated into the operations, that would be useful as well. I can see at least two designs: * One where the rounding mode goes with the operation: `fpAdd RoundNearestTiesToEven 2.5 6.4`. This is the "cleanest" and the functional solution, but could get quite verbose; and might be costly if the implementation changes the rounding-mode at every issue. 
* The other is where the operations simply assume the RoundNearestTiesToEven, but we have lifted IO versions that can be modified with a "with" like construct: `withRoundingMode RoundTowardsPositive $ fpAddRM 2.5 6.4`. Note that `fpAddRM` (*not* `fpAdd` as before) will have to return some sort of a monadic value (probably in the IO monad) since it'll need to access the rounding mode currently active. Neither choice jumps out at me as the best one; and a hybrid might also be possible. I'd love to hear any insight you gain regarding rounding-modes during your experiment. -Levent. On Mon, May 4, 2015 at 7:54 PM, Carter Schonwald wrote: > pardon the wall of text everyone, but I really want some FMA tooling :) > > I am going to spend some time later this week and next adding FMA primops > to GHC and playing around with different ways to add it to Num (which seems > pretty straightforward, though I think we'd all agree it shouldn't be > exported by Prelude). And then depending on how Yitzchak's reproposal of > that exactly goes (or some iteration thereof) we can get something > useful/usable into 7.12 > > i have codes (ie *dotproducts*!!!!!) where a faster direct FMA for *exact > numbers*, and a higher precision FMA for *approximate numbers *(*ie > floating point*), and where I cant sanely use FMA if it lives anywhere > but Num unless I rub typeable everywhere and do runtime type checks for > applicable floating point types, which kinda destroys parametrically in > engineering nice things. > > @levent: ghc doesn't do any optimization for floating point arithmetic > (aside from 1-2 very simple things that are possibly questionable), and > until ghc has support for precisly emulating high precision floating point > computation in a portable way, probably wont have any interesting floating > point computation. Mandating that fma a b c === a*b+c for inexact number > datatypes doesn't quite make sense to me. 
Relatedly, its a GOOD thing ghc > is conservative about optimizing floating point, because it makes doing > correct stability analyses tractable! I look forward to the day that GHC > gets a bit more sophisticated about optimizing floating point computation, > but that day is still a ways off. > > relatedly: FMA for float and double are not generally going to be faster > than the individual primitive operations, merely more accurate when used > carefully. > > point being*, i'm +1 on adding some manner of FMA operations to Num* > (only sane place to put it where i can actually use it for a general use > library) and i dont really care if we name it fusedMultiplyAdd, > multiplyAndAdd accursedFusionOfSemiRingOperations, or fma. i'd favor > "fusedMultiplyAdd" if we want a descriptive name that will be familiar to > experts yet easy to google for the curious. > > to repeat: i'm going to do some leg work so that the double and float > prims are portably exposed by ghc-prims (i've spoken with several ghc devs > about that, and they agree to its value, and thats a decision outside of > scope of the libraries purview), and I do hope we can to a consensus about > putting it in Num so that expert library authors can upgrade the guarantees > that they can provide end users without imposing any breaking changes to > end users. > > A number of folks have brought up "but Num is broken" as a counter > argument to adding FMA support to Num. I emphatically agree num is borken > :), BUT! I do also believe that fixing up Num prelude has the burden of > providing a whole cloth design for an alternative design that we can get > broad consensus/adoption with. That will happen by dint of actually > experimentation and usage. > > Point being, adding FMA doesn't further entrench current Num any more than > it already is, it just provides expert library authors with a transparent > way of improving the experience of their users with a free upgrade in > answer accuracy if used carefully. 
Additionally, when Num's "semiring ish > equational laws" are framed with respect to approximate forwards/backwards > stability, there is a perfectly reasonable law for FMA. I am happy to spend > some time trying to write that up more precisely IFF that will tilt those > in opposition to being in favor. > > I dont need FMA to be exposed by *prelude/base*, merely by *GHC.Num* as a > method therein for Num. If that constitutes a different and *more > palatable proposal* than what people have articulated so far (by > discouraging casual use by dint of hiding) then I am happy to kick off a > new thread with that concrete design choice. > > If theres a counter argument thats a bit more substantive than "Num is for > exact arithmetic" or "Num is wrong" that will sway me to the other side, > i'm all ears, but i'm skeptical of that. > > I emphatically support those who are displeased with Num to prototype some > alternative designs in userland, I do think it'd be great to figure out a > new Num prelude we can migrate Haskell / GHC to over the next 2-5 years, > but again any such proposal really needs to be realized whole cloth before > it makes its way to being a libraries list proposal. > > > again, pardon the wall of text, i just really want to have nice things :) > -Carter > > > On Mon, May 4, 2015 at 2:22 PM, Levent Erkok wrote: > >> I think `mulAdd a b c` should be implemented as `a*b+c` even for >> Double/Float. It should only be an "optmization" (as in modular >> arithmetic), not a semantic changing operation. Thus justifying the >> optimization. >> >> "fma" should be the "more-precise" version available for Float/Double. I >> don't think it makes sense to have "fma" for other types. That's why I'm >> advocating "mulAdd" to be part of "Num" for optimization purposes; and >> "fma" reserved for true IEEE754 types and semantics. 
>> >> I understand that Edward doesn't like this as this requires a different >> class; but really, that's the price to pay if we claim Haskell has proper >> support for IEEE754 semantics. (Which I think it should.) The operation is >> just different. It also should account for the rounding-modes properly. >> >> I think we can pull this off just fine; and Haskell can really lead the >> pack here. The situation with floats is even worse in other languages. This >> is our chance to make a proper implementation, and we have the right tools >> to do so. >> >> -Levent. >> >> On Mon, May 4, 2015 at 10:58 AM, Artyom wrote: >> >>> On 05/04/2015 08:49 PM, Levent Erkok wrote: >>> >>> Artyom: That's precisely the point. The true IEEE754 variants where >>> precision does matter should be part of a different class. What Edward and >>> Yitz want is an "optimized" multiply-add where the semantics is the same >>> but one that goes faster. >>> >>> No, it looks to me that Edward wants to have a more precise operation in >>> Num: >>> >>> I'd have to make a second copy of the function to even try to see the >>> precision win. >>> >>> Unless I'm wrong, you can't have the following things simultaneously: >>> >>> 1. the compiler is free to substitute *a+b*c* with *mulAdd a b c* >>> 2. *mulAdd a b c* is implemented as *fma* for Doubles (and is more >>> precise) >>> 3. Num operations for Double (addition and multiplication) always >>> conform to IEEE754 >>> >>> The true IEEE754 variants where precision does matter should be part >>> of a different class. >>> >>> So, does it mean that you're fine with not having point #3 because >>> people who need it would be able to use a separate class for IEEE754 floats? >>> >>> >> >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From simonpj at microsoft.com Tue May 5 07:58:08 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 5 May 2015 07:58:08 +0000 Subject: Template Haskell changes to names and package keys In-Reply-To: <1430500118-sup-420@sabre> References: <1430500118-sup-420@sabre> Message-ID: <720d55eae7294950945c442b3c6b70bd@DB4PR30MB030.064d.mgd.msft.net> Why do we need two types: PkgKey and Package? Can't we make them the same? It's good that Package be abstract (as proposed below). Then we are free to add more metadata to it I'd urge that Module too should be abstract, for the same reason Simon | -----Original Message----- | From: Libraries [mailto:libraries-bounces at haskell.org] On Behalf Of | Edward Z. Yang | Sent: 01 May 2015 18:09 | To: libraries | Subject: Template Haskell changes to names and package keys | | In GHC 7.10, we changed the internal representation of names to be | based on package keys (base_XXXXXX) rather than package IDs (base- | 4.7.0.1), however, we forgot to update the Template Haskell API to | track these changes. This lead to some bugs in TH code which was | synthesizing names by using package name and version directly, e.g. | https://ghc.haskell.org/trac/ghc/ticket/10279 | | We now propose the following changes to the TH API in order to track | these changes: | | 1. Currently, a TH NameG contains a PkgName, defined as: | | newtype PkgName = PkgName String | | This is badly misleading, even in the old world order, since | these needed version numbers as well. We propose that this be | renamed to PkgKey: | | newtype PkgKey = PkgKey String | mkPackageKey :: String -> PackageKey | mkPackageKey = PkgKey | | 2. Package keys are somewhat hard to synthesize, so we also | offer an API for querying the package database of the GHC which | is compiling your code for information about packages. 
So, | we introduce a new abstract data type: | | data Package | packageKey :: Package -> PkgKey | | and some functions for getting packages: | | searchPackage :: String -- Package name | -> String -- Version | -> Q [Package] | | reifyPackage :: PkgKey -> Q Package | | We could add other functions (e.g., return all packages with a | package name). | | 3. Commonly, a user wants to get the package key of the current | package. Following Simon's suggestion, this will be done by | augmenting ModuleInfo: | | data ModuleInfo = | ModuleInfo { mi_this_mod :: Module -- new | , mi_imports :: [Module] } | | We'll also add a function for accessing the module package key: | | modulePackageKey :: Module -> PkgKey | | And a convenience function for accessing the current module: | | thisPackageKey :: Q PkgKey | thisPackageKey = fmap (modulePackageKey . mi_this_mod) | qReifyModule | | thisPackage :: Q Package | thisPackage = reifyPackage =<< thisPackageKey | | Discussion period: 1 month | | Thanks, | Edward | | (apologies to cc'd folks, I sent from my wrong email address) | _______________________________________________ | Libraries mailing list | Libraries at haskell.org | http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries From simonpj at microsoft.com Tue May 5 08:15:50 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 5 May 2015 08:15:50 +0000 Subject: Template Haskell changes to names and package keys In-Reply-To: References: <1430500118-sup-420@sabre> <20150502093814.GA1771@24f89f8c-e6a1-4e75-85ee-bb8a3743bb9f> Message-ID: <50ae1abd4ebf4c54ad679f4a5b74e7ca@DB4PR30MB030.064d.mgd.msft.net> I didn't search for that, since I was searching for explicit uses of PkgName. The results for NameG are pretty interesting, though: https://github.com/search?l=haskell&q=NameG&type=Code&utf8=%E2%9C%93 It seems like most of the results are either `Lift`ing a PkgName or re-using a PkgName from another name. 
However, there are a fair number of uses which explicitly construct the name, though. In particular, there are a fair number of usages of this to generate references to the tuple constructors in GHC.Prim. Very good exercise! Looking for how an existing API is used (perhaps in a clumsy way, because of the inadequacies of the existing API) is a good guide to improving it. e.g. If tuple names are an issue, let's provide a TH API for getting their names!! Simon From: Libraries [mailto:libraries-bounces at haskell.org] On Behalf Of Michael Sloan Sent: 02 May 2015 23:46 To: Bertram Felgenhauer Cc: Haskell Libraries Subject: Re: Template Haskell changes to names and package keys On Sat, May 2, 2015 at 2:38 AM, Bertram Felgenhauer > wrote: Hi Michael, > Being able to get all the packages that have a particular name sounds > good! Note that if "-hide-package" / "-hide-all-packages" is used, I > wouldn't want those packages in the results. Same here. > Like Dan, I'm a little less keen on #1. Here's why: > [...] > > * I couldn't find any examples of its usage that would be broken by this > new semantics. I've done a search on github[1] to find usages of PkgName, > and I only found one use[2] that uses PkgName which would be affected by > this change. This example is in the TH lib itself, and so would hopefully > be fixed by this change. Did you find any uses where an existing PkgName is reused to construct a new name? I believe this would be an important data point for deciding whether renaming the constructor is a good idea or not. Basically I'm asking whether the majority of uses of PkgName "out in the wild" right now is correct or incorrect. I didn't search for that, since I was searching for explicit uses of PkgName. The results for NameG are pretty interesting, though: https://github.com/search?l=haskell&q=NameG&type=Code&utf8=%E2%9C%93 It seems like most of the results are either `Lift`ing a PkgName or re-using a PkgName from another name.
However, a fair number of uses do explicitly construct the name. In particular, there are a fair number of usages of this to generate references to the tuple constructors in GHC.Prim. Interestingly, these usages of (mkPkgName "ghc-prim") don't have version numbers at all. This implies that the behavior is to select the version of the package which is currently being used. Considering how much more convenient this is than manually looking up package names, I'm strongly in favor of keeping PkgName and its current behavior, and extending it to handle package keys. Ideally, also documenting the different formats it expects. In particular, "name", "name-version", and "name-version-key" (I'm guessing). The issue here is that we may be linking against two different packages which are named the same thing, right? This was always a possibility, and having package keys just makes it so that we need the ability to disambiguate packages beyond version numbers. So, how about this: for the rare-ish case where we're linking against two different versions of the foo package, and the TH manually generates a (PkgName "foo"), we generate an ambiguity error. If we force users to search for and specify fully unambiguous package names then that pushes the boilerplate of package selection out to TH usage. It seems like pretty much everyone will want "select the version of the package that I'm using." That said, I'm also in favor of adding the function mentioned in #2, for those cases where the default isn't sufficient (I'm anticipating very few of these). -Michael Cheers, Bertram _______________________________________________ Libraries mailing list Libraries at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries -------------- next part -------------- An HTML attachment was scrubbed...
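For readers following along, the hand-construction pattern Michael's search turned up can be reproduced with the stable parts of the template-haskell API. The tuple constructor itself lives in module GHC.Tuple of the ghc-prim package; under the pre-7.10 scheme the PkgName string was a package name (wired-in packages like ghc-prim carry no version), and under 7.10 that same field must hold a package key, which is exactly the breakage under discussion.

```haskell
-- Hand-constructing a global Name, as found "in the wild".
import Language.Haskell.TH.Syntax

pairCon :: Name
pairCon = Name (OccName "(,)")
               (NameG DataName (PkgName "ghc-prim") (ModName "GHC.Tuple"))
```

Nothing here consults the package database: the PkgName string is taken on faith, which is why such names silently broke when its meaning changed from package name to package key.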
URL: From 2haskell at pkturner.org Tue May 5 11:22:41 2015 From: 2haskell at pkturner.org (Scott Turner) Date: Tue, 05 May 2015 07:22:41 -0400 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: <1430727263.1930.9.camel@joachim-breitner.de> <5547AF0B.2040201@artyom.me> <5547B33D.4000205@artyom.me> Message-ID: <5548A801.2070205@pkturner.org> On 2015-05-05 00:54, Levent Erkok wrote: > I can see at least two designs: > > * One where the rounding mode goes with the operation: `fpAdd > RoundNearestTiesToEven 2.5 6.4`. This is the "cleanest" and the > functional solution, but could get quite verbose; and might be costly > if the implementation changes the rounding-mode at every issue. > > * The other is where the operations simply assume the > RoundNearestTiesToEven, but we have lifted IO versions that can be > modified with a "with" like construct: `withRoundingMode > RoundTowardsPositive $ fpAddRM 2.5 6.4`. Note that `fpAddRM` (*not* > `fpAdd` as before) will have to return some sort of a monadic value > (probably in the IO monad) since it'll need to access the rounding > mode currently active. > > Neither choice jumps out at me as the best one; and a hybrid might > also be possible. I'd love to hear any insight you gain regarding > rounding-modes during your experiment. The monadic alternative is more readily extensible to handle IEEE 754's sticky flags: inexact, overflow, underflow, divide-by-zero, and invalid. 
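The monadic alternative Scott describes can be sketched in a few lines. Every name below is hypothetical: a real implementation would set the hardware rounding mode via the FFI, and fpAddRM merely records an Inexact flag to show how sticky flags would thread through.

```haskell
-- Hypothetical sketch of a monadic FP API carrying a rounding mode
-- and accumulating IEEE-style sticky flags.
data RoundingMode = ToNearest | TowardPositive | TowardNegative | TowardZero
  deriving (Eq, Show)

data Flag = Inexact | Overflow | Underflow | DivideByZero | Invalid
  deriving (Eq, Show)

-- An FP computation reads the ambient mode and accumulates flags.
newtype FP a = FP { runFP :: RoundingMode -> (a, [Flag]) }

instance Functor FP where
  fmap f m = FP $ \r -> let (a, fs) = runFP m r in (f a, fs)

instance Applicative FP where
  pure a = FP $ \_ -> (a, [])
  mf <*> ma = FP $ \r ->
    let (f, fs1) = runFP mf r
        (a, fs2) = runFP ma r
    in (f a, fs1 ++ fs2)

instance Monad FP where
  m >>= k = FP $ \r ->
    let (a, fs1) = runFP m r
        (b, fs2) = runFP (k a) r
    in (b, fs1 ++ fs2)

-- The "with"-style construct from the second design:
withRoundingMode :: RoundingMode -> FP a -> FP a
withRoundingMode r m = FP $ \_ -> runFP m r

currentMode :: FP RoundingMode
currentMode = FP $ \r -> (r, [])

-- Fake operation: adds, and records that the result was rounded.
fpAddRM :: Double -> Double -> FP Double
fpAddRM x y = FP $ \_ -> (x + y, [Inexact])
```

Because the flags ride along in the monad, they stay sticky across a sequence of operations, which is the extensibility point Scott raises against the purely mode-per-operation design.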
From carter.schonwald at gmail.com Tue May 5 11:40:37 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 5 May 2015 07:40:37 -0400 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: <1430727263.1930.9.camel@joachim-breitner.de> <5547AF0B.2040201@artyom.me> <5547B33D.4000205@artyom.me> Message-ID: Hey Levent, I actually looked into how to do rounding mode setting a while ago, and the conclusion I came to is that those can simply be ffi calls at the top level that do a sort of with mode bracketing. Or at least I'm not sure if setting the mode in an inner loop is a good idea. That said, you are making a valid point, and I will investigate to what extent compiler support is useful for the latter. If bracketed mode setting and unsetting has a small enough performance overhead, adding support in ghc primops would be worth while. Note that those primops would have to be modeled as doing something thats like io or st, so that when mode switches happen can be predictable. Otherwise CSE and related optimizations could result in evaluating the same code in the wrong mode. I'll think through how that can be avoided, as I do have some ideas. I suspect mode switching code will wind up using new type wrapped floats and doubles that have a phantom index for the mode, and something like "runWithModeFoo:: Num a => Mode m->(forall s . Moded s a ) -> a" to make sure mode choices happen predictably. That said, there might be a better approach that we'll come to after some experimenting On May 5, 2015 12:54 AM, "Levent Erkok" wrote: > Carter: Wall of text is just fine! > > I'm personally happy to see the results of your experiment. In particular, > the better "code-generation" facilities you add around floats/doubles that > map to the underlying hardware's native instructions, the better. When we > do have proper IEEE floats, we shall surely need all that functionality. 
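The phantom-index idea Carter sketches can be mocked up today without compiler support. All names here are hypothetical: the arithmetic just uses the default mode, where real mode-aware primops would do the work, and the rank-2 runner marks where the FFI set/restore bracket would go.

```haskell
{-# LANGUAGE DataKinds, KindSignatures, RankNTypes #-}

-- Mock-up of the "Moded s a" idea: the phantom index m pins a
-- computation to one rounding mode.
data Mode = ToNearest | TowardZero | TowardPositive | TowardNegative

newtype Moded (m :: Mode) a = Moded { unModed :: a }

-- Lifted operations; a real version would use mode-aware primops.
madd, mmul :: Num a => Moded m a -> Moded m a -> Moded m a
madd (Moded x) (Moded y) = Moded (x + y)
mmul (Moded x) (Moded y) = Moded (x * y)

-- Because m is universally quantified, the computation can neither
-- observe nor leak the mode, so bracketing the (hypothetical) FFI mode
-- switch around the call stays predictable -- the CSE concern above.
runWithMode :: Mode -> (forall (m :: Mode). Moded m a) -> a
runWithMode _mode x = unModed x
```

This is the same trick as runST: the rank-2 type prevents a mode-tagged value from escaping the region in which its mode is set.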
> > While you're working on this, if you can also watch out for how rounding > modes can be integrated into the operations, that would be useful as well. > I can see at least two designs: > > * One where the rounding mode goes with the operation: `fpAdd > RoundNearestTiesToEven 2.5 6.4`. This is the "cleanest" and the functional > solution, but could get quite verbose; and might be costly if the > implementation changes the rounding-mode at every issue. > > * The other is where the operations simply assume the > RoundNearestTiesToEven, but we have lifted IO versions that can be modified > with a "with" like construct: `withRoundingMode RoundTowardsPositive $ > fpAddRM 2.5 6.4`. Note that `fpAddRM` (*not* `fpAdd` as before) will have > to return some sort of a monadic value (probably in the IO monad) since > it'll need to access the rounding mode currently active. > > Neither choice jumps out at me as the best one; and a hybrid might also be > possible. I'd love to hear any insight you gain regarding rounding-modes > during your experiment. > > -Levent. > > On Mon, May 4, 2015 at 7:54 PM, Carter Schonwald < > carter.schonwald at gmail.com> wrote: > >> pardon the wall of text everyone, but I really want some FMA tooling :) >> >> I am going to spend some time later this week and next adding FMA primops >> to GHC and playing around with different ways to add it to Num (which seems >> pretty straightforward, though I think we'd all agree it shouldn't be >> exported by Prelude). And then depending on how Yitzchak's reproposal of >> that exactly goes (or some iteration thereof) we can get something >> useful/usable into 7.12 >> >> i have codes (ie *dotproducts*!!!!!) 
where a faster direct FMA for *exact >> numbers* and a higher-precision FMA for *approximate numbers* (*ie >> floating point*) would help, and where I can't sanely use FMA if it lives anywhere >> but Num unless I rub typeable everywhere and do runtime type checks for >> applicable floating point types, which kinda destroys parametricity in >> engineering nice things. >> >> @levent: ghc doesn't do any optimization for floating point arithmetic >> (aside from 1-2 very simple things that are possibly questionable), and >> until ghc has support for precisely emulating high precision floating point >> computation in a portable way, probably won't have any interesting floating >> point computation. Mandating that fma a b c === a*b+c for inexact number >> datatypes doesn't quite make sense to me. Relatedly, it's a GOOD thing ghc >> is conservative about optimizing floating point, because it makes doing >> correct stability analyses tractable! I look forward to the day that GHC >> gets a bit more sophisticated about optimizing floating point computation, >> but that day is still a ways off. >> >> relatedly: FMA for float and double are not generally going to be faster >> than the individual primitive operations, merely more accurate when used >> carefully. >> >> point being*, i'm +1 on adding some manner of FMA operations to Num* >> (only sane place to put it where i can actually use it for a general use >> library) and i don't really care if we name it fusedMultiplyAdd, >> multiplyAndAdd, accursedFusionOfSemiRingOperations, or fma. i'd favor >> "fusedMultiplyAdd" if we want a descriptive name that will be familiar to >> experts yet easy to google for the curious.
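The "merely more accurate" point can be made concrete. Since base's Double offers no fma primitive today, the exact answer below comes from Rational, but it exhibits exactly the low-order term that a single-rounding fused multiply-add would preserve.

```haskell
-- a is exactly representable; mathematically a*a = 1 + 2^-29 + 2^-60,
-- but rounding a*a to Double's 53-bit significand drops the 2^-60 term
-- before the subtraction.  An fma (one rounding, at the end) would keep it.
a :: Double
a = 1 + 2 ^^ (-30)

naive :: Double
naive = a * a - 1          -- a*a rounds first, so this is exactly 2^-29

exact :: Rational
exact = toRational a * toRational a - 1   -- 2^-29 + 2^-60
```

The subtraction itself is exact (Sterbenz's lemma applies), so all of the error comes from the intermediate rounding of a*a, which is precisely what fma eliminates.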
>> to repeat: i'm going to do some leg work so that the double and float >> prims are portably exposed by ghc-prim (i've spoken with several ghc devs >> about that, and they agree to its value, and that's a decision outside of >> scope of the libraries purview), and I do hope we can come to a consensus about >> putting it in Num so that expert library authors can upgrade the guarantees >> that they can provide end users without imposing any breaking changes to >> end users. >> >> A number of folks have brought up "but Num is broken" as a counter >> argument to adding FMA support to Num. I emphatically agree num is borken >> :), BUT! I do also believe that fixing up the Num prelude has the burden of >> providing a whole-cloth design for an alternative that we can get >> broad consensus/adoption with. That will happen by dint of actual >> experimentation and usage. >> >> Point being, adding FMA doesn't further entrench current Num any more >> than it already is, it just provides expert library authors with a >> transparent way of improving the experience of their users with a free >> upgrade in answer accuracy if used carefully. Additionally, when Num's >> "semiring-ish equational laws" are framed with respect to approximate >> forwards/backwards stability, there is a perfectly reasonable law for FMA. >> I am happy to spend some time trying to write that up more precisely IFF >> that will tilt those in opposition to being in favor. >> >> I don't need FMA to be exposed by *prelude/base*, merely by *GHC.Num* as >> a method therein for Num. If that constitutes a different and *more >> palatable proposal* than what people have articulated so far (by >> discouraging casual use by dint of hiding) then I am happy to kick off a >> new thread with that concrete design choice. >> >> If there's a counter argument that's a bit more substantive than "Num is >> for exact arithmetic" or "Num is wrong" that will sway me to the other >> side, i'm all ears, but i'm skeptical of that.
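The shape being proposed here, a method with the default a*b+c that instances are free to specialize, can be prototyped in userland. MulAdd and Mod7 below are stand-ins invented for this sketch; the proposal itself would put the method directly in Num.

```haskell
-- Userland prototype of the proposed method-with-default.
class Num a => MulAdd a where
  mulAdd :: a -> a -> a -> a
  mulAdd a b c = a * b + c   -- default: exactly a*b+c, semantics unchanged

instance MulAdd Double       -- uses the default

-- Modular arithmetic is the non-floating-point payoff mentioned above:
-- the specialized instance gets away with a single `mod`.
newtype Mod7 = Mod7 Integer deriving (Eq, Show)

instance Num Mod7 where
  Mod7 x + Mod7 y = Mod7 ((x + y) `mod` 7)
  Mod7 x - Mod7 y = Mod7 ((x - y) `mod` 7)
  Mod7 x * Mod7 y = Mod7 ((x * y) `mod` 7)
  abs             = id
  signum (Mod7 0) = Mod7 0
  signum _        = Mod7 1
  fromInteger n   = Mod7 (n `mod` 7)

instance MulAdd Mod7 where
  mulAdd (Mod7 x) (Mod7 y) (Mod7 z) = Mod7 ((x * y + z) `mod` 7)
```

The default keeps the law mulAdd a b c == a*b+c for any instance that doesn't override it, while Mod7 shows the "single mod" saving and Double is where an fma-backed override would slot in.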
>> >> I emphatically support those who are displeased with Num to prototype >> some alternative designs in userland, I do think it'd be great to figure >> out a new Num prelude we can migrate Haskell / GHC to over the next 2-5 >> years, but again any such proposal really needs to be realized whole cloth >> before it makes its way to being a libraries list proposal. >> >> >> again, pardon the wall of text, i just really want to have nice things :) >> -Carter >> >> >> On Mon, May 4, 2015 at 2:22 PM, Levent Erkok wrote: >> >>> I think `mulAdd a b c` should be implemented as `a*b+c` even for >>> Double/Float. It should only be an "optmization" (as in modular >>> arithmetic), not a semantic changing operation. Thus justifying the >>> optimization. >>> >>> "fma" should be the "more-precise" version available for Float/Double. I >>> don't think it makes sense to have "fma" for other types. That's why I'm >>> advocating "mulAdd" to be part of "Num" for optimization purposes; and >>> "fma" reserved for true IEEE754 types and semantics. >>> >>> I understand that Edward doesn't like this as this requires a different >>> class; but really, that's the price to pay if we claim Haskell has proper >>> support for IEEE754 semantics. (Which I think it should.) The operation is >>> just different. It also should account for the rounding-modes properly. >>> >>> I think we can pull this off just fine; and Haskell can really lead the >>> pack here. The situation with floats is even worse in other languages. This >>> is our chance to make a proper implementation, and we have the right tools >>> to do so. >>> >>> -Levent. >>> >>> On Mon, May 4, 2015 at 10:58 AM, Artyom wrote: >>> >>>> On 05/04/2015 08:49 PM, Levent Erkok wrote: >>>> >>>> Artyom: That's precisely the point. The true IEEE754 variants where >>>> precision does matter should be part of a different class. 
What Edward and >>>> Yitz want is an "optimized" multiply-add where the semantics is the same >>>> but one that goes faster. >>>> >>>> No, it looks to me that Edward wants to have a more precise operation >>>> in Num: >>>> >>>> I'd have to make a second copy of the function to even try to see the >>>> precision win. >>>> >>>> Unless I'm wrong, you can't have the following things simultaneously: >>>> >>>> 1. the compiler is free to substitute *a+b*c* with *mulAdd a b c* >>>> 2. *mulAdd a b c* is implemented as *fma* for Doubles (and is more >>>> precise) >>>> 3. Num operations for Double (addition and multiplication) always >>>> conform to IEEE754 >>>> >>>> The true IEEE754 variants where precision does matter should be part >>>> of a different class. >>>> >>>> So, does it mean that you're fine with not having point #3 because >>>> people who need it would be able to use a separate class for IEEE754 floats? >>>> >>>> >>> >>> _______________________________________________ >>> Libraries mailing list >>> Libraries at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Tue May 5 12:16:33 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 5 May 2015 08:16:33 -0400 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: <1430727263.1930.9.camel@joachim-breitner.de> <5547AF0B.2040201@artyom.me> <5547B33D.4000205@artyom.me> Message-ID: To clarify: I think theres a bit of an open design question how the explicitly moded api would look. I'd suspect it'll look somewhat like Ed's AD lib, and should be in a userland library I think. On May 5, 2015 7:40 AM, "Carter Schonwald" wrote: > Hey Levent, > I actually looked into how to do rounding mode setting a while ago, and > the conclusion I came to is that those can simply be ffi calls at the top > level that do a sort of with mode bracketing. 
Or at least I'm not sure if > setting the mode in an inner loop is a good idea. > > That said, you are making a valid point, and I will investigate to what > extent compiler support is useful for the latter. If bracketed mode setting > and unsetting has a small enough performance overhead, adding support in > ghc primops would be worth while. Note that those primops would have to be > modeled as doing something thats like io or st, so that when mode switches > happen can be predictable. Otherwise CSE and related optimizations could > result in evaluating the same code in the wrong mode. I'll think through > how that can be avoided, as I do have some ideas. > > I suspect mode switching code will wind up using new type wrapped floats > and doubles that have a phantom index for the mode, and something like > "runWithModeFoo:: Num a => Mode m->(forall s . Moded s a ) -> a" to > make sure mode choices happen predictably. That said, there might be a > better approach that we'll come to after some experimenting > On May 5, 2015 12:54 AM, "Levent Erkok" wrote: > >> Carter: Wall of text is just fine! >> >> I'm personally happy to see the results of your experiment. In >> particular, the better "code-generation" facilities you add around >> floats/doubles that map to the underlying hardware's native instructions, >> the better. When we do have proper IEEE floats, we shall surely need all >> that functionality. >> >> While you're working on this, if you can also watch out for how rounding >> modes can be integrated into the operations, that would be useful as well. >> I can see at least two designs: >> >> * One where the rounding mode goes with the operation: `fpAdd >> RoundNearestTiesToEven 2.5 6.4`. This is the "cleanest" and the functional >> solution, but could get quite verbose; and might be costly if the >> implementation changes the rounding-mode at every issue. 
>> >> * The other is where the operations simply assume the >> RoundNearestTiesToEven, but we have lifted IO versions that can be modified >> with a "with" like construct: `withRoundingMode RoundTowardsPositive $ >> fpAddRM 2.5 6.4`. Note that `fpAddRM` (*not* `fpAdd` as before) will have >> to return some sort of a monadic value (probably in the IO monad) since >> it'll need to access the rounding mode currently active. >> >> Neither choice jumps out at me as the best one; and a hybrid might also >> be possible. I'd love to hear any insight you gain regarding rounding-modes >> during your experiment. >> >> -Levent. >> >> On Mon, May 4, 2015 at 7:54 PM, Carter Schonwald < >> carter.schonwald at gmail.com> wrote: >> >>> pardon the wall of text everyone, but I really want some FMA tooling :) >>> >>> I am going to spend some time later this week and next adding FMA >>> primops to GHC and playing around with different ways to add it to Num >>> (which seems pretty straightforward, though I think we'd all agree it >>> shouldn't be exported by Prelude). And then depending on how Yitzchak's >>> reproposal of that exactly goes (or some iteration thereof) we can get >>> something useful/usable into 7.12 >>> >>> i have codes (ie *dotproducts*!!!!!) where a faster direct FMA for *exact >>> numbers*, and a higher precision FMA for *approximate numbers *(*ie >>> floating point*), and where I cant sanely use FMA if it lives anywhere >>> but Num unless I rub typeable everywhere and do runtime type checks for >>> applicable floating point types, which kinda destroys parametrically in >>> engineering nice things. >>> >>> @levent: ghc doesn't do any optimization for floating point arithmetic >>> (aside from 1-2 very simple things that are possibly questionable), and >>> until ghc has support for precisly emulating high precision floating point >>> computation in a portable way, probably wont have any interesting floating >>> point computation. 
Mandating that fma a b c === a*b+c for inexact number >>> datatypes doesn't quite make sense to me. Relatedly, its a GOOD thing ghc >>> is conservative about optimizing floating point, because it makes doing >>> correct stability analyses tractable! I look forward to the day that GHC >>> gets a bit more sophisticated about optimizing floating point computation, >>> but that day is still a ways off. >>> >>> relatedly: FMA for float and double are not generally going to be faster >>> than the individual primitive operations, merely more accurate when used >>> carefully. >>> >>> point being*, i'm +1 on adding some manner of FMA operations to Num* >>> (only sane place to put it where i can actually use it for a general use >>> library) and i dont really care if we name it fusedMultiplyAdd, >>> multiplyAndAdd accursedFusionOfSemiRingOperations, or fma. i'd favor >>> "fusedMultiplyAdd" if we want a descriptive name that will be familiar to >>> experts yet easy to google for the curious. >>> >>> to repeat: i'm going to do some leg work so that the double and float >>> prims are portably exposed by ghc-prims (i've spoken with several ghc devs >>> about that, and they agree to its value, and thats a decision outside of >>> scope of the libraries purview), and I do hope we can to a consensus about >>> putting it in Num so that expert library authors can upgrade the guarantees >>> that they can provide end users without imposing any breaking changes to >>> end users. >>> >>> A number of folks have brought up "but Num is broken" as a counter >>> argument to adding FMA support to Num. I emphatically agree num is borken >>> :), BUT! I do also believe that fixing up Num prelude has the burden of >>> providing a whole cloth design for an alternative design that we can get >>> broad consensus/adoption with. That will happen by dint of actually >>> experimentation and usage. 
>>> >>> Point being, adding FMA doesn't further entrench current Num any more >>> than it already is, it just provides expert library authors with a >>> transparent way of improving the experience of their users with a free >>> upgrade in answer accuracy if used carefully. Additionally, when Num's >>> "semiring ish equational laws" are framed with respect to approximate >>> forwards/backwards stability, there is a perfectly reasonable law for FMA. >>> I am happy to spend some time trying to write that up more precisely IFF >>> that will tilt those in opposition to being in favor. >>> >>> I dont need FMA to be exposed by *prelude/base*, merely by *GHC.Num* as >>> a method therein for Num. If that constitutes a different and *more >>> palatable proposal* than what people have articulated so far (by >>> discouraging casual use by dint of hiding) then I am happy to kick off a >>> new thread with that concrete design choice. >>> >>> If theres a counter argument thats a bit more substantive than "Num is >>> for exact arithmetic" or "Num is wrong" that will sway me to the other >>> side, i'm all ears, but i'm skeptical of that. >>> >>> I emphatically support those who are displeased with Num to prototype >>> some alternative designs in userland, I do think it'd be great to figure >>> out a new Num prelude we can migrate Haskell / GHC to over the next 2-5 >>> years, but again any such proposal really needs to be realized whole cloth >>> before it makes its way to being a libraries list proposal. >>> >>> >>> again, pardon the wall of text, i just really want to have nice things >>> :) >>> -Carter >>> >>> >>> On Mon, May 4, 2015 at 2:22 PM, Levent Erkok wrote: >>> >>>> I think `mulAdd a b c` should be implemented as `a*b+c` even for >>>> Double/Float. It should only be an "optmization" (as in modular >>>> arithmetic), not a semantic changing operation. Thus justifying the >>>> optimization. >>>> >>>> "fma" should be the "more-precise" version available for Float/Double. 
>>>> I don't think it makes sense to have "fma" for other types. That's why I'm >>>> advocating "mulAdd" to be part of "Num" for optimization purposes; and >>>> "fma" reserved for true IEEE754 types and semantics. >>>> >>>> I understand that Edward doesn't like this as this requires a different >>>> class; but really, that's the price to pay if we claim Haskell has proper >>>> support for IEEE754 semantics. (Which I think it should.) The operation is >>>> just different. It also should account for the rounding-modes properly. >>>> >>>> I think we can pull this off just fine; and Haskell can really lead the >>>> pack here. The situation with floats is even worse in other languages. This >>>> is our chance to make a proper implementation, and we have the right tools >>>> to do so. >>>> >>>> -Levent. >>>> >>>> On Mon, May 4, 2015 at 10:58 AM, Artyom wrote: >>>> >>>>> On 05/04/2015 08:49 PM, Levent Erkok wrote: >>>>> >>>>> Artyom: That's precisely the point. The true IEEE754 variants where >>>>> precision does matter should be part of a different class. What Edward and >>>>> Yitz want is an "optimized" multiply-add where the semantics is the same >>>>> but one that goes faster. >>>>> >>>>> No, it looks to me that Edward wants to have a more precise operation >>>>> in Num: >>>>> >>>>> I'd have to make a second copy of the function to even try to see the >>>>> precision win. >>>>> >>>>> Unless I'm wrong, you can't have the following things simultaneously: >>>>> >>>>> 1. the compiler is free to substitute *a+b*c* with *mulAdd a b c* >>>>> 2. *mulAdd a b c* is implemented as *fma* for Doubles (and is more >>>>> precise) >>>>> 3. Num operations for Double (addition and multiplication) always >>>>> conform to IEEE754 >>>>> >>>>> The true IEEE754 variants where precision does matter should be part >>>>> of a different class. 
>>>>> >>>>> So, does it mean that you're fine with not having point #3 because >>>>> people who need it would be able to use a separate class for IEEE754 floats? >>>>> >>>>> >>>> >>>> _______________________________________________ >>>> Libraries mailing list >>>> Libraries at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>>> >>>> >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Tue May 5 12:27:38 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 5 May 2015 08:27:38 -0400 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: <1430727263.1930.9.camel@joachim-breitner.de> <5547AF0B.2040201@artyom.me> <5547B33D.4000205@artyom.me> Message-ID: Hrm, now that I've thought about it a wee bit more, perhaps the rounding mode info needs to be attached to ghc threads, otherwise there will be some fun bugs in multithreaded code that uses multiple rounding modes. I'll do some investigation. On May 5, 2015 8:16 AM, "Carter Schonwald" wrote: > To clarify: I think theres a bit of an open design question how the > explicitly moded api would look. I'd suspect it'll look somewhat like Ed's > AD lib, and should be in a userland library I think. > On May 5, 2015 7:40 AM, "Carter Schonwald" > wrote: > >> Hey Levent, >> I actually looked into how to do rounding mode setting a while ago, and >> the conclusion I came to is that those can simply be ffi calls at the top >> level that do a sort of with mode bracketing. Or at least I'm not sure if >> setting the mode in an inner loop is a good idea. >> >> That said, you are making a valid point, and I will investigate to what >> extent compiler support is useful for the latter. If bracketed mode setting >> and unsetting has a small enough performance overhead, adding support in >> ghc primops would be worth while.
Note that those primops would have to be >> modeled as doing something thats like io or st, so that when mode switches >> happen can be predictable. Otherwise CSE and related optimizations could >> result in evaluating the same code in the wrong mode. I'll think through >> how that can be avoided, as I do have some ideas. >> >> I suspect mode switching code will wind up using new type wrapped floats >> and doubles that have a phantom index for the mode, and something like >> "runWithModeFoo:: Num a => Mode m->(forall s . Moded s a ) -> a" to >> make sure mode choices happen predictably. That said, there might be a >> better approach that we'll come to after some experimenting >> On May 5, 2015 12:54 AM, "Levent Erkok" wrote: >> >>> Carter: Wall of text is just fine! >>> >>> I'm personally happy to see the results of your experiment. In >>> particular, the better "code-generation" facilities you add around >>> floats/doubles that map to the underlying hardware's native instructions, >>> the better. When we do have proper IEEE floats, we shall surely need all >>> that functionality. >>> >>> While you're working on this, if you can also watch out for how rounding >>> modes can be integrated into the operations, that would be useful as well. >>> I can see at least two designs: >>> >>> * One where the rounding mode goes with the operation: `fpAdd >>> RoundNearestTiesToEven 2.5 6.4`. This is the "cleanest" and the functional >>> solution, but could get quite verbose; and might be costly if the >>> implementation changes the rounding-mode at every issue. >>> >>> * The other is where the operations simply assume the >>> RoundNearestTiesToEven, but we have lifted IO versions that can be modified >>> with a "with" like construct: `withRoundingMode RoundTowardsPositive $ >>> fpAddRM 2.5 6.4`. 
Note that `fpAddRM` (*not* `fpAdd` as before) will have >>> to return some sort of a monadic value (probably in the IO monad) since >>> it'll need to access the rounding mode currently active. >>> >>> Neither choice jumps out at me as the best one; and a hybrid might also >>> be possible. I'd love to hear any insight you gain regarding rounding-modes >>> during your experiment. >>> >>> -Levent. >>> >>> On Mon, May 4, 2015 at 7:54 PM, Carter Schonwald < >>> carter.schonwald at gmail.com> wrote: >>> >>>> pardon the wall of text everyone, but I really want some FMA tooling :) >>>> >>>> I am going to spend some time later this week and next adding FMA >>>> primops to GHC and playing around with different ways to add it to Num >>>> (which seems pretty straightforward, though I think we'd all agree it >>>> shouldn't be exported by Prelude). And then depending on how Yitzchak's >>>> reproposal of that exactly goes (or some iteration thereof) we can get >>>> something useful/usable into 7.12 >>>> >>>> i have codes (ie *dotproducts*!!!!!) where a faster direct FMA for *exact >>>> numbers*, and a higher precision FMA for *approximate numbers *(*ie >>>> floating point*), and where I cant sanely use FMA if it lives >>>> anywhere but Num unless I rub typeable everywhere and do runtime type >>>> checks for applicable floating point types, which kinda destroys >>>> parametrically in engineering nice things. >>>> >>>> @levent: ghc doesn't do any optimization for floating point arithmetic >>>> (aside from 1-2 very simple things that are possibly questionable), and >>>> until ghc has support for precisly emulating high precision floating point >>>> computation in a portable way, probably wont have any interesting floating >>>> point computation. Mandating that fma a b c === a*b+c for inexact number >>>> datatypes doesn't quite make sense to me. 
Relatedly, its a GOOD thing ghc >>>> is conservative about optimizing floating point, because it makes doing >>>> correct stability analyses tractable! I look forward to the day that GHC >>>> gets a bit more sophisticated about optimizing floating point computation, >>>> but that day is still a ways off. >>>> >>>> relatedly: FMA for float and double are not generally going to be >>>> faster than the individual primitive operations, merely more accurate when >>>> used carefully. >>>> >>>> point being*, i'm +1 on adding some manner of FMA operations to Num* >>>> (only sane place to put it where i can actually use it for a general use >>>> library) and i dont really care if we name it fusedMultiplyAdd, >>>> multiplyAndAdd accursedFusionOfSemiRingOperations, or fma. i'd favor >>>> "fusedMultiplyAdd" if we want a descriptive name that will be familiar to >>>> experts yet easy to google for the curious. >>>> >>>> to repeat: i'm going to do some leg work so that the double and float >>>> prims are portably exposed by ghc-prims (i've spoken with several ghc devs >>>> about that, and they agree to its value, and thats a decision outside of >>>> scope of the libraries purview), and I do hope we can to a consensus about >>>> putting it in Num so that expert library authors can upgrade the guarantees >>>> that they can provide end users without imposing any breaking changes to >>>> end users. >>>> >>>> A number of folks have brought up "but Num is broken" as a counter >>>> argument to adding FMA support to Num. I emphatically agree num is borken >>>> :), BUT! I do also believe that fixing up Num prelude has the burden of >>>> providing a whole cloth design for an alternative design that we can get >>>> broad consensus/adoption with. That will happen by dint of actually >>>> experimentation and usage. 
>>>>
>>>> Point being, adding FMA doesn't further entrench current Num any more than it already is; it just provides expert library authors with a transparent way of improving the experience of their users, with a free upgrade in answer accuracy if used carefully. Additionally, when Num's "semiring-ish equational laws" are framed with respect to approximate forwards/backwards stability, there is a perfectly reasonable law for FMA. I am happy to spend some time trying to write that up more precisely IFF that will tilt those in opposition to being in favor.
>>>>
>>>> I don't need FMA to be exposed by *prelude/base*, merely by *GHC.Num* as a method therein for Num. If that constitutes a different and *more palatable proposal* than what people have articulated so far (by discouraging casual use by dint of hiding), then I am happy to kick off a new thread with that concrete design choice.
>>>>
>>>> If there's a counterargument that's a bit more substantive than "Num is for exact arithmetic" or "Num is wrong" that will sway me to the other side, i'm all ears, but i'm skeptical of that.
>>>>
>>>> I emphatically support those who are displeased with Num prototyping some alternative designs in userland. I do think it'd be great to figure out a new Num prelude we can migrate Haskell / GHC to over the next 2-5 years, but again, any such proposal really needs to be realized whole cloth before it makes its way to being a libraries list proposal.
>>>>
>>>> again, pardon the wall of text, i just really want to have nice things :)
>>>> -Carter
>>>>
>>>> On Mon, May 4, 2015 at 2:22 PM, Levent Erkok wrote:
>>>>
>>>>> I think `mulAdd a b c` should be implemented as `a*b+c` even for Double/Float. It should only be an "optimization" (as in modular arithmetic), not a semantics-changing operation. Thus justifying the optimization.
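The split Levent describes can be sketched in ordinary Haskell. Nothing below exists in base as of this thread: the `MulAdd` class and `mulAdd` name are hypothetical, and the Double instance fakes a fused operation by going through exact Rational arithmetic (a real implementation would use a hardware fma primop):

```haskell
-- Hypothetical sketch: mulAdd sits beside (+) and (*) with a default,
-- so exact types get the "optimization" reading for free.
class Num a => MulAdd a where
  mulAdd :: a -> a -> a -> a
  mulAdd a b c = a * b + c        -- default: plain multiply, then add

instance MulAdd Integer           -- exact: mulAdd really is a*b+c
instance MulAdd Rational

-- For Double, an instance could in principle be the singly rounded
-- fused version.  Emulated here via exact Rational arithmetic; GHC's
-- fromRational rounds the exact product-sum only once.
instance MulAdd Double where
  mulAdd a b c = fromRational (toRational a * toRational b + toRational c)

main :: IO ()
main = do
  print (mulAdd 3 4 (5 :: Integer))    -- 17
  print (mulAdd 2 3 (4 :: Rational))   -- 10 % 1
```

This is the shape of the "default implementation in Num" idea from earlier in the thread: exact types pay nothing, and only the floating-point instances override the default with something more precise.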
>>>>> >>>>> "fma" should be the "more-precise" version available for Float/Double. >>>>> I don't think it makes sense to have "fma" for other types. That's why I'm >>>>> advocating "mulAdd" to be part of "Num" for optimization purposes; and >>>>> "fma" reserved for true IEEE754 types and semantics. >>>>> >>>>> I understand that Edward doesn't like this as this requires a >>>>> different class; but really, that's the price to pay if we claim Haskell >>>>> has proper support for IEEE754 semantics. (Which I think it should.) The >>>>> operation is just different. It also should account for the rounding-modes >>>>> properly. >>>>> >>>>> I think we can pull this off just fine; and Haskell can really lead >>>>> the pack here. The situation with floats is even worse in other languages. >>>>> This is our chance to make a proper implementation, and we have the right >>>>> tools to do so. >>>>> >>>>> -Levent. >>>>> >>>>> On Mon, May 4, 2015 at 10:58 AM, Artyom wrote: >>>>> >>>>>> On 05/04/2015 08:49 PM, Levent Erkok wrote: >>>>>> >>>>>> Artyom: That's precisely the point. The true IEEE754 variants where >>>>>> precision does matter should be part of a different class. What Edward and >>>>>> Yitz want is an "optimized" multiply-add where the semantics is the same >>>>>> but one that goes faster. >>>>>> >>>>>> No, it looks to me that Edward wants to have a more precise operation >>>>>> in Num: >>>>>> >>>>>> I'd have to make a second copy of the function to even try to see the >>>>>> precision win. >>>>>> >>>>>> Unless I'm wrong, you can't have the following things simultaneously: >>>>>> >>>>>> 1. the compiler is free to substitute *a+b*c* with *mulAdd a b c* >>>>>> 2. *mulAdd a b c* is implemented as *fma* for Doubles (and is >>>>>> more precise) >>>>>> 3. Num operations for Double (addition and multiplication) always >>>>>> conform to IEEE754 >>>>>> >>>>>> The true IEEE754 variants where precision does matter should be >>>>>> part of a different class. 
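A small self-contained demonstration (a standard double-rounding example, not taken from the thread) of why points 2 and 3 in Artyom's list cannot both hold: for Double, the twice-rounded `a*b + c` and a singly rounded fused result genuinely differ.

```haskell
-- a is exactly representable; a*a has a 2^-54 tail that IEEE754
-- multiplication rounds away, after which the addition yields 0.
-- A true fma would round the exact a*a + c once and return 2^-54.
main :: IO ()
main = do
  let a = 1 + 2 ^^ (-27)          :: Double
      c = negate (1 + 2 ^^ (-26)) :: Double
      twiceRounded = a * a + c    -- a*a rounds first, then the sum
      exact        = toRational a * toRational a + toRational c
  print twiceRounded                    -- 0.0: the tail is lost
  print (fromRational exact :: Double)  -- 2^-54: what a fused result gives
```

So if `mulAdd` for Double were implemented as fma (point 2), substituting it for `a + b*c` (point 1) would observably change results, breaking point 3.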
>>>>>>
>>>>>> So, does it mean that you're fine with not having point #3, because people who need it would be able to use a separate class for IEEE754 floats?
>>>>>
>>>>> _______________________________________________
>>>>> Libraries mailing list
>>>>> Libraries at haskell.org
>>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries
>>>>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From takenobu.hs at gmail.com  Tue May  5 13:06:27 2015
From: takenobu.hs at gmail.com (Takenobu Tani)
Date: Tue, 5 May 2015 22:06:27 +0900
Subject: Proposal: Add "fma" to the RealFloat class
In-Reply-To:
References: <1430727263.1930.9.camel@joachim-breitner.de> <5547AF0B.2040201@artyom.me> <5547B33D.4000205@artyom.me>
Message-ID:

Hi,

Related information:

Intel's FMA information (hardware dependent) is here:

Chapter 11
Intel 64 and IA-32 Architectures Optimization Reference Manual
http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf

Of course, this is information that depends on the particular processor, and the abstraction level is too low.

PS
I like Haskell's abstract naming convention more than "fma" :-)

Regards,
Takenobu

2015-05-05 11:54 GMT+09:00 Carter Schonwald :

> pardon the wall of text everyone, but I really want some FMA tooling :)
>
> I am going to spend some time later this week and next adding FMA primops to GHC and playing around with different ways to add it to Num (which seems pretty straightforward, though I think we'd all agree it shouldn't be exported by Prelude). And then depending on how Yitzchak's reproposal of that exactly goes (or some iteration thereof) we can get something useful/usable into 7.12
>
> i have codes (ie *dotproducts*!!!!!)
> [...]
>>> [...]
>>
>> _______________________________________________
>> Libraries mailing list
>> Libraries at haskell.org
>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From takenobu.hs at gmail.com  Tue May  5 13:36:35 2015
From: takenobu.hs at gmail.com (Takenobu Tani)
Date: Tue, 5 May 2015 22:36:35 +0900
Subject: Proposal: Add "fma" to the RealFloat class
In-Reply-To:
References: <1430727263.1930.9.camel@joachim-breitner.de> <5547AF0B.2040201@artyom.me> <5547B33D.4000205@artyom.me>
Message-ID:

Hi,

Is this useful?

BLAS (Basic Linear Algebra Subprograms)
http://www.netlib.org/blas/
http://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms

Regards,
Takenobu

2015-05-05 22:06 GMT+09:00 Takenobu Tani:

> Hi,
>
> Related information:
> [...]
>>>
>>> _______________________________________________
>>> Libraries mailing list
>>> Libraries at haskell.org
>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries
>>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From carter.schonwald at gmail.com  Tue May  5 13:52:12 2015
From: carter.schonwald at gmail.com (Carter Schonwald)
Date: Tue, 5 May 2015 09:52:12 -0400
Subject: Proposal: Add "fma" to the RealFloat class
In-Reply-To:
References: <1430727263.1930.9.camel@joachim-breitner.de> <5547AF0B.2040201@artyom.me> <5547B33D.4000205@artyom.me>
Message-ID:

Hey Takenobu,
Yes, both are super useful! I've certainly used the Intel architecture manual a few times, and I wrote/maintain (in my biased opinion) one of the nicer blas ffi bindings on hackage.

It's worth mentioning that for haskellers who are interested in either mathematical computation or performance engineering, the #numerical-haskell channel on freenode is pretty good. Though again, I'm a bit biased about the nice community there.

On Tuesday, May 5, 2015, Takenobu Tani wrote:

> Hi,
>
> Is this useful?
>
> BLAS (Basic Linear Algebra Subprograms)
> http://www.netlib.org/blas/
> http://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms
>
> Regards,
> Takenobu
>
> 2015-05-05 22:06 GMT+09:00 Takenobu Tani:
>
>> Hi,
>>
>> Related information:
>>
>> Intel's FMA information (hardware dependent) is here:
>>
>> Chapter 11
>> Intel 64 and IA-32 Architectures Optimization Reference Manual
>> http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf
>>
>> Of course, it is information that depends on the particular processor.
>> And the abstraction level is too low.
>> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jmaessen at alum.mit.edu  Tue May  5 14:23:26 2015
From: jmaessen at alum.mit.edu (Jan-Willem Maessen)
Date: Tue, 5 May 2015 10:23:26 -0400
Subject: Proposal: Add "fma" to the RealFloat class
In-Reply-To:
References: <1430727263.1930.9.camel@joachim-breitner.de> <5547AF0B.2040201@artyom.me> <5547B33D.4000205@artyom.me>
Message-ID:

On Tue, May 5, 2015 at 8:16 AM, Carter Schonwald wrote:

> To clarify: I think there's a bit of an open design question of how the explicitly moded api would look. I'd suspect it'll look somewhat like Ed's AD lib, and should be in a userland library I think.

Another concern here is laziness. What happens when you force a thunk of type Double inside a "withRoundingMode" kind of construct?

-Jan

> On May 5, 2015 7:40 AM, "Carter Schonwald" wrote:
>> Hey Levent,
>> I actually looked into how to do rounding mode setting a while ago, and the conclusion I came to is that those can simply be ffi calls at the top level that do a sort of with-mode bracketing. Or at least I'm not sure if setting the mode in an inner loop is a good idea.
>>
>> That said, you are making a valid point, and I will investigate to what extent compiler support is useful for the latter. If bracketed mode setting and unsetting has a small enough performance overhead, adding support in ghc primops would be worthwhile. Note that those primops would have to be modeled as doing something that's like IO or ST, so that the points where mode switches happen are predictable. Otherwise CSE and related optimizations could result in evaluating the same code in the wrong mode. I'll think through how that can be avoided, as I do have some ideas.
>>
>> I suspect mode switching code will wind up using newtype-wrapped floats and doubles that have a phantom index for the mode, and something like "runWithModeFoo :: Num a => Mode m -> (forall s. Moded s a) -> a" to make sure mode choices happen predictably. That said, there might be a better approach that we'll come to after some experimenting.
>>
>> On May 5, 2015 12:54 AM, "Levent Erkok" wrote:
>>
>>> Carter: Wall of text is just fine!
>>>
>>> I'm personally happy to see the results of your experiment. In particular, the better "code-generation" facilities you add around floats/doubles that map to the underlying hardware's native instructions, the better. When we do have proper IEEE floats, we shall surely need all that functionality.
>>>
>>> While you're working on this, if you can also watch out for how rounding modes can be integrated into the operations, that would be useful as well. I can see at least two designs:
>>>
>>> * One where the rounding mode goes with the operation: `fpAdd RoundNearestTiesToEven 2.5 6.4`. This is the "cleanest" and the functional solution, but could get quite verbose; and might be costly if the implementation changes the rounding-mode at every issue.
>>>
>>> * The other is where the operations simply assume RoundNearestTiesToEven, but we have lifted IO versions that can be modified with a "with"-like construct: `withRoundingMode RoundTowardsPositive $ >>> fpAddRM 2.5 6.4`.
That said, there might be a >> better approach that we'll come to after some experimenting >> On May 5, 2015 12:54 AM, "Levent Erkok" wrote: >> >>> Carter: Wall of text is just fine! >>> >>> I'm personally happy to see the results of your experiment. In >>> particular, the better "code-generation" facilities you add around >>> floats/doubles that map to the underlying hardware's native instructions, >>> the better. When we do have proper IEEE floats, we shall surely need all >>> that functionality. >>> >>> While you're working on this, if you can also watch out for how rounding >>> modes can be integrated into the operations, that would be useful as well. >>> I can see at least two designs: >>> >>> * One where the rounding mode goes with the operation: `fpAdd >>> RoundNearestTiesToEven 2.5 6.4`. This is the "cleanest" and the functional >>> solution, but could get quite verbose; and might be costly if the >>> implementation changes the rounding-mode at every issue. >>> >>> * The other is where the operations simply assume the >>> RoundNearestTiesToEven, but we have lifted IO versions that can be modified >>> with a "with" like construct: `withRoundingMode RoundTowardsPositive $ >>> fpAddRM 2.5 6.4`. Note that `fpAddRM` (*not* `fpAdd` as before) will have >>> to return some sort of a monadic value (probably in the IO monad) since >>> it'll need to access the rounding mode currently active. >>> >>> Neither choice jumps out at me as the best one; and a hybrid might also >>> be possible. I'd love to hear any insight you gain regarding rounding-modes >>> during your experiment. >>> >>> -Levent. 
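Levent's two candidate designs can be put into hypothetical type signatures. `RoundingMode`, `fpAdd`, `fpAddRM`, and `withRoundingMode` are illustrative names, not a real API, and both bodies are placeholders that just use the default mode:

```haskell
-- Illustrative sketch of the two candidate rounding-mode APIs.
data RoundingMode
  = RoundNearestTiesToEven
  | RoundTowardPositive
  | RoundTowardNegative
  | RoundTowardZero
  deriving (Eq, Show)

-- Design 1: the mode travels with every operation (pure but verbose;
-- a naive implementation would reset the FPU mode at each call).
fpAdd :: RoundingMode -> Double -> Double -> Double
fpAdd _mode x y = x + y           -- placeholder: mode is ignored here

-- Design 2: operations read an ambient mode, so they live in IO,
-- and a 'with' bracket scopes mode changes.
withRoundingMode :: RoundingMode -> IO a -> IO a
withRoundingMode _mode act = act  -- placeholder: would save/set/restore

fpAddRM :: Double -> Double -> IO Double
fpAddRM x y = pure (x + y)        -- placeholder: would consult ambient mode
```

Usage would look like `fpAdd RoundNearestTiesToEven 2.5 6.4` versus `withRoundingMode RoundTowardPositive (fpAddRM 2.5 6.4)`.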
>>> >>> On Mon, May 4, 2015 at 7:54 PM, Carter Schonwald < >>> carter.schonwald at gmail.com> wrote: >>> >>>> pardon the wall of text everyone, but I really want some FMA tooling :) >>>> >>>> I am going to spend some time later this week and next adding FMA >>>> primops to GHC and playing around with different ways to add it to Num >>>> (which seems pretty straightforward, though I think we'd all agree it >>>> shouldn't be exported by Prelude). And then depending on how Yitzchak's >>>> reproposal of that exactly goes (or some iteration thereof) we can get >>>> something useful/usable into 7.12 >>>> >>>> i have codes (ie *dotproducts*!!!!!) where a faster direct FMA for *exact >>>> numbers*, and a higher precision FMA for *approximate numbers *(*ie >>>> floating point*), and where I cant sanely use FMA if it lives >>>> anywhere but Num unless I rub typeable everywhere and do runtime type >>>> checks for applicable floating point types, which kinda destroys >>>> parametrically in engineering nice things. >>>> >>>> @levent: ghc doesn't do any optimization for floating point arithmetic >>>> (aside from 1-2 very simple things that are possibly questionable), and >>>> until ghc has support for precisly emulating high precision floating point >>>> computation in a portable way, probably wont have any interesting floating >>>> point computation. Mandating that fma a b c === a*b+c for inexact number >>>> datatypes doesn't quite make sense to me. Relatedly, its a GOOD thing ghc >>>> is conservative about optimizing floating point, because it makes doing >>>> correct stability analyses tractable! I look forward to the day that GHC >>>> gets a bit more sophisticated about optimizing floating point computation, >>>> but that day is still a ways off. >>>> >>>> relatedly: FMA for float and double are not generally going to be >>>> faster than the individual primitive operations, merely more accurate when >>>> used carefully. 
>>>> >>>> point being*, i'm +1 on adding some manner of FMA operations to Num* >>>> (only sane place to put it where i can actually use it for a general use >>>> library) and i dont really care if we name it fusedMultiplyAdd, >>>> multiplyAndAdd accursedFusionOfSemiRingOperations, or fma. i'd favor >>>> "fusedMultiplyAdd" if we want a descriptive name that will be familiar to >>>> experts yet easy to google for the curious. >>>> >>>> to repeat: i'm going to do some leg work so that the double and float >>>> prims are portably exposed by ghc-prims (i've spoken with several ghc devs >>>> about that, and they agree to its value, and thats a decision outside of >>>> scope of the libraries purview), and I do hope we can to a consensus about >>>> putting it in Num so that expert library authors can upgrade the guarantees >>>> that they can provide end users without imposing any breaking changes to >>>> end users. >>>> >>>> A number of folks have brought up "but Num is broken" as a counter >>>> argument to adding FMA support to Num. I emphatically agree num is borken >>>> :), BUT! I do also believe that fixing up Num prelude has the burden of >>>> providing a whole cloth design for an alternative design that we can get >>>> broad consensus/adoption with. That will happen by dint of actually >>>> experimentation and usage. >>>> >>>> Point being, adding FMA doesn't further entrench current Num any more >>>> than it already is, it just provides expert library authors with a >>>> transparent way of improving the experience of their users with a free >>>> upgrade in answer accuracy if used carefully. Additionally, when Num's >>>> "semiring ish equational laws" are framed with respect to approximate >>>> forwards/backwards stability, there is a perfectly reasonable law for FMA. >>>> I am happy to spend some time trying to write that up more precisely IFF >>>> that will tilt those in opposition to being in favor. 
>>>> >>>> I dont need FMA to be exposed by *prelude/base*, merely by *GHC.Num* >>>> as a method therein for Num. If that constitutes a different and *more >>>> palatable proposal* than what people have articulated so far (by >>>> discouraging casual use by dint of hiding) then I am happy to kick off a >>>> new thread with that concrete design choice. >>>> >>>> If theres a counter argument thats a bit more substantive than "Num is >>>> for exact arithmetic" or "Num is wrong" that will sway me to the other >>>> side, i'm all ears, but i'm skeptical of that. >>>> >>>> I emphatically support those who are displeased with Num to prototype >>>> some alternative designs in userland, I do think it'd be great to figure >>>> out a new Num prelude we can migrate Haskell / GHC to over the next 2-5 >>>> years, but again any such proposal really needs to be realized whole cloth >>>> before it makes its way to being a libraries list proposal. >>>> >>>> >>>> again, pardon the wall of text, i just really want to have nice things >>>> :) >>>> -Carter >>>> >>>> >>>> On Mon, May 4, 2015 at 2:22 PM, Levent Erkok wrote: >>>> >>>>> I think `mulAdd a b c` should be implemented as `a*b+c` even for >>>>> Double/Float. It should only be an "optmization" (as in modular >>>>> arithmetic), not a semantic changing operation. Thus justifying the >>>>> optimization. >>>>> >>>>> "fma" should be the "more-precise" version available for Float/Double. >>>>> I don't think it makes sense to have "fma" for other types. That's why I'm >>>>> advocating "mulAdd" to be part of "Num" for optimization purposes; and >>>>> "fma" reserved for true IEEE754 types and semantics. >>>>> >>>>> I understand that Edward doesn't like this as this requires a >>>>> different class; but really, that's the price to pay if we claim Haskell >>>>> has proper support for IEEE754 semantics. (Which I think it should.) The >>>>> operation is just different. It also should account for the rounding-modes >>>>> properly. 
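Levent's split (a semantics-preserving `mulAdd` with a default, and `fma` reserved for true IEEE754 types) can be sketched with standalone classes. This is illustrative only: the actual proposal would put `mulAdd` on `Num` in `GHC.Num` rather than introduce a new class, and the `fma` class here has no real instances:

```haskell
-- Sketch of the two-method split, using standalone classes instead of
-- modifying Prelude's Num; all names are illustrative.
class Num a => MulAdd a where
  mulAdd :: a -> a -> a -> a
  mulAdd a b c = a * b + c   -- default: same semantics, free to optimize

instance MulAdd Integer      -- exact types: the default is already right
instance MulAdd Rational
instance MulAdd Double       -- a real instance would call an fma primop

-- The "true IEEE754" operation would live apart, for Float/Double only:
class MulAdd a => IEEEFma a where
  fma :: a -> a -> a -> a    -- single rounding; may differ from a*b+c
```

For `Integer` or modular types the default already gives the single-`mod` style optimization opportunity mentioned earlier in the thread, without changing results.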
>>>>> >>>>> I think we can pull this off just fine; and Haskell can really lead >>>>> the pack here. The situation with floats is even worse in other languages. >>>>> This is our chance to make a proper implementation, and we have the right >>>>> tools to do so. >>>>> >>>>> -Levent. >>>>> >>>>> On Mon, May 4, 2015 at 10:58 AM, Artyom wrote: >>>>> >>>>>> On 05/04/2015 08:49 PM, Levent Erkok wrote: >>>>>> >>>>>> Artyom: That's precisely the point. The true IEEE754 variants where >>>>>> precision does matter should be part of a different class. What Edward and >>>>>> Yitz want is an "optimized" multiply-add where the semantics is the same >>>>>> but one that goes faster. >>>>>> >>>>>> No, it looks to me that Edward wants to have a more precise operation >>>>>> in Num: >>>>>> >>>>>> I'd have to make a second copy of the function to even try to see the >>>>>> precision win. >>>>>> >>>>>> Unless I'm wrong, you can't have the following things simultaneously: >>>>>> >>>>>> 1. the compiler is free to substitute *a+b*c* with *mulAdd a b c* >>>>>> 2. *mulAdd a b c* is implemented as *fma* for Doubles (and is >>>>>> more precise) >>>>>> 3. Num operations for Double (addition and multiplication) always >>>>>> conform to IEEE754 >>>>>> >>>>>> The true IEEE754 variants where precision does matter should be >>>>>> part of a different class. >>>>>> >>>>>> So, does it mean that you're fine with not having point #3 because >>>>>> people who need it would be able to use a separate class for IEEE754 floats? >>>>>> >>>>>> >>>>> >>>>> _______________________________________________ >>>>> Libraries mailing list >>>>> Libraries at haskell.org >>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>>>> >>>>> >>>> >>> > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ezyang at mit.edu Tue May 5 14:50:57 2015 From: ezyang at mit.edu (Edward Z. Yang) Date: Tue, 05 May 2015 07:50:57 -0700 Subject: Template Haskell changes to names and package keys In-Reply-To: <50ae1abd4ebf4c54ad679f4a5b74e7ca@DB4PR30MB030.064d.mgd.msft.net> References: <1430500118-sup-420@sabre> <20150502093814.GA1771@24f89f8c-e6a1-4e75-85ee-bb8a3743bb9f> <50ae1abd4ebf4c54ad679f4a5b74e7ca@DB4PR30MB030.064d.mgd.msft.net> Message-ID: <1430837344-sup-3322@sabre> Excerpts from Simon Peyton Jones's message of 2015-05-05 01:15:50 -0700: > Very good exercise! Looking for how an existing API is used (perhaps in a clumsy way, because of the inadequacies of the existing API) is a good guide to improving it. > > e.g. If tuple names are an issue, let?s provide a TH API for getting their names!! Hello Simon, The right and proper way of getting a tuple name (well, constructor really, but that's the only reason people want names) should be: [| (,) |] But people sometimes don't want to use quotes, e.g. as in https://github.com/ekmett/lens/issues/496 where they want to work with stage1 GHC. So in this case, https://ghc.haskell.org/trac/ghc/ticket/10382 (making quotes work with stage1 GHC) will help a lot. Edward From ekmett at gmail.com Tue May 5 14:51:41 2015 From: ekmett at gmail.com (Edward Kmett) Date: Tue, 5 May 2015 10:51:41 -0400 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: <5548A801.2070205@pkturner.org> References: <1430727263.1930.9.camel@joachim-breitner.de> <5547AF0B.2040201@artyom.me> <5547B33D.4000205@artyom.me> <5548A801.2070205@pkturner.org> Message-ID: On Tue, May 5, 2015 at 7:22 AM, Scott Turner <2haskell at pkturner.org> wrote: > On 2015-05-05 00:54, Levent Erkok wrote: > > I can see at least two designs: > > > > * One where the rounding mode goes with the operation: `fpAdd > > RoundNearestTiesToEven 2.5 6.4`. 
This is the "cleanest" and the > > functional solution, but could get quite verbose; and might be costly > > if the implementation changes the rounding-mode at every issue. > > > > * The other is where the operations simply assume the > > RoundNearestTiesToEven, but we have lifted IO versions that can be > > modified with a "with" like construct: `withRoundingMode > > RoundTowardsPositive $ fpAddRM 2.5 6.4`. Note that `fpAddRM` (*not* > > `fpAdd` as before) will have to return some sort of a monadic value > > (probably in the IO monad) since it'll need to access the rounding > > mode currently active. > > > > Neither choice jumps out at me as the best one; and a hybrid might > > also be possible. I'd love to hear any insight you gain regarding > > rounding-modes during your experiment. > > The monadic alternative is more readily extensible to handle IEEE 754's > sticky flags: inexact, overflow, underflow, divide-by-zero, and invalid. > This gets messier than you'd think. Keep in mind we switch contexts within our own green threads constantly on shared system threads / capabilities so the current rounding mode, sticky flags, etc. would become something you'd have to hold per Thread, and then change proactively as threads migrate between CPUs / capabilities, which we're basically completely unaware of right now. This was what I learned when I tried my own hand at it and failed: http://hackage.haskell.org/package/rounding There found I gave up, and moved setting the rounding mode into custom primitives themselves. But even then you find other problems! The libm versions of almost every combinator doesn't just give slightly wrong answers when you switch rounding modes, it gives _completely_ wrong answers when you switch rounding modes. cos basically starts looking like a random number generator. 
This is rather amusing given that libm is the library that specified how to change the damn rounding mode and fixing this fact it was blocked by Ulrich Drepper when I last looked. Workarounds such as using crlibm exist, but isn't installed on most platforms and it would rather dramatically complicate the installation of ghc to incur the dependency. This is why I've switched to using MPFR for anything with known rounding modes and just paying a pretty big performance tax for correctness. (That and I'm working to release a library that does exact real arithmetic using trees of nested linear fractional transformations -- assuming I can figure out how to keep performance high enough.) -Edward -Edward > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Tue May 5 15:07:43 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 5 May 2015 11:07:43 -0400 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: <1430727263.1930.9.camel@joachim-breitner.de> <5547AF0B.2040201@artyom.me> <5547B33D.4000205@artyom.me> <5548A801.2070205@pkturner.org> Message-ID: Irk. If libm is busted when changing rounding modes, that puts a nasty twist on things. I do agree that even if that hurdle is jumped, setting the local rounding mode will have to be part of every green thread context switch. But if libm is hosed that kinda makes adding that machinery a smudge pointless until there's a good story for that. On Tuesday, May 5, 2015, Edward Kmett wrote: > On Tue, May 5, 2015 at 7:22 AM, Scott Turner <2haskell at pkturner.org > > wrote: > >> On 2015-05-05 00:54, Levent Erkok wrote: >> > I can see at least two designs: >> > >> > * One where the rounding mode goes with the operation: `fpAdd >> > RoundNearestTiesToEven 2.5 6.4`. 
This is the "cleanest" and the >> > functional solution, but could get quite verbose; and might be costly >> > if the implementation changes the rounding-mode at every issue. >> > >> > * The other is where the operations simply assume the >> > RoundNearestTiesToEven, but we have lifted IO versions that can be >> > modified with a "with" like construct: `withRoundingMode >> > RoundTowardsPositive $ fpAddRM 2.5 6.4`. Note that `fpAddRM` (*not* >> > `fpAdd` as before) will have to return some sort of a monadic value >> > (probably in the IO monad) since it'll need to access the rounding >> > mode currently active. >> > >> > Neither choice jumps out at me as the best one; and a hybrid might >> > also be possible. I'd love to hear any insight you gain regarding >> > rounding-modes during your experiment. >> >> The monadic alternative is more readily extensible to handle IEEE 754's >> sticky flags: inexact, overflow, underflow, divide-by-zero, and invalid. >> > > This gets messier than you'd think. Keep in mind we switch contexts within > our own green threads constantly on shared system threads / capabilities so > the current rounding mode, sticky flags, etc. would become something you'd > have to hold per Thread, and then change proactively as threads migrate > between CPUs / capabilities, which we're basically completely unaware of > right now. > > This was what I learned when I tried my own hand at it and failed: > > http://hackage.haskell.org/package/rounding > > There found I gave up, and moved setting the rounding mode into custom > primitives themselves. But even then you find other problems! The libm > versions of almost every combinator doesn't just give slightly wrong > answers when you switch rounding modes, it gives _completely_ wrong answers > when you switch rounding modes. cos basically starts looking like a random > number generator. 
This is rather amusing given that libm is the library > that specified how to change the damn rounding mode and fixing this fact it > was blocked by Ulrich Drepper when I last looked. > > Workarounds such as using crlibm > exist, but isn't installed on most platforms and it would rather > dramatically complicate the installation of ghc to incur the dependency. > > This is why I've switched to using MPFR for anything with known rounding > modes and just paying a pretty big performance tax for correctness. (That > and I'm working to release a library that does exact real arithmetic using > trees of nested linear fractional transformations -- assuming I can figure > out how to keep performance high enough.) > > -Edward > > > > -Edward > > >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at gregweber.info Tue May 5 15:27:37 2015 From: greg at gregweber.info (Greg Weber) Date: Tue, 5 May 2015 08:27:37 -0700 Subject: Proposal: Make Semigroup as a superclass of Monoid In-Reply-To: References: <1427631633145-5767835.post@n5.nabble.com> Message-ID: On Mon, May 4, 2015 at 11:37 AM, Reid Barton wrote: > On Mon, May 4, 2015 at 12:58 PM, David Feuer > wrote: > >> Wouldn't your concerns about NonEmpty be addressed by keeping its type >> abstract? Then something like Liquid Haskell could be used to define it >> better. >> > There are (at least) two possible designs for a "non-empty list type". > > 1. A refinement type (as in LiquidHaskell) of [t] whose values include > only the non-empty lists. Call it NonEmptyLiquid t. You can pass a > NonEmptyLiquid t to a function that expects a [t], and you can pass a [t] > to a function that expects a NonEmptyLiquid [t] if the compiler can prove > that your [t] is nonempty. 
If it can't then you can add a runtime test for > emptiness and in the non-empty case, the compiler will know the list is > non-empty. > > 2. A new type NonEmptySolid t that is totally unrelated to [t] like you > can define in Haskell today. The advantage is that NonEmptySolid is a > full-fledged type constructor that can have instances, be passed to other > type constructors and so on. The disadvantage is that you need to > explicitly convert in your program (and possibly do a runtime conversion > also) in either direction between [t] and NonEmptySolid t. > > I think most people who want a "non-empty list type" want 1, not 2. Option > 2 is bad for API design because it forces the users of your library to care > exactly as much as you do about non-emptiness. If you imagine adding other > sorts of lists like infinite lists, fixed-length or bounded-length lists, > lists of even length etc. then it quickly becomes clear that having such an > array of incompatible list types is not the way to go. We just want lists > to be lists, but we also want the compiler to make sure we don't try to > take the head of an empty list. > I don't think it is bad API design to have the data types explain the properties of the input. Quite the opposite. In the case of head/tail the user *must* care the same about emptiness as the library: that difference makes it common place to ban usage of those functions. I agree with Henning that I don't suffer from awkwardness with using NonEmpty: I just use `toList` on occasion. The issue you have not raised that I am more concerned with is that NonEmpty only works for lists. Michael and I figured out how to extend the concept to any (Mono)Foldable structure and also to be able to demand lengths of > 1. I still found that directly using NonEmpty is useful just as directly using a Haskell list is still useful. 
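For concreteness, the `NonEmptySolid` design (option 2) is essentially the non-empty list one can already define. This is a minimal sketch that mirrors, but is not, the semigroups package's `Data.List.NonEmpty`:

```haskell
-- A standalone non-empty list ("option 2" above); illustrative only.
data NonEmpty a = a :| [a] deriving (Eq, Show)

infixr 5 :|

toList :: NonEmpty a -> [a]
toList (x :| xs) = x : xs

-- The explicit runtime conversion from [a]: test emptiness once,
-- after which taking the head is total.
nonEmpty :: [a] -> Maybe (NonEmpty a)
nonEmpty []       = Nothing
nonEmpty (x : xs) = Just (x :| xs)

neHead :: NonEmpty a -> a
neHead (x :| _) = x
```

The `Maybe` in `nonEmpty` is exactly the conversion cost Reid describes: callers must care about emptiness at the boundary, in exchange for `neHead` being total.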
https://github.com/snoyberg/mono-traversable#minlen http://hackage.haskell.org/package/mono-traversable-0.9.1/docs/Data-MinLen.html > Those who use a NonEmpty type prefer option 2 over option 0 "type > NonEmptyGas t = [t] -- and just hope"; but that doesn't mean they prefer > option 2 over option 1. Those who really want option 2 can also define it > as a newtype wrapper on option 1, as you noted. > > So, to answer your question, no, it wouldn't really make a difference if > the NonEmpty type was abstract. That would just smooth the transition to a > design that I think people don't really want. > > Finally, let me reiterate that there seem to be no advantages to moving a > NonEmpty type into base rather than into its own small package. We don't > need base to swallow up every small popular package. > I agree that this point should be debated more. That should happen separately from the refinement vs. NonEmpty debate. If it is a separate package then there is no debate about refinements to have anyways. I think it would be good to require a justification for putting anything into base. > > Regards, > Reid Barton > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From erkokl at gmail.com Tue May 5 15:56:18 2015 From: erkokl at gmail.com (Levent Erkok) Date: Tue, 5 May 2015 08:56:18 -0700 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: <1430727263.1930.9.camel@joachim-breitner.de> <5547AF0B.2040201@artyom.me> <5547B33D.4000205@artyom.me> <5548A801.2070205@pkturner.org> Message-ID: Hmm, minefield ahead.. But surely there must be a "correct" compromise? (Even with a huge performance penalty.) 
I'll just add that rwbarton had this comment earlier: "Be aware (if you aren't already) that GHC does not do any management of floating-point control registers, so functions called through FFI should take care to clean up their floating-point state, otherwise the rounding mode can change unpredictably at the level of Haskell code." So, there're some FFI related issues even if we just leave the work to C. I'll also note that the current implementation of arithmetic on Double/Floats already has rounding mode issues: If someone does an FFI call to change the rounding mode via C (fgetround/fsetround functions) inside some IO block, then the arithmetic in that block cannot be "lifted" out even though it appears pure to GHC. Perhaps that should be filed as a bug too. -Levent. On Tue, May 5, 2015 at 8:07 AM, Carter Schonwald wrote: > Irk. If libm is busted when changing rounding modes, that puts a nasty > twist on things. > > I do agree that even if that hurdle is jumped, setting the local rounding > mode will have to be part of every green thread context switch. But if > libm is hosed that kinda makes adding that machinery a smudge pointless > until there's a good story for that. > > > On Tuesday, May 5, 2015, Edward Kmett wrote: > >> On Tue, May 5, 2015 at 7:22 AM, Scott Turner <2haskell at pkturner.org> >> wrote: >> >>> On 2015-05-05 00:54, Levent Erkok wrote: >>> > I can see at least two designs: >>> > >>> > * One where the rounding mode goes with the operation: `fpAdd >>> > RoundNearestTiesToEven 2.5 6.4`. This is the "cleanest" and the >>> > functional solution, but could get quite verbose; and might be costly >>> > if the implementation changes the rounding-mode at every issue. >>> > >>> > * The other is where the operations simply assume the >>> > RoundNearestTiesToEven, but we have lifted IO versions that can be >>> > modified with a "with" like construct: `withRoundingMode >>> > RoundTowardsPositive $ fpAddRM 2.5 6.4`. 
Note that `fpAddRM` (*not* >>> > `fpAdd` as before) will have to return some sort of a monadic value >>> > (probably in the IO monad) since it'll need to access the rounding >>> > mode currently active. >>> > >>> > Neither choice jumps out at me as the best one; and a hybrid might >>> > also be possible. I'd love to hear any insight you gain regarding >>> > rounding-modes during your experiment. >>> >>> The monadic alternative is more readily extensible to handle IEEE 754's >>> sticky flags: inexact, overflow, underflow, divide-by-zero, and invalid. >>> >> >> This gets messier than you'd think. Keep in mind we switch contexts >> within our own green threads constantly on shared system threads / >> capabilities so the current rounding mode, sticky flags, etc. would become >> something you'd have to hold per Thread, and then change proactively as >> threads migrate between CPUs / capabilities, which we're basically >> completely unaware of right now. >> >> This was what I learned when I tried my own hand at it and failed: >> >> http://hackage.haskell.org/package/rounding >> >> There found I gave up, and moved setting the rounding mode into custom >> primitives themselves. But even then you find other problems! The libm >> versions of almost every combinator doesn't just give slightly wrong >> answers when you switch rounding modes, it gives _completely_ wrong answers >> when you switch rounding modes. cos basically starts looking like a random >> number generator. This is rather amusing given that libm is the library >> that specified how to change the damn rounding mode and fixing this fact it >> was blocked by Ulrich Drepper when I last looked. >> >> Workarounds such as using crlibm >> exist, but isn't installed on >> most platforms and it would rather dramatically complicate the installation >> of ghc to incur the dependency. 
>> >> This is why I've switched to using MPFR for anything with known rounding >> modes and just paying a pretty big performance tax for correctness. (That >> and I'm working to release a library that does exact real arithmetic using >> trees of nested linear fractional transformations -- assuming I can figure >> out how to keep performance high enough.) >> >> -Edward >> >> >>> _______________________________________________ >>> Libraries mailing list >>> Libraries at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>> >> >> > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > From ezyang at mit.edu Tue May 5 16:18:16 2015 From: ezyang at mit.edu (Edward Z. Yang) Date: Tue, 05 May 2015 09:18:16 -0700 Subject: Proposal: liftData for Template Haskell In-Reply-To: References: <1429269330-sup-7487@sabre> Message-ID: <1430842122-sup-9486@sabre> Hello Michael, The Data instance for Map is a very interesting one indeed. Honestly, I really hate this instance, and I really hate generating a VarE, in part because there is no guarantee that fromList is defined in the same module as the type definition. But it gives a valuable clue about how dataToQa should work: maybe it's wrong for us to try to directly make names based on the Constr information that is given to us: instead, the generated TH code should be a call to /gunfold/ (with TH lifting the Constr). Of course, this would be even more inefficient, but maybe the optimizer can figure it out. Unfortunately, I don't actually know how to encode an arbitrary constructor invocation indirected through Data. Edward Excerpts from Michael Sloan's message of 2015-04-18 12:50:39 -0700: > +1 to liftData > > Removing 'Lift' altogether definitely isn't the way to go, though.
As > Richard points out, we want to be able to lift more than just ADTs. Also > ADTs which hide their implementation can have either opaque Data instances, > no Data instance, or Data instances which involve using functions for > constructors. > > An example of the latter is Data.Map.Map's Data instance, which uses > 'fromList' as a constructor. This results in $(dataToExpQ (\_ -> Nothing) > (fromList [(1,2)])) causing the compiletime error "Illegal data constructor > name: ?fromList?". > > I think 'dataToExpQ' and related functions should be modified to handle > this case. It should be rather easy - if the constructor Name is > lowercase, generate a 'VarE' instead of a 'ConE'. I suppose this is a > separate proposal, but it came up when thinking about this proposal. > > -Michael > > On Fri, Apr 17, 2015 at 4:21 AM, Edward Z. Yang wrote: > > > I propose adding the following function to Language.Haskell.TH: > > > > -- | 'liftData' is a variant of 'lift' in the 'Lift' type class which > > -- works for any type with a 'Data' instance. > > liftData :: Data a => a -> Q Exp > > liftData = dataToExpQ (const Nothing) > > > > I don't really know which submodule this should come from; > > since it uses 'dataToExpQ', you might put it in Language.Haskell.TH.Quote > > but arguably 'dataToExpQ' doesn't belong in this module either, > > and it only lives there because it is a useful function for defining > > quasiquoters and it was described in the quasiquoting paper. > > > > I might propose getting rid of the 'Lift' class entirely, but you > > might prefer that class since it doesn't go through SYB (and have > > the attendant slowdown). > > > > This mode of use of 'dataToExpQ' deserves more attention. 
> > > > Discussion period: 1 month > > > > Cheers, > > Edward > > _______________________________________________ > > Libraries mailing list > > Libraries at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > From vogt.adam at gmail.com Wed May 6 04:03:49 2015 From: vogt.adam at gmail.com (adam vogt) Date: Wed, 6 May 2015 00:03:49 -0400 Subject: Proposal: liftData for Template Haskell In-Reply-To: <1430842122-sup-9486@sabre> References: <1429269330-sup-7487@sabre> <1430842122-sup-9486@sabre> Message-ID: http://lpaste.net/1333635401797074944 is one way to use gunfold. A few years ago, gunfold was error for Data.Map. If there were other abstract data types that had gunfold = error "gunfold", generating a VarE/ConE would be better because the failure would be at compile time instead of at run time. Adam On Tue, May 5, 2015 at 12:18 PM, Edward Z. Yang wrote: > Hello Michael, > > The Data instance for Map is a very interesting one indeed. Honestly, I > really hate this instance, and I really hate generating a VarE, in part > because there is no guarantee that fromList is defined in the same > module as the type definition. > > But it gives a valuable clue about how dataToQa: maybe it's wrong for us > to try to directly make names based on the Constr information that is > given to us: instead, the generate TH code should be a call to /gunfold/ > (with TH lifting the Constr). Of course, this would be even more > inefficient, but maybe the optimizer can figure it out. > Unfortunately, I don't actually know how to encode an arbitrary > constructor invocation indirected through Data. > > Edward > > Excerpts from Michael Sloan's message of 2015-04-18 12:50:39 -0700: > > +1 to liftData > > > > Removing 'Lift' altogether definitely isn't the way to go, though. As > > Richard points out, we want to be able to lift more than just ADTs. 
Also > > ADTs which hide their implementation can have either opaque Data > instances, > > no Data instance, or Data instances which involve using functions for > > constructors. > > > > An example of the latter is Data.Map.Map's Data instance, which uses > > 'fromList' as a constructor. This results in $(dataToExpQ (\_ -> > Nothing) > > (fromList [(1,2)])) causing the compile-time error "Illegal data > constructor > > name: ‘fromList’". > > > > I think 'dataToExpQ' and related functions should be modified to handle > > this case. It should be rather easy - if the constructor Name is > > lowercase, generate a 'VarE' instead of a 'ConE'. I suppose this is a > > separate proposal, but it came up when thinking about this proposal. > > > > -Michael > > > > On Fri, Apr 17, 2015 at 4:21 AM, Edward Z. Yang wrote: > > > > > I propose adding the following function to Language.Haskell.TH: > > > > > > -- | 'liftData' is a variant of 'lift' in the 'Lift' type class > which > > > -- works for any type with a 'Data' instance. > > > liftData :: Data a => a -> Q Exp > > > liftData = dataToExpQ (const Nothing) > > > > > > I don't really know which submodule this should come from; > > > since it uses 'dataToExpQ', you might put it in > Language.Haskell.TH.Quote > > > but arguably 'dataToExpQ' doesn't belong in this module either, > > > and it only lives there because it is a useful function for defining > > > quasiquoters and it was described in the quasiquoting paper. > > > > > > I might propose getting rid of the 'Lift' class entirely, but you > > > might prefer that class since it doesn't go through SYB (and have > > > the attendant slowdown). > > > > > > This mode of use of 'dataToExpQ' deserves more attention.
> > > > > > Discussion period: 1 month > > > > > > Cheers, > > > Edward > > > _______________________________________________ > > > Libraries mailing list > > > Libraries at haskell.org > > > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvriedel at gmail.com Wed May 6 11:08:03 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Wed, 06 May 2015 13:08:03 +0200 Subject: RFC: "Native -XCPP" Proposal Message-ID: <87zj5i55gs.fsf@gmail.com> Hello *, As you may be aware, GHC's `{-# LANGUAGE CPP #-}` language extension currently relies on the system's C-compiler bundled `cpp` program to provide a "traditional mode" c-preprocessor. This is problematic, since parsing Haskell code with a preprocessor mode designed for use with C's tokenizer has already caused quite a few problems[1] in the past. I'd like to see GHC 7.12 adopt an implementation of `-XCPP` that does not rely on the shaky system-`cpp` foundation. To this end I've created a wiki page https://ghc.haskell.org/trac/ghc/wiki/Proposal/NativeCpp to describe the actual problems in more detail, and a couple of possible ways forward. Ideally, we'd simply integrate `cpphs` into GHC (i.e. "plan 2"). However, due to `cpphs`'s non-BSD3 license this should be discussed and debated, since it affects the overall license of the GHC code-base, which may or may not be a problem to GHC's user-base (and that's what I hope this discussion will help to find out). So please go ahead and read the Wiki page... and then speak your mind!
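[For readers who haven't used `-XCPP`: the single most common use in libraries is conditional compilation keyed on tool and package versions, along the lines of this illustrative fragment (the `MIN_VERSION_*` macros are supplied by cabal, `__GLASGOW_HASKELL__` by GHC; the module itself is hypothetical):

```haskell
{-# LANGUAGE CPP #-}
module Compat (compat) where

#if MIN_VERSION_base(4,8,0)
-- base >= 4.8: Applicative is a superclass of Monad, nothing to do.
#else
import Control.Applicative (Applicative (..))
#endif

compat :: String
#if __GLASGOW_HASKELL__ >= 710
compat = "built with GHC >= 7.10"
#else
compat = "built with an older GHC"
#endif
```

This style of usage only needs `#if`/`#else`/`#endif` and simple macro arithmetic, which is why most of the plans on the wiki page cover it easily.]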
Thanks, HVR [1]: ...does anybody remember the issues Haskell packages (& GHC) encountered when Apple switched to the Clang tool-chain, thereby causing code using `-XCPP` to suddenly break due to subtly different `cpp`-semantics? From austin at well-typed.com Wed May 6 12:25:54 2015 From: austin at well-typed.com (Austin Seipp) Date: Wed, 6 May 2015 07:25:54 -0500 Subject: RFC: "Native -XCPP" Proposal In-Reply-To: <87zj5i55gs.fsf@gmail.com> References: <87zj5i55gs.fsf@gmail.com> Message-ID: On Wed, May 6, 2015 at 6:08 AM, Herbert Valerio Riedel wrote: > Hello *, > > As you may be aware, GHC's `{-# LANGUAGE CPP #-}` language extension > currently relies on the system's C-compiler bundled `cpp` program to > provide a "traditional mode" c-preprocessor. > > This has caused several problems in the past, since parsing Haskell code > with a preprocessor mode designed for use with C's tokenizer has caused > already quite some problems[1] in the past. I'd like to see GHC 7.12 > adopt an implemntation of `-XCPP` that does not rely on the shaky > system-`cpp` foundation. To this end I've created a wiki page > > https://ghc.haskell.org/trac/ghc/wiki/Proposal/NativeCpp > > to describe the actual problems in more detail, and a couple of possible > ways forward. Ideally, we'd simply integrate `cpphs` into GHC > (i.e. "plan 2"). However, due to `cpp`s non-BSD3 license this should be > discussed and debated since affects the overall-license of the GHC > code-base, which may or may not be a problem to GHC's user-base (and > that's what I hope this discussion will help to find out). > > So please go ahead and read the Wiki page... and then speak your mind! Thanks for writing this up, btw! It's nice to put the mumblings we've had for a while down 'on paper'. 
> > Thanks, > HVR > > > [1]: ...does anybody remember the issues Haskell packages (& GHC) > encountered when Apple switched to the Clang tool-chain, thereby > causing code using `-XCPP` to suddenly break due to subtly > different `cpp`-semantics? There are two (major) differences I can list, although I can only provide some specific examples OTTOMH: 1) Clang is more strict wrt language specifications. For example, GCC is lenient and allows a space between a macro identifier and the parenthesis denoting a parameter list; so saying 'FOO (x, y)' is valid with GCC (where FOO is a macro), but not with Clang. Sometimes this trips up existing code, but I've mostly seen it in GHC itself. 2) The lexing rules for C and Haskell simply are not the same in general. For example, what should "FOO(a' + b')" parse to? Well, in Haskell, 'prime' is a valid component of an identifier, and in this case the parse should be "a prime + b prime", but in C the ' character is identified as the start of a single-character literal, and a strict preprocessor like Clang's will reject that. In practice, I think people have mostly just avoided arcane lexer behaviors that don't work, and the only reason this was never a problem was because GCC or some variant was always the 'standard' C compiler GHC could rely on.
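[Austin's second point can be reproduced with a tiny hypothetical file; the primed names below are perfectly ordinary Haskell, but to a strict C preprocessor each apostrophe opens a character literal:

```haskell
{-# LANGUAGE CPP #-}
#define SQUARE(x) ((x) * (x))

-- GCC's traditional mode passes the a'/b' tokens through untouched,
-- so this expands and compiles; Clang's preprocessor instead rejects
-- the "unterminated" character constant it sees after a.
norm2 :: Double -> Double -> Double
norm2 a' b' = SQUARE(a') + SQUARE(b')
```

No arcane lexer trickery is involved here, which is what makes this class of breakage so surprising to users.]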
> _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From takenobu.hs at gmail.com Wed May 6 12:38:52 2015 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Wed, 6 May 2015 21:38:52 +0900 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: <1430727263.1930.9.camel@joachim-breitner.de> <5547AF0B.2040201@artyom.me> <5547B33D.4000205@artyom.me> Message-ID: Hi Carter, Uh, excuse me, you are a BLAS master [1] ;-) And thank you for teaching me about #numerical-haskell. I'll learn it. I like effective performance and abstraction. [1] http://hackage.haskell.org/package/linear-algebra-cblas Thank you, Takenobu 2015-05-05 22:52 GMT+09:00 Carter Schonwald : > Hey Takenobu, > Yes, both are super useful! I've certainly used the Intel architecture > manual a few times and I wrote/maintain (in my biased opinion) one of the > nicer blas ffi bindings on hackage. > > It's worth mentioning that for haskellers who are interested in either > mathematical computation or performance engineering, on freenode the > #numerical-haskell channel is pretty good. Though again I'm a bit biased > about the nice community there > > > On Tuesday, May 5, 2015, Takenobu Tani wrote: > >> Hi, >> >> Is this useful? >> >> BLAS (Basic Linear Algebra Subprograms) >> http://www.netlib.org/blas/ >> http://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms >> >> Regards, >> Takenobu >> >> >> >> 2015-05-05 22:06 GMT+09:00 Takenobu Tani : >> >>> Hi, >>> >>> Related information.
>>> >>> Intel FMA's information (hardware-dependent) is here: >>> >>> Chapter 11 >>> >>> Intel 64 and IA-32 Architectures Optimization Reference Manual >>> >>> http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf >>> >>> >>> Of course, it is information that depends on the particular processor. >>> And the abstraction level is too low. >>> >>> PS >>> I like Haskell's abstract naming convention more than "fma" :-) >>> >>> Regards, >>> Takenobu >>> >>> >>> >>> 2015-05-05 11:54 GMT+09:00 Carter Schonwald >>> : >>> >>>> pardon the wall of text everyone, but I really want some FMA tooling :) >>>> >>>> I am going to spend some time later this week and next adding FMA >>>> primops to GHC and playing around with different ways to add it to Num >>>> (which seems pretty straightforward, though I think we'd all agree it >>>> shouldn't be exported by Prelude). And then depending on how Yitzchak's >>>> reproposal of that exactly goes (or some iteration thereof) we can get >>>> something useful/usable into 7.12 >>>> >>>> i have codes (ie *dotproducts*!!!!!) where i want a faster direct FMA for *exact >>>> numbers*, and a higher precision FMA for *approximate numbers *(*ie >>>> floating point*), and where I can't sanely use FMA if it lives >>>> anywhere but Num unless I rub typeable everywhere and do runtime type >>>> checks for applicable floating point types, which kinda destroys >>>> parametricity in engineering nice things. >>>> >>>> @levent: ghc doesn't do any optimization for floating point arithmetic >>>> (aside from 1-2 very simple things that are possibly questionable), and >>>> until ghc has support for precisely emulating high precision floating point >>>> computation in a portable way, probably won't have any interesting floating >>>> point computation. Mandating that fma a b c === a*b+c for inexact number >>>> datatypes doesn't quite make sense to me.
Relatedly, it's a GOOD thing ghc >>>> is conservative about optimizing floating point, because it makes doing >>>> correct stability analyses tractable! I look forward to the day that GHC >>>> gets a bit more sophisticated about optimizing floating point computation, >>>> but that day is still a ways off. >>>> >>>> relatedly: FMA for float and double are not generally going to be >>>> faster than the individual primitive operations, merely more accurate when >>>> used carefully. >>>> >>>> point being*, i'm +1 on adding some manner of FMA operations to Num* >>>> (only sane place to put it where i can actually use it for a general use >>>> library) and i don't really care if we name it fusedMultiplyAdd, >>>> multiplyAndAdd, accursedFusionOfSemiRingOperations, or fma. i'd favor >>>> "fusedMultiplyAdd" if we want a descriptive name that will be familiar to >>>> experts yet easy to google for the curious. >>>> >>>> to repeat: i'm going to do some leg work so that the double and float >>>> prims are portably exposed by ghc-prims (i've spoken with several ghc devs >>>> about that, and they agree to its value, and that's a decision outside of >>>> the scope of the libraries purview), and I do hope we can come to a consensus about >>>> putting it in Num so that expert library authors can upgrade the guarantees >>>> that they can provide end users without imposing any breaking changes to >>>> end users. >>>> >>>> A number of folks have brought up "but Num is broken" as a counter >>>> argument to adding FMA support to Num. I emphatically agree num is borken >>>> :), BUT! I do also believe that fixing up the Num prelude has the burden of >>>> providing a whole-cloth design for an alternative that we can get >>>> broad consensus/adoption with. That will happen by dint of actual >>>> experimentation and usage.
>>>> >>>> Point being, adding FMA doesn't further entrench current Num any more >>>> than it already is, it just provides expert library authors with a >>>> transparent way of improving the experience of their users with a free >>>> upgrade in answer accuracy if used carefully. Additionally, when Num's >>>> "semiring ish equational laws" are framed with respect to approximate >>>> forwards/backwards stability, there is a perfectly reasonable law for FMA. >>>> I am happy to spend some time trying to write that up more precisely IFF >>>> that will tilt those in opposition to being in favor. >>>> >>>> I don't need FMA to be exposed by *prelude/base*, merely by *GHC.Num* >>>> as a method therein for Num. If that constitutes a different and *more >>>> palatable proposal* than what people have articulated so far (by >>>> discouraging casual use by dint of hiding) then I am happy to kick off a >>>> new thread with that concrete design choice. >>>> >>>> If there's a counter argument that's a bit more substantive than "Num is >>>> for exact arithmetic" or "Num is wrong" that will sway me to the other >>>> side, i'm all ears, but i'm skeptical of that. >>>> >>>> I emphatically support those who are displeased with Num prototyping >>>> some alternative designs in userland, I do think it'd be great to figure >>>> out a new Num prelude we can migrate Haskell / GHC to over the next 2-5 >>>> years, but again any such proposal really needs to be realized whole cloth >>>> before it makes its way to being a libraries list proposal. >>>> >>>> >>>> again, pardon the wall of text, i just really want to have nice things >>>> :) >>>> -Carter >>>> >>>> >>>> On Mon, May 4, 2015 at 2:22 PM, Levent Erkok wrote: >>>> >>>>> I think `mulAdd a b c` should be implemented as `a*b+c` even for >>>>> Double/Float. It should only be an "optimization" (as in modular >>>>> arithmetic), not a semantics-changing operation. Thus justifying the >>>>> optimization.
>>>>> >>>>> "fma" should be the "more-precise" version available for Float/Double. >>>>> I don't think it makes sense to have "fma" for other types. That's why I'm >>>>> advocating "mulAdd" to be part of "Num" for optimization purposes; and >>>>> "fma" reserved for true IEEE754 types and semantics. >>>>> >>>>> I understand that Edward doesn't like this as this requires a >>>>> different class; but really, that's the price to pay if we claim Haskell >>>>> has proper support for IEEE754 semantics. (Which I think it should.) The >>>>> operation is just different. It also should account for the rounding-modes >>>>> properly. >>>>> >>>>> I think we can pull this off just fine; and Haskell can really lead >>>>> the pack here. The situation with floats is even worse in other languages. >>>>> This is our chance to make a proper implementation, and we have the right >>>>> tools to do so. >>>>> >>>>> -Levent. >>>>> >>>>> On Mon, May 4, 2015 at 10:58 AM, Artyom wrote: >>>>> >>>>>> On 05/04/2015 08:49 PM, Levent Erkok wrote: >>>>>> >>>>>> Artyom: That's precisely the point. The true IEEE754 variants where >>>>>> precision does matter should be part of a different class. What Edward and >>>>>> Yitz want is an "optimized" multiply-add where the semantics is the same >>>>>> but one that goes faster. >>>>>> >>>>>> No, it looks to me that Edward wants to have a more precise operation >>>>>> in Num: >>>>>> >>>>>> I'd have to make a second copy of the function to even try to see the >>>>>> precision win. >>>>>> >>>>>> Unless I'm wrong, you can't have the following things simultaneously: >>>>>> >>>>>> 1. the compiler is free to substitute *a+b*c* with *mulAdd a b c* >>>>>> 2. *mulAdd a b c* is implemented as *fma* for Doubles (and is >>>>>> more precise) >>>>>> 3. Num operations for Double (addition and multiplication) always >>>>>> conform to IEEE754 >>>>>> >>>>>> The true IEEE754 variants where precision does matter should be >>>>>> part of a different class. 
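[The tension between points 1-3 above is easy to observe without any fma primitive, by computing the single-rounding result through Rational. An illustrative sketch, with values picked to expose the double rounding:

```haskell
-- a*b + c on Doubles rounds twice; a fused multiply-add rounds once.
a, b, c :: Double
a = 0.1
b = 0.1
c = -0.01

-- Two roundings: round the product, then round the sum.
twoRoundings :: Double
twoRoundings = a * b + c

-- One rounding: compute a*b + c exactly in Rational and round once at
-- the end; under the default rounding mode this is the value an
-- IEEE754 fma would produce.
oneRounding :: Double
oneRounding = fromRational (toRational a * toRational b + toRational c)
```

With these inputs the two expressions denote different Doubles, so a compiler that silently rewrote `a*b + c` into a fused operation would observably change program behaviour; that is exactly the conflict between points 1, 2 and 3.]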
>>>>>> >>>>>> So, does it mean that you're fine with not having point #3 because >>>>>> people who need it would be able to use a separate class for IEEE754 floats? >>>>>> >>>>>> >>>>> >>>>> _______________________________________________ >>>>> Libraries mailing list >>>>> Libraries at haskell.org >>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>>>> >>>>> >>>> >>>> _______________________________________________ >>>> Libraries mailing list >>>> Libraries at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>>> >>>> >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From the.dead.shall.rise at gmail.com Wed May 6 13:03:38 2015 From: the.dead.shall.rise at gmail.com (Mikhail Glushenkov) Date: Wed, 6 May 2015 15:03:38 +0200 Subject: RFC: "Native -XCPP" Proposal In-Reply-To: References: <87zj5i55gs.fsf@gmail.com> Message-ID: On 6 May 2015 at 14:25, Austin Seipp wrote: > 2) The lexing rules for C and Haskell simply are not the same in > general. One area where this is irritating is that it makes it impossible to use Haskell multiline strings together with CPP. From asr at eafit.edu.co Wed May 6 13:43:03 2015 From: asr at eafit.edu.co (=?UTF-8?B?QW5kcsOpcyBTaWNhcmQtUmFtw61yZXo=?=) Date: Wed, 6 May 2015 08:43:03 -0500 Subject: RFC: "Native -XCPP" Proposal In-Reply-To: <201505061338.16090.jan.stolarek@p.lodz.pl> References: <87zj5i55gs.fsf@gmail.com> <201505061338.16090.jan.stolarek@p.lodz.pl> Message-ID: I want to emphasize that cpphs is actively maintained, as is pointed out in https://ghc.haskell.org/trac/ghc/wiki/Proposal/NativeCpp. The Agda team has found some cpphs bugs which have been *quickly* fixed by cpphs's author, Malcolm Wallace. Unfortunately I have not been able to track down the problem mentioned by Janek Stolarek.
On 6 May 2015 at 06:38, Jan Stolarek wrote: > One thing to keep in mind is that while cpphs will solve some problems of system-cpp, it will also > bring problems of its own. I, for one, have run into problems with it when installing Agda. There > is a very long thread here: > > https://lists.chalmers.se/pipermail/agda/2014/006975.html > > and twice as much again in my private inbox. We've reached no conclusion about the cause and the only > solution was to use system-cpp. > > Regarding licensing issues: perhaps we should simply ask Malcolm Wallace if he would consider > changing the license for the sake of GHC? Or perhaps he could grant a custom-tailored license to > the GHC project? After all, the project page [1] says: "If that's a problem for you, contact me > to make other arrangements." > > Janek > > [1] http://projects.haskell.org/cpphs/ > > Dnia środa, 6 maja 2015, Herbert Valerio Riedel napisał: >> Hello *, >> >> As you may be aware, GHC's `{-# LANGUAGE CPP #-}` language extension >> currently relies on the system's C-compiler bundled `cpp` program to >> provide a "traditional mode" c-preprocessor. >> >> This is problematic, since parsing Haskell code >> with a preprocessor mode designed for use with C's tokenizer has >> already caused quite a few problems[1] in the past. I'd like to see GHC 7.12 >> adopt an implementation of `-XCPP` that does not rely on the shaky >> system-`cpp` foundation. To this end I've created a wiki page >> >> https://ghc.haskell.org/trac/ghc/wiki/Proposal/NativeCpp >> >> to describe the actual problems in more detail, and a couple of possible >> ways forward. Ideally, we'd simply integrate `cpphs` into GHC >> (i.e. "plan 2"). However, due to `cpphs`'s non-BSD3 license this should be >> discussed and debated, since it affects the overall license of the GHC >> code-base, which may or may not be a problem to GHC's user-base (and >> that's what I hope this discussion will help to find out).
>> >> So please go ahead and read the Wiki page... and then speak your mind! >> >> >> Thanks, >> HVR >> >> >> [1]: ...does anybody remember the issues Haskell packages (& GHC) >> encountered when Apple switched to the Clang tool-chain, thereby >> causing code using `-XCPP` to suddenly break due to subtly >> different `cpp`-semantics? >> >> _______________________________________________ >> Glasgow-haskell-users mailing list >> Glasgow-haskell-users at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users > > > > --- > Politechnika Łódzka > Lodz University of Technology > > Treść tej wiadomości zawiera informacje przeznaczone tylko dla adresata. > Jeżeli nie jesteście Państwo jej adresatem, bądź otrzymaliście ją przez pomyłkę > prosimy o powiadomienie o tym nadawcy oraz trwałe jej usunięcie. > > This email contains information intended solely for the use of the individual to whom it is addressed. > If you are not the intended recipient or if you have received this message in error, > please notify the sender and delete it from your system. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- Andrés From svenpanne at gmail.com Wed May 6 14:32:11 2015 From: svenpanne at gmail.com (Sven Panne) Date: Wed, 6 May 2015 16:32:11 +0200 Subject: [Haskell-cafe] RFC: "Native -XCPP" Proposal In-Reply-To: References: <87zj5i55gs.fsf@gmail.com> Message-ID: 2015-05-06 16:21 GMT+02:00 Bardur Arantsson : > +1, I'll wager that the vast majority of usages are just for version > range checks. The OpenGL-related packages used macros to generate some binding magic (a "foreign import" plus some helper functions for each API entry), not just range checks. I had serious trouble when Apple switched to clang, so as a quick fix, the macro-expanded (via GCC's CPP) sources had been checked in.
:-P Nowadays the binding is generated from the OpenGL XML registry file, so this is not an issue anymore. > If there are packages that require more, they could just keep using the > system-cpp or, I guess, cpphs if it gets baked into GHC. Like you, I'd > want to see real evidence that that's actually worth the > effort/complication. Simply relying on the system CPP doesn't work due to the various differences between GCC's CPP and the one from clang, see e.g. https://github.com/haskell-opengl/OpenGLRaw/issues/18#issuecomment-31428380. Ignoring the problem doesn't make it go away... ;-) Note that we still need CPP to handle the various calling conventions on the different platforms when the FFI is used, so it's not only range checks, see e.g. https://github.com/haskell-opengl/OpenGLRaw/blob/master/OpenGLRaw.cabal#L588. From allbery.b at gmail.com Wed May 6 15:28:45 2015 From: allbery.b at gmail.com (Brandon Allbery) Date: Wed, 6 May 2015 11:28:45 -0400 Subject: [Haskell-cafe] RFC: "Native -XCPP" Proposal In-Reply-To: References: <87zj5i55gs.fsf@gmail.com> Message-ID: On Wed, May 6, 2015 at 10:59 AM, Bardur Arantsson wrote: > (I'm not going to be doing any of the work, so this is just armchairing, > but it seems like an 80/20 solution would be warranted.) > Only if you're convinced it will remain 80/20 for the foreseeable future. I do not want to bet on Linux always being gcc (and dislike the One True Platform-ism that line of thought encourages). -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed...
URL: From allbery.b at gmail.com Wed May 6 15:33:35 2015 From: allbery.b at gmail.com (Brandon Allbery) Date: Wed, 6 May 2015 11:33:35 -0400 Subject: [Haskell-cafe] RFC: "Native -XCPP" Proposal In-Reply-To: <87ioc5vi81.fsf@feelingofgreen.ru> References: <87zj5i55gs.fsf@gmail.com> <87ioc5vi81.fsf@feelingofgreen.ru> Message-ID: On Wed, May 6, 2015 at 11:27 AM, Kosyrev Serge <_deepfire at feelingofgreen.ru> wrote: > Why *shouldn't* TH fill that role? What can be done about it? For one, it's difficult to make it available in cross compilers (granted, work is being done on this) and not available on some platforms (ARM has been a problem, dunno if it currently is). For another, I don't think you can currently control things like imports or LANGUAGE pragmas --- and as TH is currently constructed it's not clear that you could do so, or that you could do so in a way that is sane for users. This is not to say that I like cpp --- I'd like it a lot more if it weren't actually using a C preprocessor that is not actually under our control or guaranteed to be compatible with Haskell --- but it does provide a "meta" in a different dimension than TH does. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hvriedel at gmail.com Wed May 6 16:09:00 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Wed, 06 May 2015 18:09:00 +0200 Subject: RFC: "Native -XCPP" Proposal In-Reply-To: <20150506153605.GD1459@xobs-novena> (Stephen Paul Weber's message of "Wed, 6 May 2015 10:36:05 -0500") References: <87zj5i55gs.fsf@gmail.com> <20150506153605.GD1459@xobs-novena> Message-ID: <87h9rp663n.fsf@gmail.com> On 2015-05-06 at 17:36:05 +0200, Stephen Paul Weber wrote: >>As you may be aware, GHC's `{-# LANGUAGE CPP #-}` language extension >>currently relies on the system's C-compiler bundled `cpp` program to >>provide a "traditional mode" c-preprocessor. > > Yes. This is one of my favourite things in GHC-land -- that an > existing, good-enough, standardised, and widely-deployed solution was > chosen over a NiH reinvention of preprocessing. This allows other > Haskell compilers to support CPP on basically any system (since cpp is > so standard) without much effort, or even if the compiler does not > support {-# LANGUAGE CPP #-} the user can easily run `cpp` over the > source files themselves before feeding the source into the compiler. > > Because it is a real `cpp` being used, the developer must take care > to follow the CPP syntax in the file that will then be transformed > into Haskell by `cpp`, in the same way that C, C++, and other > developers have to take extra care (especially around use of # and > end-of-line \) when using `cpp`, but this is the normal state of > affairs for a secondary preprocessor step. As a benefit, the source > code will be processable by standard `cpp` implementations available > for virtually every platform. > > In short, the current solution provides a very robust and portable way > to do pre-compile preprocessing, and I like it very much.
The problem I have with that line of argument is that we're using the so-called 'traditional mode' of `cpp`[1], for which afaik there is no written-down common specification that different implementations commit to adhere to (well, that's because 'traditional mode' refers to some vague implementation-specific "pre-standard" cpp semantics). And how is the developer supposed to take care to follow the (traditional-mode) CPP syntax, if he can't test it easily with all potentially used (traditional-mode) `cpp`s out there? This has already led to problems with Clang's cpp vs GCC's cpp. Moreover, I was under the impression that it's only a matter of time until `traditional mode` support is dropped from C compiler toolchains. Otoh, we can't use an ISO C spec-conforming c-preprocessor, as that would conflict even more heavily w/ Haskell's grammar, to the point of being impractical. [1]: https://gcc.gnu.org/onlinedocs/cpp/Traditional-Mode.html From allbery.b at gmail.com Wed May 6 16:47:28 2015 From: allbery.b at gmail.com (Brandon Allbery) Date: Wed, 6 May 2015 12:47:28 -0400 Subject: RFC: "Native -XCPP" Proposal In-Reply-To: <20150506153605.GD1459@xobs-novena> References: <87zj5i55gs.fsf@gmail.com> <20150506153605.GD1459@xobs-novena> Message-ID: On Wed, May 6, 2015 at 11:36 AM, Stephen Paul Weber < singpolyma at singpolyma.net> wrote: > Yes. This is one of my favourite things in GHC-land -- that an existing, > good-enough, standardised, and widely-deployed solution was chosen over a > NiH reinvention of preprocessing I have to assume my irony detector is broken as well. Or maybe I should just assume that "all the world's Linux with gcc" is assumed to be forever true and forever reliable by "all right-thinking people" so let's just sweep the nonissue under the rug because it can oh so obviously never be a real issue.... Because I had to face this back a couple decades ago, when my employer ported an application written in a 4GL (database language) to SCO Unix.
The 4GL assumed cpp was the ever reliable pcc one and broke very badly when SCO used one integrated into its lexer (making it even more tightly wedded to C syntax than clang's). Eventually we replaced its cpp with a wrapper that ran m4 and redid everything else in m4's syntax. Which is why I was always a bit worried about ghc relying on cpp, was unsurprised when clang caused issues, and am rather annoyed that there are people who believe that they can just ignore it because REAL users will always be on Linux with gcc and all them furriners using weird OSes like OS X and FreeBSD can safely be ignored with their not-the-One-True-OS-and-compiler platforms. Additional historical note that I assume True Believers will ignore as meaningless: X11 used to make the same assumption that cpp was always and forever guaranteed to be friendly to non-C and this still shows at times in things like xrdb resource databases. They did accept the inevitable and (mostly) stop abusing it that way, and are now moving away from imake which likewise assumes it's safe to use cpp on Makefiles. (And yes, I encounter the same inability to comprehend or accept change there.) -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From howard_b_golden at yahoo.com Wed May 6 16:53:08 2015 From: howard_b_golden at yahoo.com (Howard B. Golden) Date: Wed, 6 May 2015 16:53:08 +0000 (UTC) Subject: RFC: "Native -XCPP" Proposal In-Reply-To: <87a8xhvh48.fsf@feelingofgreen.ru> References: <87a8xhvh48.fsf@feelingofgreen.ru> Message-ID: <895651832.1251542.1430931188127.JavaMail.yahoo@mail.yahoo.com> At the risk of antagonizing some (most? all?) of you, how about... -XCPP stands for the native CPP -XGNUCPP stands for GNU's GCC CPP -XClangCPP stands for Clang's CPP -XCPPHS stands for CPPHS ... 
with the hope that TH is the future? Howard From carter.schonwald at gmail.com Wed May 6 19:42:09 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 6 May 2015 15:42:09 -0400 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: <1430727263.1930.9.camel@joachim-breitner.de> <5547AF0B.2040201@artyom.me> <5547B33D.4000205@artyom.me> Message-ID: Hblas is what I recommend https://hackage.haskell.org/package/hblas Doesn't have everything yet. But the design is a little better. On Wednesday, May 6, 2015, Takenobu Tani wrote: > Hi Carter, > > Uh, excuse me, you are a BLAS master [1] ;-) > > And thank you for teaching me about #numerical-haskell. > I'll learn it. I like effective performance and abstraction. > > [1] http://hackage.haskell.org/package/linear-algebra-cblas > > Thank you, > Takenobu > > > 2015-05-05 22:52 GMT+09:00 Carter Schonwald >: > >> Hey Takenobu, >> Yes, both are super useful! I've certainly used the Intel architecture >> manual a few times and I wrote/maintain (in my biased opinion) one of the >> nicer blas ffi bindings on hackage. >> >> It's worth mentioning that for haskellers who are interested in either >> mathematical computation or performance engineering, on freenode the >> #numerical-haskell channel is pretty good. Though again I'm a bit biased >> about the nice community there >> >> >> On Tuesday, May 5, 2015, Takenobu Tani > > wrote: >> >>> Hi, >>> >>> Is this useful? >>> >>> BLAS (Basic Linear Algebra Subprograms) >>> http://www.netlib.org/blas/ >>> http://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms >>> >>> Regards, >>> Takenobu >>> >>> >>> >>> 2015-05-05 22:06 GMT+09:00 Takenobu Tani : >>> >>>> Hi, >>>> >>>> Related information.
>>>> >>>> Intel FMA's information (hardware dependent) is here: >>>> >>>> Chapter 11 >>>> >>>> Intel 64 and IA-32 Architectures Optimization Reference Manual >>>> >>>> http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf >>>> >>>> >>>> Of course, it is information that depends on the particular processor. >>>> And abstraction level is too low. >>>> >>>> PS >>>> I like Haskell's abstract naming convention more than "fma":-) >>>> >>>> Regards, >>>> Takenobu >>>> >>>> >>>> >>>> 2015-05-05 11:54 GMT+09:00 Carter Schonwald >>>> >: >>>>> >>>>> pardon the wall of text everyone, but I really want some FMA tooling >>>>> :) >>>>> >>>>> I am going to spend some time later this week and next adding FMA >>>>> primops to GHC and playing around with different ways to add it to Num >>>>> (which seems pretty straightforward, though I think we'd all agree it >>>>> shouldn't be exported by Prelude). And then depending on how Yitzchak's >>>>> reproposal of that exactly goes (or some iteration thereof) we can get >>>>> something useful/usable into 7.12 >>>>> >>>>> i have codes (ie *dotproducts*!!!!!) where a faster direct FMA for *exact >>>>> numbers* and a higher precision FMA for *approximate numbers* (*ie >>>>> floating point*) would help, and where I cant sanely use FMA if it lives >>>>> anywhere but Num unless I rub typeable everywhere and do runtime type >>>>> checks for applicable floating point types, which kinda destroys >>>>> parametricity in engineering nice things. >>>>> >>>>> @levent: ghc doesn't do any optimization for floating point arithmetic >>>>> (aside from 1-2 very simple things that are possibly questionable), and >>>>> until ghc has support for precisely emulating high precision floating point >>>>> computation in a portable way, probably wont have any interesting floating >>>>> point computation. Mandating that fma a b c === a*b+c for inexact number >>>>> datatypes doesn't quite make sense to me.
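The double rounding behind that point is easy to see in GHCi; the following sketch (constants chosen purely to make the lost bits visible) shows `(*)` discarding low-order bits that a genuine fused multiply-add would preserve:

```haskell
-- Why fma a b c need not equal a*b + c for Double: (*) rounds once
-- and (+) rounds again, while a fused multiply-add rounds only once
-- at the end.
main :: IO ()
main = do
  let a = 1 + 2 ** (-29) :: Double
  -- Exactly, a*a = 1 + 2^-28 + 2^-58, but at this magnitude 2^-58 is
  -- far below half an ulp, so the product rounds it away:
  print (a * a == 1 + 2 ** (-28))    -- True
  -- Hence the residual computed with separate (*) and (+) is zero,
  -- while a true fma a a (negate (1 + 2**(-28))) would return 2^-58:
  print (a * a - (1 + 2 ** (-28)))   -- 0.0
```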
Relatedly, it's a GOOD thing ghc >>>>> is conservative about optimizing floating point, because it makes doing >>>>> correct stability analyses tractable! I look forward to the day that GHC >>>>> gets a bit more sophisticated about optimizing floating point computation, >>>>> but that day is still a ways off. >>>>> >>>>> relatedly: FMA for float and double are not generally going to be >>>>> faster than the individual primitive operations, merely more accurate when >>>>> used carefully. >>>>> >>>>> point being*, i'm +1 on adding some manner of FMA operations to Num* >>>>> (only sane place to put it where i can actually use it for a general use >>>>> library) and i dont really care if we name it fusedMultiplyAdd, >>>>> multiplyAndAdd, accursedFusionOfSemiRingOperations, or fma. i'd favor >>>>> "fusedMultiplyAdd" if we want a descriptive name that will be familiar to >>>>> experts yet easy to google for the curious. >>>>> >>>>> to repeat: i'm going to do some leg work so that the double and float >>>>> prims are portably exposed by ghc-prims (i've spoken with several ghc devs >>>>> about that, and they agree to its value, and thats a decision outside of >>>>> scope of the libraries purview), and I do hope we can come to a consensus about >>>>> putting it in Num so that expert library authors can upgrade the guarantees >>>>> that they can provide end users without imposing any breaking changes to >>>>> end users. >>>>> >>>>> A number of folks have brought up "but Num is broken" as a counter >>>>> argument to adding FMA support to Num. I emphatically agree num is borken >>>>> :), BUT! I do also believe that fixing up Num prelude has the burden of >>>>> providing a whole cloth design for an alternative design that we can get >>>>> broad consensus/adoption with. That will happen by dint of actual >>>>> experimentation and usage.
>>>>> >>>>> Point being, adding FMA doesn't further entrench current Num any more >>>>> than it already is, it just provides expert library authors with a >>>>> transparent way of improving the experience of their users with a free >>>>> upgrade in answer accuracy if used carefully. Additionally, when Num's >>>>> "semiring ish equational laws" are framed with respect to approximate >>>>> forwards/backwards stability, there is a perfectly reasonable law for FMA. >>>>> I am happy to spend some time trying to write that up more precisely IFF >>>>> that will tilt those in opposition to being in favor. >>>>> >>>>> I dont need FMA to be exposed by *prelude/base*, merely by *GHC.Num* >>>>> as a method therein for Num. If that constitutes a different and *more >>>>> palatable proposal* than what people have articulated so far (by >>>>> discouraging casual use by dint of hiding) then I am happy to kick off a >>>>> new thread with that concrete design choice. >>>>> >>>>> If theres a counter argument thats a bit more substantive than "Num is >>>>> for exact arithmetic" or "Num is wrong" that will sway me to the other >>>>> side, i'm all ears, but i'm skeptical of that. >>>>> >>>>> I emphatically support those who are displeased with Num to prototype >>>>> some alternative designs in userland, I do think it'd be great to figure >>>>> out a new Num prelude we can migrate Haskell / GHC to over the next 2-5 >>>>> years, but again any such proposal really needs to be realized whole cloth >>>>> before it makes its way to being a libraries list proposal. >>>>> >>>>> >>>>> again, pardon the wall of text, i just really want to have nice things >>>>> :) >>>>> -Carter >>>>> >>>>> >>>>> On Mon, May 4, 2015 at 2:22 PM, Levent Erkok wrote: >>>>> >>>>>> I think `mulAdd a b c` should be implemented as `a*b+c` even for >>>>>> Double/Float. It should only be an "optimization" (as in modular >>>>>> arithmetic), not a semantic changing operation. Thus justifying the >>>>>> optimization.
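For concreteness, the semantics-preserving `mulAdd` with a default that Levent describes could be sketched as follows. The class below is hypothetical (the thread proposes a method on base's `Num` itself, which can't be redefined here), but it shows the shape of the idea:

```haskell
-- Hypothetical sketch of mulAdd-with-a-default. The default keeps every
-- existing instance's semantics; instances with something faster (e.g.
-- modular arithmetic using a single 'mod') or more precise (an FMA
-- primop for Double) can override it.
class Num a => MulAdd a where
  mulAdd :: a -> a -> a -> a
  mulAdd x y z = x * y + z        -- default: literally (*) then (+)

instance MulAdd Integer           -- exact type: the default is already right
instance MulAdd Rational          -- likewise

-- A Double instance would override mulAdd with a fused primop once GHC
-- exposes one; until then it simply inherits the default.
instance MulAdd Double
```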
>>>>>> >>>>>> "fma" should be the "more-precise" version available for >>>>>> Float/Double. I don't think it makes sense to have "fma" for other types. >>>>>> That's why I'm advocating "mulAdd" to be part of "Num" for optimization >>>>>> purposes; and "fma" reserved for true IEEE754 types and semantics. >>>>>> >>>>>> I understand that Edward doesn't like this as this requires a >>>>>> different class; but really, that's the price to pay if we claim Haskell >>>>>> has proper support for IEEE754 semantics. (Which I think it should.) The >>>>>> operation is just different. It also should account for the rounding-modes >>>>>> properly. >>>>>> >>>>>> I think we can pull this off just fine; and Haskell can really lead >>>>>> the pack here. The situation with floats is even worse in other languages. >>>>>> This is our chance to make a proper implementation, and we have the right >>>>>> tools to do so. >>>>>> >>>>>> -Levent. >>>>>> >>>>>> On Mon, May 4, 2015 at 10:58 AM, Artyom wrote: >>>>>> >>>>>>> On 05/04/2015 08:49 PM, Levent Erkok wrote: >>>>>>> >>>>>>> Artyom: That's precisely the point. The true IEEE754 variants where >>>>>>> precision does matter should be part of a different class. What Edward and >>>>>>> Yitz want is an "optimized" multiply-add where the semantics is the same >>>>>>> but one that goes faster. >>>>>>> >>>>>>> No, it looks to me that Edward wants to have a more precise >>>>>>> operation in Num: >>>>>>> >>>>>>> I'd have to make a second copy of the function to even try to see >>>>>>> the precision win. >>>>>>> >>>>>>> Unless I'm wrong, you can't have the following things simultaneously: >>>>>>> >>>>>>> 1. the compiler is free to substitute *a+b*c* with *mulAdd a b c* >>>>>>> 2. *mulAdd a b c* is implemented as *fma* for Doubles (and is >>>>>>> more precise) >>>>>>> 3. 
Num operations for Double (addition and multiplication) >>>>>>> always conform to IEEE754 >>>>>>> >>>>>>> The true IEEE754 variants where precision does matter should be >>>>>>> part of a different class. >>>>>>> >>>>>>> So, does it mean that you're fine with not having point #3 because >>>>>>> people who need it would be able to use a separate class for IEEE754 floats? >>>>>>> >>>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> Libraries mailing list >>>>>> Libraries at haskell.org >>>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>>>>> >>>>>> >>>>> >>>>> _______________________________________________ >>>>> Libraries mailing list >>>>> Libraries at haskell.org >>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>>>> >>>>> >>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.stolarek at p.lodz.pl Wed May 6 19:55:56 2015 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Wed, 6 May 2015 21:55:56 +0200 Subject: RFC: "Native -XCPP" Proposal In-Reply-To: References: <87zj5i55gs.fsf@gmail.com> <201505061338.16090.jan.stolarek@p.lodz.pl> Message-ID: <201505062155.56858.jan.stolarek@p.lodz.pl> > I want to emphasize that cpphs is actively maintained as it's pointed > out in https://ghc.haskell.org/trac/ghc/wiki/Proposal/NativeCp. The > Agda team has found some cpphs bugs which have been *quickly* fixed by > cpphs's author, Malcolm Wallace. Yes. It was not my intention to imply in any way that cpphs suffers from maintenance issues. > Unfortunately I have not been able to track down the problem mentioned by Janek Stolarek. Yes, and that's what gets me worried. I suppose the problem was somehow related to my locale settings although we were unable to track down the cause. I also recall someone else reported being affected by the same problem.
So, until that problem is solved I would say it is a blocker as it would essentially make development of GHC impossible for some people. Janek > > On 6 May 2015 at 06:38, Jan Stolarek wrote: > > One thing to keep in mind is that while cpphs will solve some problems of > > system-cpp, it will also bring problems of its own. I, for one, have run > > into problems with it when installing Agda. There is a very long thread > > here: > > > > https://lists.chalmers.se/pipermail/agda/2014/006975.html > > > > and twice as many messages in my private inbox. We've reached no conclusion about > > the cause and the only solution was to use system-cpp. > > > > Regarding licensing issues: perhaps we should simply ask Malcolm Wallace > > if he would consider changing the license for the sake of GHC? Or perhaps > > he could grant a custom-tailored license to the GHC project? After all, > > the project page [1] says: " If that's a problem for you, contact me to > > make other arrangements." > > > > Janek > > > > [1] http://projects.haskell.org/cpphs/ > > > > Dnia środa, 6 maja 2015, Herbert Valerio Riedel napisał: > >> Hello *, > >> > >> As you may be aware, GHC's `{-# LANGUAGE CPP #-}` language extension > >> currently relies on the system's C-compiler bundled `cpp` program to > >> provide a "traditional mode" c-preprocessor. > >> > >> This has caused several problems in the past, since parsing Haskell code > >> with a preprocessor mode designed for use with C's tokenizer has already caused > >> quite some problems[1] in the past. I'd like to see GHC 7.12 > >> adopt an implementation of `-XCPP` that does not rely on the shaky > >> system-`cpp` foundation. To this end I've created a wiki page > >> > >> https://ghc.haskell.org/trac/ghc/wiki/Proposal/NativeCpp > >> > >> to describe the actual problems in more detail, and a couple of possible > >> ways forward. Ideally, we'd simply integrate `cpphs` into GHC > >> (i.e. "plan 2").
However, due to `cpphs`s non-BSD3 license this should be > >> discussed and debated since it affects the overall-license of the GHC > >> code-base, which may or may not be a problem to GHC's user-base (and > >> that's what I hope this discussion will help to find out). > >> > >> So please go ahead and read the Wiki page... and then speak your mind! > >> > >> > >> Thanks, > >> HVR > >> > >> > >> [1]: ...does anybody remember the issues Haskell packages (& GHC) > >> encountered when Apple switched to the Clang tool-chain, thereby > >> causing code using `-XCPP` to suddenly break due to subtly > >> different `cpp`-semantics? > >> > >> _______________________________________________ > >> Glasgow-haskell-users mailing list > >> Glasgow-haskell-users at haskell.org > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users > > > > --- > > Politechnika Łódzka > > Lodz University of Technology > > > > Treść tej wiadomości zawiera informacje przeznaczone tylko dla adresata. > > Jeżeli nie jesteście Państwo jej adresatem, bądź otrzymaliście ją przez > > pomyłkę prosimy o powiadomienie o tym nadawcy oraz trwałe jej usunięcie. > > > > This email contains information intended solely for the use of the > > individual to whom it is addressed. If you are not the intended recipient > > or if you have received this message in error, please notify the sender > > and delete it from your system. > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs --- Politechnika Łódzka Lodz University of Technology Treść tej wiadomości zawiera informacje przeznaczone tylko dla adresata. Jeżeli nie jesteście Państwo jej adresatem, bądź otrzymaliście ją przez pomyłkę prosimy o powiadomienie o tym nadawcy oraz trwałe jej usunięcie. This email contains information intended solely for the use of the individual to whom it is addressed.
If you are not the intended recipient or if you have received this message in error, please notify the sender and delete it from your system. From asr at eafit.edu.co Wed May 6 21:47:50 2015 From: asr at eafit.edu.co (=?UTF-8?B?QW5kcsOpcyBTaWNhcmQtUmFtw61yZXo=?=) Date: Wed, 6 May 2015 16:47:50 -0500 Subject: RFC: "Native -XCPP" Proposal In-Reply-To: <201505062155.56858.jan.stolarek@p.lodz.pl> References: <87zj5i55gs.fsf@gmail.com> <201505061338.16090.jan.stolarek@p.lodz.pl> <201505062155.56858.jan.stolarek@p.lodz.pl> Message-ID: Hi Janek, On 6 May 2015 at 14:55, Jan Stolarek wrote: > Yes, and that's what gets me worried. I suppose the problem was somehow related to my locale > settings although we were unable to track down the cause. I also recall someone else reported > being affected by the same problem. AFAIK, the only cpphs-Agda open problem is your problem. I would like to know if anyone else has some problem. If so, I propose to move the discussion to the Agda developers list (agda-dev at lists.chalmers.se). Best, -- Andrés From hvriedel at gmail.com Thu May 7 07:47:20 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Thu, 07 May 2015 09:47:20 +0200 Subject: RFC: "Native -XCPP" Proposal In-Reply-To: <895651832.1251542.1430931188127.JavaMail.yahoo@mail.yahoo.com> (Howard B. Golden's message of "Wed, 6 May 2015 16:53:08 +0000 (UTC)") References: <87a8xhvh48.fsf@feelingofgreen.ru> <895651832.1251542.1430931188127.JavaMail.yahoo@mail.yahoo.com> Message-ID: <87sib896d3.fsf@gnu.org> On 2015-05-06 at 18:53:08 +0200, Howard B. Golden wrote: > At the risk of antagonizing some (most? all?) of you, how about...
> > -XCPP stands for the native CPP > -XGNUCPP stands for GNU's GCC CPP > -XClangCPP stands for Clang's CPP > -XCPPHS stands for CPPHS Assuming this was a serious suggestion, the benefit is that you could clearly mark what CPP you want to develop against, but OTOH, we'd lose backward compat w/ Haskell compilers only knowing about the old -XCPP but not the other new variants of the language-pragma. Moreover, there can now be packages that require clang-cpp, while others require gcc-cpp, and I don't think it can be assumed that both are available on every GHC installation. So it could cause packages to fail compiling simply because the respective CPP-flavor is missing. On the bright side, this would maybe give us the opportunity to coin the new term "CPP Hell" =) Cheers, hvr From takenobu.hs at gmail.com Thu May 7 12:40:12 2015 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Thu, 7 May 2015 21:40:12 +0900 Subject: Proposal: Add "fma" to the RealFloat class In-Reply-To: References: <1430727263.1930.9.camel@joachim-breitner.de> <5547AF0B.2040201@artyom.me> <5547B33D.4000205@artyom.me> Message-ID: Hi Carter, Thank you for teaching me again. I'll learn by it. well-established:-) Thank you, Takenobu 2015-05-07 4:42 GMT+09:00 Carter Schonwald : > Hblas is what I recommend > https://hackage.haskell.org/package/hblas > > Doesn't have everything yet. But the design is a lite better. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvr at gnu.org Thu May 7 19:54:51 2015 From: hvr at gnu.org (Herbert Valerio Riedel) Date: Thu, 07 May 2015 21:54:51 +0200 Subject: RFC: "Native -XCPP" Proposal In-Reply-To: <201505061338.16090.jan.stolarek@p.lodz.pl> (Jan Stolarek's message of "Wed, 6 May 2015 13:38:16 +0200") References: <87zj5i55gs.fsf@gmail.com> <201505061338.16090.jan.stolarek@p.lodz.pl> Message-ID: <87h9ro5fjo.fsf@gnu.org> On 2015-05-06 at 13:38:16 +0200, Jan Stolarek wrote: [...] 
> Regarding licensing issues: perhaps we should simply ask Malcolm > Wallace if he would consider changing the license for the sake of GHC? > Or perhaps he could grant a custom-tailored license to the GHC > project? After all, the project page [1] says: " If that's a problem > for you, contact me to make other arrangements." Fyi, Neil talked to him[1]: | I talked to Malcolm. His contention is that it doesn't actually change | the license of the ghc package. As such, it's just a single extra | license to add to a directory full of licenses, which is no big deal. [1]: http://www.reddit.com/r/haskell/comments/351pur/rfc_native_xcpp_for_ghc_proposal/cr1e5n3 From simonpj at microsoft.com Thu May 7 20:34:05 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 7 May 2015 20:34:05 +0000 Subject: Cabal and simultaneous installations of the same package In-Reply-To: References: <68326f3ebbd943768effe6b0f2ff522c@DB4PR30MB030.064d.mgd.msft.net> Message-ID: <500eca8d0bb845cea05a03a19a3dffa2@DB4PR30MB030.064d.mgd.msft.net> Dear Cabal developers I guess everyone is busy, but I feel a bit stuck on knowing how to make progress on this thread. Thanks Simon From: Simon Peyton Jones Sent: 20 April 2015 09:12 To: Simon Peyton Jones; cabal-devel at haskell.org Cc: haskell-infrastructure at community.galois.com; Haskell Libraries; ghc-devs at haskell.org Subject: RE: Cabal and simultaneous installations of the same package Friends We started this thread (below) a month ago, but it has once more run out of steam. The last contribution was from Vishal Agrawal I am already planning to do a GSoC project based on it with a slightly larger aim. You can find my work in progress proposal at https://gist.github.com/fugyk/37510958b52589737274. Also I have written a patch to make cabal non-destructive at https://github.com/fugyk/cabal/commit/45ec5edbaada1fd063c67d6109e69efa0e732e6a. Can you review the proposal and give me suggestions. 
I don't feel qualified to drive this process, but I do think it's important to complete it. (I might be wrong about this too... please say so if so.) Nor do I understand why it's difficult to tie up the bow; the underlying infrastructure work is done. Duncan especially: how can we make progress? Do you think it's important to make progress, or are other things in cabal-land more important? My reason for thinking that it's important is that it appears to be the root cause of many people's difficulties with Haskell and Cabal. It might not be a panacea for all ills; but it might be a cheap remedy for a significant proportion of ills. And that would be a Good Thing. Thanks Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Simon | Peyton Jones | Sent: 23 March 2015 08:46 | To: cabal-devel at haskell.org | Cc: haskell-platform at projects.haskell.org; haskell- | infrastructure at community.galois.com; Haskell Libraries; ghc- | devs at haskell.org | Subject: Cabal and simultaneous installations of the same package | | Dear Cabal developers | | You'll probably have seen the thread about the Haskell Platform. | | Among other things, this point arose: | | | Another thing we should fix is the (now false) impression that HP | | gets in the way of installing other packages and versions due to | cabal hell. | | People mean different things by "cabal hell", but the inability to | simultaneously install multiple versions of the same package, | compiled against different dependencies is certainly one of them, | and I think it is the one that Yitzchak is referring to here. | | But for some time now GHC has allowed multiple versions of the same package | (compiled against different dependencies) to be installed | simultaneously. So all we need to do is to fix Cabal to allow it too, | and thereby kill off a huge class of cabal-hell problems at one blow. | | But time has passed and it hasn't happened.
Is this because I'm | misunderstanding? Or because it is harder than I think? Or because | there are much bigger problems? Or because there is insufficient | effort available? Or what? | | Unless I'm way off beam, this "multiple installations of the same | package" thing has been a huge pain forever, and the solution is within | our grasp. What's stopping us grasping it? | | Simon | | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From malcolm.wallace at me.com Thu May 7 20:41:22 2015 From: malcolm.wallace at me.com (Malcolm Wallace) Date: Thu, 07 May 2015 21:41:22 +0100 Subject: RFC: "Native -XCPP" Proposal In-Reply-To: <87h9ro5fjo.fsf@gnu.org> References: <87zj5i55gs.fsf@gmail.com> <201505061338.16090.jan.stolarek@p.lodz.pl> <87h9ro5fjo.fsf@gnu.org> Message-ID: I also note that in this discussion, so far not a single person has said that the cpphs licence would actually be a problem for them. Regards, Malcolm On 7 May 2015, at 20:54, Herbert Valerio Riedel wrote: > On 2015-05-06 at 13:38:16 +0200, Jan Stolarek wrote: > > [...] > >> Regarding licensing issues: perhaps we should simply ask Malcolm >> Wallace if he would consider changing the license for the sake of GHC? >> Or perhaps he could grant a custom-tailored license to the GHC >> project? After all, the project page [1] says: " If that's a problem >> for you, contact me to make other arrangements." > > Fyi, Neil talked to him[1]: > > | I talked to Malcolm. His contention is that it doesn't actually change > | the license of the ghc package. As such, it's just a single extra > | license to add to a directory full of licenses, which is no big deal. 
> > > [1]: http://www.reddit.com/r/haskell/comments/351pur/rfc_native_xcpp_for_ghc_proposal/cr1e5n3 From the.dead.shall.rise at gmail.com Thu May 7 21:19:46 2015 From: the.dead.shall.rise at gmail.com (Mikhail Glushenkov) Date: Thu, 7 May 2015 23:19:46 +0200 Subject: Cabal and simultaneous installations of the same package In-Reply-To: <500eca8d0bb845cea05a03a19a3dffa2@DB4PR30MB030.064d.mgd.msft.net> References: <68326f3ebbd943768effe6b0f2ff522c@DB4PR30MB030.064d.mgd.msft.net> <500eca8d0bb845cea05a03a19a3dffa2@DB4PR30MB030.064d.mgd.msft.net> Message-ID: On 7 May 2015 at 22:34, Simon Peyton Jones wrote: > Dear Cabal developers > > I guess everyone is busy, but I feel a bit stuck on knowing how to make > progress on this thread. Vishal Agrawal's GSoC proposal has been accepted. I guess we'll have to wait and see what comes out of it now. From gershomb at gmail.com Thu May 7 21:28:32 2015 From: gershomb at gmail.com (Gershom B) Date: Thu, 7 May 2015 17:28:32 -0400 Subject: Cabal and simultaneous installations of the same package In-Reply-To: References: <68326f3ebbd943768effe6b0f2ff522c@DB4PR30MB030.064d.mgd.msft.net> <500eca8d0bb845cea05a03a19a3dffa2@DB4PR30MB030.064d.mgd.msft.net> Message-ID: For the info of all, here is the proposal: https://gist.github.com/fugyk/37510958b52589737274 The mentors are Ryan Trinkle and Thomas Tuegel. Hopefully as the project proceeds, Vishal will be checking in with us, asking questions and seeking clarification where necessary, etc. --Gershom On Thu, May 7, 2015 at 5:19 PM, Mikhail Glushenkov < the.dead.shall.rise at gmail.com> wrote: > On 7 May 2015 at 22:34, Simon Peyton Jones wrote: > > Dear Cabal developers > > > > I guess everyone is busy, but I feel a bit stuck on knowing how to make > > progress on this thread. > > Vishal Agrawal's GSoC proposal has been accepted. I guess we'll have > to wait and see what comes out of it now. 
> _______________________________________________ > cabal-devel mailing list > cabal-devel at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/cabal-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From winterkoninkje at gmail.com Fri May 8 03:45:48 2015 From: winterkoninkje at gmail.com (wren romano) Date: Thu, 7 May 2015 23:45:48 -0400 Subject: [Haskell-cafe] RFC: "Native -XCPP" Proposal In-Reply-To: References: <87zj5i55gs.fsf@gmail.com> Message-ID: On Wed, May 6, 2015 at 9:05 AM, Alan & Kim Zimmerman wrote: > Perhaps it makes sense to scan hackage to find all the different CPP idioms > that are actually used in Haskell code, if it is a small/well-defined set it > may be worth writing a simple custom preprocessor. Conditional imports are far and away the most commonly used idiom. Second most common, I'd say, is specifying GHC-specific vs compiler-generic implementations of top-level functions (e.g., using GHC.Exts.build or not). For both of these it's sufficient to have the #if construction plus everything needed for the conditional expressions. However, while the #if construction covers the vast majority of use cases, it doesn't cover all of them. Macros are also important. For example, a number of low-level libraries will use macros for things like having assertions which are either compiled as runtime checks, or as nothing, depending on a Cabal flag. Of course, there are plenty of other places where we want to use macros in low-level code, either to force inlining, or to have conditional compilation of (non-top-level) expressions that show up over and over. That these idioms aren't more common is just because there aren't more people working on such low-level code. In theory TH should be able to handle this stuff, but TH is a verbose sledgehammer for these sorts of problems, and using TH means restricting yourself to being GHC-only. 
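The two idioms wren ranks most common look like this in practice. This is a sketch: the version cutoff and names are illustrative, the `MIN_VERSION_*` macros are the ones Cabal generates, and the assertion macro mirrors the pattern GHC's own sources use:

```haskell
{-# LANGUAGE CPP #-}
module Example (sortOnKey, checked) where

-- Idiom 1: conditional imports keyed on a dependency's version.
#if MIN_VERSION_base(4,8,0)
import Data.List (sortOn)
#else
import Data.List (sortBy)
import Data.Ord (comparing)
#endif

sortOnKey :: Ord b => (a -> b) -> [a] -> [a]
#if MIN_VERSION_base(4,8,0)
sortOnKey = sortOn
#else
sortOnKey f = sortBy (comparing f)
#endif

-- Idiom 2 (macros): a runtime check that compiles away unless a Cabal
-- flag defines CHECK_ASSERTIONS.
#ifdef CHECK_ASSERTIONS
#define ASSERT(cond) (if (cond) then id else error "assertion failed")
#else
#define ASSERT(cond) id
#endif

checked :: Int -> Int
checked n = ASSERT(n >= 0) (n + 1)
```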
-- Live well, ~wren From malcolm.wallace at me.com Fri May 8 06:07:56 2015 From: malcolm.wallace at me.com (Malcolm Wallace) Date: Fri, 08 May 2015 07:07:56 +0100 Subject: [Haskell-cafe] RFC: "Native -XCPP" Proposal In-Reply-To: <8860F6E7-BF3D-4D6F-8506-7154DA264AF6@cs.otago.ac.nz> References: <87a8xhvh48.fsf@feelingofgreen.ru> <895651832.1251542.1430931188127.JavaMail.yahoo@mail.yahoo.com> <87sib896d3.fsf@gnu.org> <8860F6E7-BF3D-4D6F-8506-7154DA264AF6@cs.otago.ac.nz> Message-ID: <649116C5-21A5-4351-9CF5-FDB6BA178EB4@me.com> On 8 May 2015, at 00:06, Richard A. O'Keefe wrote: > I think it's important that there be *one* > "cpp" used by Haskell. fpp is under 4 kSLOC > of C, and surely Haskell can do a lot better. FWIW, cpphs is about 1600 LoC today. Regards, Malcolm From malcolm.wallace at me.com Fri May 8 06:10:22 2015 From: malcolm.wallace at me.com (Malcolm Wallace) Date: Fri, 08 May 2015 07:10:22 +0100 Subject: [Haskell-cafe] RFC: "Native -XCPP" Proposal In-Reply-To: References: <87zj5i55gs.fsf@gmail.com> <201505061338.16090.jan.stolarek@p.lodz.pl> <87h9ro5fjo.fsf@gnu.org> Message-ID: <567D3794-E088-45A1-8687-A877D47F59C1@me.com> Exactly. My post was an attempt to elicit response from anyone to whom it matters. There is no point in worrying about hypothetical licensing problems - let's hear about the real ones. Regards, Malcolm On 7 May 2015, at 22:15, Tomas Carnecky wrote: > That doesn't mean those people don't exist. Maybe they do but are too afraid to speak up (due to corporate policy or whatever). > > On Thu, May 7, 2015 at 10:41 PM, Malcolm Wallace wrote: > I also note that in this discussion, so far not a single person has said that the cpphs licence would actually be a problem for them. > > Regards, > Malcolm > > On 7 May 2015, at 20:54, Herbert Valerio Riedel wrote: > > > On 2015-05-06 at 13:38:16 +0200, Jan Stolarek wrote: > > > > [...] 
> > > >> Regarding licensing issues: perhaps we should simply ask Malcolm > >> Wallace if he would consider changing the license for the sake of GHC? > >> Or perhaps he could grant a custom-tailored license to the GHC > >> project? After all, the project page [1] says: " If that's a problem > >> for you, contact me to make other arrangements." > > > > Fyi, Neil talked to him[1]: > > > > | I talked to Malcolm. His contention is that it doesn't actually change > > | the license of the ghc package. As such, it's just a single extra > > | license to add to a directory full of licenses, which is no big deal. > > > > > > [1]: http://www.reddit.com/r/haskell/comments/351pur/rfc_native_xcpp_for_ghc_proposal/cr1e5n3 > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > From c.maeder at jacobs-university.de Fri May 8 07:50:46 2015 From: c.maeder at jacobs-university.de (Christian Maeder) Date: Fri, 8 May 2015 09:50:46 +0200 Subject: [Haskell-cafe] RFC: "Native -XCPP" Proposal In-Reply-To: <649116C5-21A5-4351-9CF5-FDB6BA178EB4@me.com> References: <87a8xhvh48.fsf@feelingofgreen.ru> <895651832.1251542.1430931188127.JavaMail.yahoo@mail.yahoo.com> <87sib896d3.fsf@gnu.org> <8860F6E7-BF3D-4D6F-8506-7154DA264AF6@cs.otago.ac.nz> <649116C5-21A5-4351-9CF5-FDB6BA178EB4@me.com> Message-ID: <554C6AD6.2090103@jacobs-university.de> Hi, using cpphs is the right way to go! Rewriting it from scratch may be a good exercise but is (essentially) a waste of time. However, always asking Malcolm to get source changes into cpphs would be annoying. Therefore it would be great if the sources were just part of the ghc sources (under git). Another "problem" might be the dependency "polyparse" that is currently not part of the core libraries. (I guess that replacing polyparse by something else would also be a nice exercise.) 
So (for me) the only question is, if Malcolm is willing to transfer control over cpphs to the haskell-community (or ghc head) - of course with due acknowledgements! Cheers Christian On 08.05.2015 08:07, Malcolm Wallace wrote: > > On 8 May 2015, at 00:06, Richard A. O'Keefe wrote: > >> I think it's important that there be *one* >> "cpp" used by Haskell. fpp is under 4 kSLOC >> of C, and surely Haskell can do a lot better. > > FWIW, cpphs is about 1600 LoC today. > > Regards, > Malcolm > From hvriedel at gmail.com Fri May 8 09:02:09 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Fri, 08 May 2015 11:02:09 +0200 Subject: [Haskell-cafe] RFC: "Native -XCPP" Proposal In-Reply-To: <554C6AD6.2090103@jacobs-university.de> (Christian Maeder's message of "Fri, 8 May 2015 09:50:46 +0200") References: <87a8xhvh48.fsf@feelingofgreen.ru> <895651832.1251542.1430931188127.JavaMail.yahoo@mail.yahoo.com> <87sib896d3.fsf@gnu.org> <8860F6E7-BF3D-4D6F-8506-7154DA264AF6@cs.otago.ac.nz> <649116C5-21A5-4351-9CF5-FDB6BA178EB4@me.com> <554C6AD6.2090103@jacobs-university.de> Message-ID: <87zj5f1lym.fsf@gmail.com> Hello Christian, (I've re-CC'ed haskell-cafe, assuming this wasn't deliberate) On 2015-05-08 at 09:50:46 +0200, Christian Maeder wrote: > using cpphs is the right way to go! > > Rewriting it from scratch may be a good exercise but is (essentially) a > waste of time. > > However, always asking Malcolm to get source changes into cpphs would > be annoying. > > Therefore it would be great if the sources were just part of the ghc > sources (under git). > > Another "problem" might be the dependency "polyparse" that is currently > not part of the core libraries. A scheme was actually discussed privately to address this: We certainly don't want to expose cpphs/polyparse (and text!) as new packages in GHC's global pkg-db. Which we'd end up, if we handled cpphs as the other exposed boot libraries. 
Therefore we'd only use the few relevant modules from cpphs/polyparse as "other-modules" (i.e. internal hidden dependencies -- i.e. we wouldn't use cpphs/polyparse's .cabal files) compiled into GHC, but not exposed. We'd either create a new Git submodule to hold our "fork" of cpphs/polyparse, or just add it somewhere inside ghc.git > (I guess that replacing polyparse by something else would also be a nice > exercise.) > > So (for me) the only question is, if Malcolm is willing to transfer > control over cpphs to the haskell-community (or ghc head) - of course > with due acknowledgements! I don't think this will be necessary, as we don't need the cpphs-upstream to mirror each modifications immediately. The benefit of the scheme described above is that we'd be somewhat decoupled from cpphs' upstream, and can freely experiment in our "fork", and can sync up with Malcolm from time to time to merge improvements in both directions. -- hvr From metaniklas at gmail.com Fri May 8 09:28:08 2015 From: metaniklas at gmail.com (Niklas Larsson) Date: Fri, 8 May 2015 11:28:08 +0200 Subject: SV: [Haskell-cafe] RFC: "Native -XCPP" Proposal In-Reply-To: <87zj5f1lym.fsf@gmail.com> References: <87a8xhvh48.fsf@feelingofgreen.ru> <895651832.1251542.1430931188127.JavaMail.yahoo@mail.yahoo.com> <87sib896d3.fsf@gnu.org> <8860F6E7-BF3D-4D6F-8506-7154DA264AF6@cs.otago.ac.nz> <649116C5-21A5-4351-9CF5-FDB6BA178EB4@me.com> <554C6AD6.2090103@jacobs-university.de> <87zj5f1lym.fsf@gmail.com> Message-ID: <554c81a6.021d980a.468e.72fd@mx.google.com> If the intention is to use cpphs as a library, won't the license affect every program built with the GHC API? That seems to be a high price to pay. 
----- Original message ----- From: "Herbert Valerio Riedel" Sent: 2015-05-08 11:02 To: "Christian Maeder" Cc: "Malcolm Wallace" ; "glasgow-haskell-users at haskell.org" ; "libraries at haskell.org" ; "ghc-devs at haskell.org" ; "haskell-cafe" Subject: Re: [Haskell-cafe] RFC: "Native -XCPP" Proposal Hello Christian, (I've re-CC'ed haskell-cafe, assuming this wasn't deliberate) On 2015-05-08 at 09:50:46 +0200, Christian Maeder wrote: > using cpphs is the right way to go! > > Rewriting it from scratch may be a good exercise but is (essentially) a > waste of time. > > However, always asking Malcolm to get source changes into cpphs would > be annoying. > > Therefore it would be great if the sources were just part of the ghc > sources (under git). > > Another "problem" might be the dependency "polyparse" that is currently > not part of the core libraries. A scheme was actually discussed privately to address this: We certainly don't want to expose cpphs/polyparse (and text!) as new packages in GHC's global pkg-db, which we'd end up with if we handled cpphs as the other exposed boot libraries. Therefore we'd only use the few relevant modules from cpphs/polyparse as "other-modules" (i.e. internal hidden dependencies -- i.e. we wouldn't use cpphs/polyparse's .cabal files) compiled into GHC, but not exposed. We'd either create a new Git submodule to hold our "fork" of cpphs/polyparse, or just add it somewhere inside ghc.git > (I guess that replacing polyparse by something else would also be a nice > exercise.) > > So (for me) the only question is, if Malcolm is willing to transfer > control over cpphs to the haskell-community (or ghc head) - of course > with due acknowledgements! I don't think this will be necessary, as we don't need the cpphs-upstream to mirror each modification immediately.
The benefit of the scheme described above is that we'd be somewhat decoupled from cpphs' upstream, and can freely experiment in our "fork", and can sync up with Malcolm from time to time to merge improvements in both directions. -- hvr _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvriedel at gmail.com Fri May 8 09:39:24 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Fri, 08 May 2015 11:39:24 +0200 Subject: SV: [Haskell-cafe] RFC: "Native -XCPP" Proposal In-Reply-To: <554c81a6.021d980a.468e.72fd@mx.google.com> (Niklas Larsson's message of "Fri, 8 May 2015 11:28:08 +0200") References: <87a8xhvh48.fsf@feelingofgreen.ru> <895651832.1251542.1430931188127.JavaMail.yahoo@mail.yahoo.com> <87sib896d3.fsf@gnu.org> <8860F6E7-BF3D-4D6F-8506-7154DA264AF6@cs.otago.ac.nz> <649116C5-21A5-4351-9CF5-FDB6BA178EB4@me.com> <554C6AD6.2090103@jacobs-university.de> <87zj5f1lym.fsf@gmail.com> <554c81a6.021d980a.468e.72fd@mx.google.com> Message-ID: <87vbg31k8j.fsf@gmail.com> Hello, On 2015-05-08 at 11:28:08 +0200, Niklas Larsson wrote: > If the intention is to use cpphs as a library, won't the license > affect every program built with the GHC API? That seems to be a high > price to pay. Yes, every program linking the `ghc` package would be affected by LGPL+SLE albeit in a contained way, as it's mentioned on the Wiki page: | - As a practical consequence of the //LGPL with static-linking-exception// | (LGPL+SLE), **if no modifications are made to the `cpphs`-parts** | (i.e. the LGPL+SLE covered modules) of the GHC code-base, | **then there is no requirement to ship (or make available) any source code** | together with the binaries, even if other parts of the GHC code-base | were modified. 
However, don't forget we already have this issue w/ integer-gmp, and with that the LGPL is in full effect (i.e. w/o a static-linkage-exception!) In that context, the suggestion was made[1] to handle the cpphs-code like the GMP code, i.e. allow a compile-time configuration in the GHC build-system to build a cpphs-free (and/or GMP-free) GHC for those parties that need to avoid any LGPL-ish code whatsoever in their toolchain. Would that address this concern? [1]: http://www.reddit.com/r/haskell/comments/351pur/rfc_native_xcpp_for_ghc_proposal/cr1cdhb From mboes at tweag.net Fri May 8 10:10:33 2015 From: mboes at tweag.net (Mathieu Boespflug) Date: Fri, 8 May 2015 12:10:33 +0200 Subject: [Haskell-cafe] RFC: "Native -XCPP" Proposal In-Reply-To: <87vbg31k8j.fsf@gmail.com> References: <87a8xhvh48.fsf@feelingofgreen.ru> <895651832.1251542.1430931188127.JavaMail.yahoo@mail.yahoo.com> <87sib896d3.fsf@gnu.org> <8860F6E7-BF3D-4D6F-8506-7154DA264AF6@cs.otago.ac.nz> <649116C5-21A5-4351-9CF5-FDB6BA178EB4@me.com> <554C6AD6.2090103@jacobs-university.de> <87zj5f1lym.fsf@gmail.com> <554c81a6.021d980a.468e.72fd@mx.google.com> <87vbg31k8j.fsf@gmail.com> Message-ID: [Gah, wrong From: email address given the list subscriptions, sorry for the duplicates.] I'm unclear why cpphs needs to be made a dependency of the GHC API and included as a lib. Could you elaborate? (in the wiki page possibly) Currently, GHC uses the system preprocessor, as a separate process. Couldn't we for GHC 7.12 keep to exactly that, save for the fact that by default GHC would call the cpphs binary for preprocessing, and have the cpphs binary be available in GHC's install dir somewhere? fork()/execvce() is cheap. Certainly cheaper than the cost of compiling a single Haskell module. Can't we keep to this separate-(and-pluggable)-preprocessor-executable scheme? We'd sidestep most license tainting concerns that way. 
On 8 May 2015 at 11:39, Herbert Valerio Riedel wrote: > Hello, > > On 2015-05-08 at 11:28:08 +0200, Niklas Larsson wrote: >> If the intention is to use cpphs as a library, won't the license >> affect every program built with the GHC API? That seems to be a high >> price to pay. > > Yes, every program linking the `ghc` package would be affected by > LGPL+SLE albeit in a contained way, as it's mentioned on the Wiki page: > > | - As a practical consequence of the //LGPL with static-linking-exception// > | (LGPL+SLE), **if no modifications are made to the `cpphs`-parts** > | (i.e. the LGPL+SLE covered modules) of the GHC code-base, > | **then there is no requirement to ship (or make available) any source code** > | together with the binaries, even if other parts of the GHC code-base > | were modified. > > However, don't forget we already have this issue w/ integer-gmp, and > with that the LGPL is in full effect (i.e. w/o a static-linkage-exception!) > > In that context, the suggestion was made[1] to handle the cpphs-code > like the GMP code, i.e. allow a compile-time configuration in the GHC > build-system to build a cpphs-free (and/or GMP-free) GHC for those > parties that need to avoid any LGPL-ish code whatsoever in their > toolchain. > > Would that address this concern? 
> > > [1]: http://www.reddit.com/r/haskell/comments/351pur/rfc_native_xcpp_for_ghc_proposal/cr1cdhb > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From carter.schonwald at gmail.com Fri May 8 12:07:07 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Fri, 8 May 2015 08:07:07 -0400 Subject: RFC: "Native -XCPP" Proposal In-Reply-To: References: <87a8xhvh48.fsf@feelingofgreen.ru> <895651832.1251542.1430931188127.JavaMail.yahoo@mail.yahoo.com> <87sib896d3.fsf@gnu.org> <8860F6E7-BF3D-4D6F-8506-7154DA264AF6@cs.otago.ac.nz> <649116C5-21A5-4351-9CF5-FDB6BA178EB4@me.com> <554C6AD6.2090103@jacobs-university.de> <87zj5f1lym.fsf@gmail.com> <554c81a6.021d980a.468e.72fd@mx.google.com> <87vbg31k8j.fsf@gmail.com> Message-ID: Indeed. This is also how we use gcc and the llvm tooling. If we want the cpp tooling to be available as a library, that's a whole nother set of needs. Gmp lgpl I can brush under the rug at work because there's the various integer simple options, this gets a bit more squirrelly otherwise. Maybe it'd be simpler for two people to sit down for a weekend, one only narrating the cpphs code, the other only listening and paraphrasing it into a new program. Copyright on text only covers literal copying. Nontrivial rephrasing of everything plus some rejiggering of non local structure is not prevented by copyright law, and I doubt there are any patents in play. On Friday, May 8, 2015, Mathieu Boespflug wrote: > [Gah, wrong From: email address given the list subscriptions, sorry > for the duplicates.] > > I'm unclear why cpphs needs to be made a dependency of the GHC API and > included as a lib. Could you elaborate? (in the wiki page possibly) > > Currently, GHC uses the system preprocessor, as a separate process. 
> Couldn't we for GHC 7.12 keep to exactly that, save for the fact that > by default GHC would call the cpphs binary for preprocessing, and have > the cpphs binary be available in GHC's install dir somewhere? > > fork()/execvce() is cheap. Certainly cheaper than the cost of > compiling a single Haskell module. Can't we keep to this > separate-(and-pluggable)-preprocessor-executable scheme? We'd sidestep > most license tainting concerns that way. > > > On 8 May 2015 at 11:39, Herbert Valerio Riedel > wrote: > > Hello, > > > > On 2015-05-08 at 11:28:08 +0200, Niklas Larsson wrote: > >> If the intention is to use cpphs as a library, won't the license > >> affect every program built with the GHC API? That seems to be a high > >> price to pay. > > > > Yes, every program linking the `ghc` package would be affected by > > LGPL+SLE albeit in a contained way, as it's mentioned on the Wiki page: > > > > | - As a practical consequence of the //LGPL with > static-linking-exception// > > | (LGPL+SLE), **if no modifications are made to the `cpphs`-parts** > > | (i.e. the LGPL+SLE covered modules) of the GHC code-base, > > | **then there is no requirement to ship (or make available) any > source code** > > | together with the binaries, even if other parts of the GHC code-base > > | were modified. > > > > However, don't forget we already have this issue w/ integer-gmp, and > > with that the LGPL is in full effect (i.e. w/o a > static-linkage-exception!) > > > > In that context, the suggestion was made[1] to handle the cpphs-code > > like the GMP code, i.e. allow a compile-time configuration in the GHC > > build-system to build a cpphs-free (and/or GMP-free) GHC for those > > parties that need to avoid any LGPL-ish code whatsoever in their > > toolchain. > > > > Would that address this concern? 
> > > > > > [1]: > http://www.reddit.com/r/haskell/comments/351pur/rfc_native_xcpp_for_ghc_proposal/cr1cdhb > > _______________________________________________ > > Haskell-Cafe mailing list > > Haskell-Cafe at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Fri May 8 15:41:11 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Fri, 8 May 2015 11:41:11 -0400 Subject: RFC: "Native -XCPP" Proposal In-Reply-To: References: <87a8xhvh48.fsf@feelingofgreen.ru> <895651832.1251542.1430931188127.JavaMail.yahoo@mail.yahoo.com> <87sib896d3.fsf@gnu.org> <8860F6E7-BF3D-4D6F-8506-7154DA264AF6@cs.otago.ac.nz> <649116C5-21A5-4351-9CF5-FDB6BA178EB4@me.com> <554C6AD6.2090103@jacobs-university.de> <87zj5f1lym.fsf@gmail.com> <554c81a6.021d980a.468e.72fd@mx.google.com> <87vbg31k8j.fsf@gmail.com> Message-ID: To clarify my point more concretely: Adding cpphs the cli tool to ghc build process / bin dist has no more licensing implications than does using gcc as a compiler / assembler. Ie NONE Using cpphs as a library is Another discussion. But I don't think it's the one we're having today. On Friday, May 8, 2015, Carter Schonwald wrote: > Indeed. This is also how we use gcc and the llvm tooling. > > If we want the cpp tooling to be available as a library, that's a whole > nother set of needs. > > Gmp lgpl I can brush under the rug at work because there's the various > integer simple options, this gets a bit more squirrelly otherwise. > > Maybe it'd be simpler for two people to sit down for a weekend, one only > narrating the cpphs code, the other only listening and paraphrasing it into > a new program. Copyright on text only covers literal copying. 
Nontrivial > rephrasing of everything plus some rejiggering of non local structure is > not prevented by copyright law, and I doubt there are any patents in play. > > > > On Friday, May 8, 2015, Mathieu Boespflug > wrote: > >> [Gah, wrong From: email address given the list subscriptions, sorry >> for the duplicates.] >> >> I'm unclear why cpphs needs to be made a dependency of the GHC API and >> included as a lib. Could you elaborate? (in the wiki page possibly) >> >> Currently, GHC uses the system preprocessor, as a separate process. >> Couldn't we for GHC 7.12 keep to exactly that, save for the fact that >> by default GHC would call the cpphs binary for preprocessing, and have >> the cpphs binary be available in GHC's install dir somewhere? >> >> fork()/execvce() is cheap. Certainly cheaper than the cost of >> compiling a single Haskell module. Can't we keep to this >> separate-(and-pluggable)-preprocessor-executable scheme? We'd sidestep >> most license tainting concerns that way. >> >> >> On 8 May 2015 at 11:39, Herbert Valerio Riedel >> wrote: >> > Hello, >> > >> > On 2015-05-08 at 11:28:08 +0200, Niklas Larsson wrote: >> >> If the intention is to use cpphs as a library, won't the license >> >> affect every program built with the GHC API? That seems to be a high >> >> price to pay. >> > >> > Yes, every program linking the `ghc` package would be affected by >> > LGPL+SLE albeit in a contained way, as it's mentioned on the Wiki page: >> > >> > | - As a practical consequence of the //LGPL with >> static-linking-exception// >> > | (LGPL+SLE), **if no modifications are made to the `cpphs`-parts** >> > | (i.e. the LGPL+SLE covered modules) of the GHC code-base, >> > | **then there is no requirement to ship (or make available) any >> source code** >> > | together with the binaries, even if other parts of the GHC code-base >> > | were modified. 
>> > >> > However, don't forget we already have this issue w/ integer-gmp, and >> > with that the LGPL is in full effect (i.e. w/o a >> static-linkage-exception!) >> > >> > In that context, the suggestion was made[1] to handle the cpphs-code >> > like the GMP code, i.e. allow a compile-time configuration in the GHC >> > build-system to build a cpphs-free (and/or GMP-free) GHC for those >> > parties that need to avoid any LGPL-ish code whatsoever in their >> > toolchain. >> > >> > Would that address this concern? >> > >> > >> > [1]: >> http://www.reddit.com/r/haskell/comments/351pur/rfc_native_xcpp_for_ghc_proposal/cr1cdhb >> > _______________________________________________ >> > Haskell-Cafe mailing list >> > Haskell-Cafe at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dan.doel at gmail.com Fri May 8 16:12:28 2015 From: dan.doel at gmail.com (Dan Doel) Date: Fri, 8 May 2015 12:12:28 -0400 Subject: [Haskell-cafe] RFC: "Native -XCPP" Proposal In-Reply-To: References: <87zj5i55gs.fsf@gmail.com> Message-ID: vector generates a considerable amount of code using CPP macros. And with regard to other mails, I'm not too eager (personally) to port that to template Haskell, even though I'm no fan of CPP. The code generation being done is so dumb that CPP is pretty much perfect for it, and TH would probably just be more work (and it's certainly more work to write it again now that it's already written). 
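The mechanical, CPP-driven code generation described above can be illustrated with a tiny self-contained sketch. The SPECIALISE_SUM macro and the functions it stamps out are hypothetical, invented for illustration here, and are not the actual macros used in vector:

```haskell
{-# LANGUAGE CPP #-}

-- A miniature of CPP-style code generation: one macro stamps out a
-- monomorphic definition per element type. Macro and function names
-- here are hypothetical, purely to show the technique.
#define SPECIALISE_SUM(name, ty) \
name :: [ty] -> ty; \
name = foldr (+) 0

SPECIALISE_SUM(sumInt, Int)
SPECIALISE_SUM(sumDouble, Double)

main :: IO ()
main = do
  print (sumInt [1, 2, 3])      -- prints 6
  print (sumDouble [0.5, 1.5])  -- prints 2.0
```

Compiling this file runs it through the C preprocessor first, so both sumInt and sumDouble exist after expansion; the same trick scales to stamping out whole families of definitions, which is the kind of "dumb" generation CPP handles well and Template Haskell would make more laborious.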
On Wed, May 6, 2015 at 10:21 AM, Bardur Arantsson wrote: > On 06-05-2015 15:05, Alan & Kim Zimmerman wrote: > > Perhaps it makes sense to scan hackage to find all the different CPP > idioms > > that are actually used in Haskell code, if it is a small/well-defined set > > it may be worth writing a simple custom preprocessor. > > > > +1, I'll wager that the vast majority of usages are just for version > range checks. > > If there are packages that require more, they could just keep using the > system-cpp or, I, guess cpphs if it gets baked into GHC. Like you, I'd > want to see real evidence that that's actually worth the > effort/complication. > > Regards, > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From winterkoninkje at gmail.com Fri May 8 22:20:25 2015 From: winterkoninkje at gmail.com (wren romano) Date: Fri, 8 May 2015 18:20:25 -0400 Subject: [Haskell-cafe] RFC: "Native -XCPP" Proposal In-Reply-To: References: <87zj5i55gs.fsf@gmail.com> Message-ID: On Fri, May 8, 2015 at 12:12 PM, Dan Doel wrote: > vector generates a considerable amount of code using CPP macros. > > And with regard to other mails, I'm not too eager (personally) to port that > to template Haskell, even though I'm no fan of CPP. The code generation > being done is so dumb that CPP is pretty much perfect for it, and TH would > probably just be more work (and it's certainly more work to write it again > now that it's already written). Incidentally, if we really want to pursue the "get rid of CPP by building it into the GHC distro"... In recent years there've been a number of papers on "variational lambda-calculi"[1] which essentially serve to embed flag-based preprocessor conditionals directly into the language itself. 
One major benefit of this approach is that the compiler can then typecheck *all* variations of the code, rather than only checking whichever particular variation we happen to be compiling at the time. This is extremely useful for avoiding bitrot in the preprocessor conditionals. ...If we were to try and obviate the dependency on CPP, variational typing seems like a far more solid approach than simply reinventing the preprocessing wheel yet again. (The downside, of course, is making the Haskell spec significantly more complex.) [1] e.g., -- Live well, ~wren From allbery.b at gmail.com Fri May 8 23:56:41 2015 From: allbery.b at gmail.com (Brandon Allbery) Date: Fri, 8 May 2015 19:56:41 -0400 Subject: [Haskell-cafe] RFC: "Native -XCPP" Proposal In-Reply-To: <20150508234028.C7ADC276CE1@mail.avvanta.com> References: <87zj5i55gs.fsf@gmail.com> <20150508234028.C7ADC276CE1@mail.avvanta.com> Message-ID: On Fri, May 8, 2015 at 7:40 PM, Donn Cave wrote: > But fatal if compilation is conditional on something that affects the > ability to type check, am I right? Such as different compilers or > versions of same compiler. > Not per the abstract (paper itself seems to be paywalled). They had an earlier work with that issue, the linked one is about how to be robust in the face of such conditionals. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From allbery.b at gmail.com Sat May 9 14:05:09 2015 From: allbery.b at gmail.com (Brandon Allbery) Date: Sat, 9 May 2015 10:05:09 -0400 Subject: [Haskell-cafe] RFC: "Native -XCPP" Proposal In-Reply-To: References: <87zj5i55gs.fsf@gmail.com> <20150508234028.C7ADC276CE1@mail.avvanta.com> Message-ID: On Fri, May 8, 2015 at 7:56 PM, Brandon Allbery wrote: > On Fri, May 8, 2015 at 7:40 PM, Donn Cave wrote: > >> But fatal if compilation is conditional on something that affects the >> ability to type check, am I right? Such as different compilers or >> versions of same compiler. >> > > Not per the abstract (paper itself seems to be paywalled). They had an > earlier work with that issue, the linked one is about how to be robust in > the face of such conditionals. > There's also the question about handling changes in syntax, e.g. LambdaCase throws parse errors in older compilers. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.trstenjak at gmail.com Mon May 11 08:31:03 2015 From: daniel.trstenjak at gmail.com (Daniel Trstenjak) Date: Mon, 11 May 2015 10:31:03 +0200 Subject: [Haskell-cafe] RFC: "Native -XCPP" Proposal In-Reply-To: References: <87zj5i55gs.fsf@gmail.com> Message-ID: <20150511083103.GA4434@machine> Hi Wren, > Incidentally, if we really want to pursue the "get rid of CPP by > building it into the GHC distro"... > > In recent years there've been a number of papers on "variational > lambda-calculi"[1] which essentially serve to embed flag-based > preprocessor conditionals directly into the language itself. One major > benefit of this approach is that the compiler can then typecheck *all* > variations of the code, rather than only checking whichever particular > variation we happen to be compiling at the time. 
This is extremely > useful for avoiding bitrot in the preprocessor conditionals. > > ...If we were to try and obviate the dependency on CPP, variational > typing seems like a far more solid approach than simply reinventing > the preprocessing wheel yet again. (The downside, of course, is making > the Haskell spec significantly more complex.) I think even more beneficial than type checking all cases is the easier support for any Haskell tooling operating with the Haskell source if all cases are part of the AST. Greetings, Daniel From marlowsd at gmail.com Mon May 11 19:15:59 2015 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 11 May 2015 20:15:59 +0100 Subject: mapM /= traverse? Message-ID: <5550FFEF.1000806@gmail.com> I was hoping that in GHC 7.10 we would make mapM = traverse for lists, but it appears this isn't the case: the Traversable instance for lists overrides mapM to be the manually-defined version in terms of foldr. Why is this? Fusion? Unfortunately since I want mapM = traverse (for Haxl) I'll need to continue to redefine it in our custom Prelude. Cheers, Simon From dan.doel at gmail.com Mon May 11 21:41:01 2015 From: dan.doel at gmail.com (Dan Doel) Date: Mon, 11 May 2015 17:41:01 -0400 Subject: mapM /= traverse? In-Reply-To: <5550FFEF.1000806@gmail.com> References: <5550FFEF.1000806@gmail.com> Message-ID: The reason I know of why mapM wasn't just made to be an alias for traverse (assuming that's what you mean) was that it was thought that particular definitions of mapM could be more efficient than traverse. For instance: mapM :: Monad m => (a -> m b) -> [a] -> m [b] mapM f = go [] where go ys [] = return (reverse ys) go ys (x:xs) = f x >>= \y -> go (y:ys) xs This doesn't use stack for m = IO, for instance. However, it has since been pointed out (to me and Ed, at least), that this matters much less now. Stack overflows are now off by default, and if you measure the overall time and memory usage, traverse compares favorably to this custom mapM. 
So, as long as stack isn't an artificially scarce resource, there's no reason to keep them distinct. We didn't know this until after 7.10, though. If you're just asking why the definition of 'mapM' for lists isn't 'traverse' with a more specific type, I don't know the answer to that. -- Dan On Mon, May 11, 2015 at 3:15 PM, Simon Marlow wrote: > I was hoping that in GHC 7.10 we would make mapM = traverse for lists, but > it appears this isn't the case: the Traversable instance for lists > overrides mapM to be the manually-defined version in terms of foldr. > > Why is this? Fusion? > > Unfortunately since I want mapM = traverse (for Haxl) I'll need to > continue to redefine it in our custom Prelude. > > Cheers, > Simon > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ky3 at atamo.com Tue May 12 03:33:15 2015 From: ky3 at atamo.com (Kim-Ee Yeoh) Date: Tue, 12 May 2015 10:33:15 +0700 Subject: mapM /= traverse? In-Reply-To: <5550FFEF.1000806@gmail.com> References: <5550FFEF.1000806@gmail.com> Message-ID: On Tue, May 12, 2015 at 2:15 AM, Simon Marlow wrote: > Unfortunately since I want mapM = traverse (for Haxl) I'll need to > continue to redefine it in our custom Prelude. Apologies if I'm missing context, but what about replacing mapM with traverse in the source code? What problems do the additional polymorphism create? -- Kim-Ee -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From david.feuer at gmail.com Tue May 12 05:05:14 2015 From: david.feuer at gmail.com (David Feuer) Date: Tue, 12 May 2015 01:05:14 -0400 Subject: Proposal: Generalize forever to Applicative Message-ID: This looks like a no-brainer to me: forever :: Applicative f => f a -> f b forever a = let x = a *> x in x -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekmett at gmail.com Tue May 12 07:28:20 2015 From: ekmett at gmail.com (Edward Kmett) Date: Tue, 12 May 2015 03:28:20 -0400 Subject: Proposal: Generalize forever to Applicative In-Reply-To: References: Message-ID: +1. This is definitely on the list of things we should generalize. -Edward On Tue, May 12, 2015 at 1:05 AM, David Feuer wrote: > This looks like a no-brainer to me: > > forever :: Applicative f => f a -> f b > forever a = let x = a *> x in x > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at snoyman.com Tue May 12 07:44:19 2015 From: michael at snoyman.com (Michael Snoyman) Date: Tue, 12 May 2015 07:44:19 +0000 Subject: Proposal: Generalize forever to Applicative In-Reply-To: References: Message-ID: +1 On Tue, May 12, 2015 at 8:05 AM David Feuer wrote: > This looks like a no-brainer to me: > > forever :: Applicative f => f a -> f b > forever a = let x = a *> x in x > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Tue May 12 07:58:52 2015 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 12 May 2015 08:58:52 +0100 Subject: mapM /= traverse? 
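As a quick sanity check of the proposed definition, here is a self-contained sketch (renamed forever' so it does not clash with the existing Control.Monad.forever). For applicatives that can short-circuit, such as Maybe and Either, the recursive knot unwinds immediately:

```haskell
-- The proposed Applicative generalization, under a primed name to
-- avoid clashing with Control.Monad.forever.
forever' :: Applicative f => f a -> f b
forever' a = let x = a *> x in x

main :: IO ()
main = do
  -- Short-circuiting applicatives terminate as soon as the action fails:
  print (forever' Nothing :: Maybe Int)               -- Nothing
  print (forever' (Left "eof") :: Either String Int)  -- Left "eof"
  -- With an action that always succeeds (e.g. forever' getLine in IO,
  -- or forever' (Just ())), evaluation loops forever, as intended.
```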
In-Reply-To: References: <5550FFEF.1000806@gmail.com> Message-ID: <5551B2BC.9050806@gmail.com> On 11/05/2015 22:41, Dan Doel wrote: > The reason I know of why mapM wasn't just made to be an alias for > traverse (assuming that's what you mean) was that it was thought that > particular definitions of mapM could be more efficient than traverse. > For instance: > > mapM :: Monad m => (a -> m b) -> [a] -> m [b] > mapM f = go [] > where > go ys [] = return (reverse ys) > go ys (x:xs) = f x >>= \y -> go (y:ys) xs > > This doesn't use stack for m = IO, for instance. > > However, it has since been pointed out (to me and Ed, at least), that > this matters much less now. Stack overflows are now off by default, and > if you measure the overall time and memory usage, traverse compares > favorably to this custom mapM. So, as long as stack isn't an > artificially scarce resource, there's no reason to keep them distinct. > We didn't know this until after 7.10, though. > > If you're just asking why the definition of 'mapM' for lists isn't > 'traverse' with a more specific type, I don't know the answer to that. Yes, I'm not really concerned that mapM is a method of Traversable rather than just being an alias for traverse, but I'm wondering why we define it in the list instance rather than using the default. Cheers, Simon > -- Dan > > > On Mon, May 11, 2015 at 3:15 PM, Simon Marlow > wrote: > > I was hoping that in GHC 7.10 we would make mapM = traverse for > lists, but it appears this isn't the case: the Traversable instance > for lists overrides mapM to be the manually-defined version in terms > of foldr. > > Why is this? Fusion? > > Unfortunately since I want mapM = traverse (for Haxl) I'll need to > continue to redefine it in our custom Prelude. 
> > Cheers, > Simon > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > From marlowsd at gmail.com Tue May 12 08:04:19 2015 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 12 May 2015 09:04:19 +0100 Subject: mapM /= traverse? In-Reply-To: References: <5550FFEF.1000806@gmail.com> Message-ID: <5551B403.2090509@gmail.com> On 12/05/2015 04:33, Kim-Ee Yeoh wrote: > > On Tue, May 12, 2015 at 2:15 AM, Simon Marlow > wrote: > > Unfortunately since I want mapM = traverse (for Haxl) I'll need to > continue to redefine it in our custom Prelude. > > > Apologies if I'm missing context, but what about replacing mapM with > traverse in the source code? > > What problems do the additional polymorphism create? We could do that, but this is a DSL we provide to users who are in most cases not native Haskell programmers and the idea is to keep things as simple as possible. So we wanted to standardise on either mapM or traverse, and since mapM is more familiar (and appears in books etc.) we went with mapM. I'd also been assuming that the issue would go away in 7.10 because mapM would be equivalent to traverse. Cheers, Simon From byorgey at gmail.com Tue May 12 13:54:25 2015 From: byorgey at gmail.com (Brent Yorgey) Date: Tue, 12 May 2015 13:54:25 +0000 Subject: Proposal: Generalize forever to Applicative In-Reply-To: References: Message-ID: +1. 
On Tue, May 12, 2015 at 3:44 AM Michael Snoyman wrote: > +1 > > > On Tue, May 12, 2015 at 8:05 AM David Feuer wrote: > >> This looks like a no-brainer to me: >> >> forever :: Applicative f => f a -> f b >> forever a = let x = a *> x in x >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alois.cochard at gmail.com Tue May 12 14:06:26 2015 From: alois.cochard at gmail.com (Alois Cochard) Date: Tue, 12 May 2015 16:06:26 +0200 Subject: Proposal: Generalize forever to Applicative In-Reply-To: References: Message-ID: +1 On 12 May 2015 at 07:05, David Feuer wrote: > This looks like a no-brainer to me: > > forever :: Applicative f => f a -> f b > forever a = let x = a *> x in x > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > -- *?\ois* http://twitter.com/aloiscochard http://github.com/aloiscochard -------------- next part -------------- An HTML attachment was scrubbed... URL: From dan.doel at gmail.com Tue May 12 14:18:23 2015 From: dan.doel at gmail.com (Dan Doel) Date: Tue, 12 May 2015 10:18:23 -0400 Subject: mapM /= traverse? In-Reply-To: <5551B2BC.9050806@gmail.com> References: <5550FFEF.1000806@gmail.com> <5551B2BC.9050806@gmail.com> Message-ID: Okay. I talked with some folks, and I now understand why this matters for you. I can't think of a fusion reason for the custom definition. traverse is a foldr same as the particular mapM. I think it's just an oversight. Since the type doesn't even change, we should be able to fix this in 7.10.2, no? 
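[Dan's point that traverse for lists "is a foldr same as the particular mapM" can be checked directly. The following is an illustrative sketch, not the exact base source: traverseList is the foldr-shaped traverse, mapMAcc is the accumulate-and-reverse mapM quoted in this thread, and both are shown producing the same result.]

```haskell
import Data.Functor.Identity (Identity (..))

-- traverse for lists in its foldr shape (illustrative, not base's source)
traverseList :: Applicative f => (a -> f b) -> [a] -> f [b]
traverseList f = foldr (\x ys -> (:) <$> f x <*> ys) (pure [])

-- the accumulate-and-reverse mapM quoted in this thread
mapMAcc :: Monad m => (a -> m b) -> [a] -> m [b]
mapMAcc f = go []
  where
    go ys []     = return (reverse ys)
    go ys (x:xs) = f x >>= \y -> go (y:ys) xs

main :: IO ()
main = do
  viaMapM <- mapMAcc (\x -> return (x * 2)) [1 .. 5 :: Int]
  let viaTraverse = runIdentity (traverseList (Identity . (* 2)) [1 .. 5])
  print (viaMapM == viaTraverse)  -- same results, different evaluation shapes
```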
On Tue, May 12, 2015 at 3:58 AM, Simon Marlow wrote: > On 11/05/2015 22:41, Dan Doel wrote: > >> The reason I know of why mapM wasn't just made to be an alias for >> traverse (assuming that's what you mean) was that it was thought that >> particular definitions of mapM could be more efficient than traverse. >> For instance: >> >> mapM :: Monad m => (a -> m b) -> [a] -> m [b] >> mapM f = go [] >> where >> go ys [] = return (reverse ys) >> go ys (x:xs) = f x >>= \y -> go (y:ys) xs >> >> This doesn't use stack for m = IO, for instance. >> >> However, it has since been pointed out (to me and Ed, at least), that >> this matters much less now. Stack overflows are now off by default, and >> if you measure the overall time and memory usage, traverse compares >> favorably to this custom mapM. So, as long as stack isn't an >> artificially scarce resource, there's no reason to keep them distinct. >> We didn't know this until after 7.10, though. >> >> If you're just asking why the definition of 'mapM' for lists isn't >> 'traverse' with a more specific type, I don't know the answer to that. >> > > Yes, I'm not really concerned that mapM is a method of Traversable rather > than just being an alias for traverse, but I'm wondering why we define it > in the list instance rather than using the default. > > Cheers, > Simon > > > -- Dan >> >> >> On Mon, May 11, 2015 at 3:15 PM, Simon Marlow > > wrote: >> >> I was hoping that in GHC 7.10 we would make mapM = traverse for >> lists, but it appears this isn't the case: the Traversable instance >> for lists overrides mapM to be the manually-defined version in terms >> of foldr. >> >> Why is this? Fusion? >> >> Unfortunately since I want mapM = traverse (for Haxl) I'll need to >> continue to redefine it in our custom Prelude. 
>> >> Cheers, >> Simon >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekmett at gmail.com Tue May 12 14:26:42 2015 From: ekmett at gmail.com (Edward Kmett) Date: Tue, 12 May 2015 10:26:42 -0400 Subject: mapM /= traverse? In-Reply-To: <5551B2BC.9050806@gmail.com> References: <5550FFEF.1000806@gmail.com> <5551B2BC.9050806@gmail.com> Message-ID: On Tue, May 12, 2015 at 3:58 AM, Simon Marlow wrote: > > Yes, I'm not really concerned that mapM is a method of Traversable rather > than just being an alias for traverse, but I'm wondering why we define it > in the list instance rather than using the default. > We were pretty paranoid about introducing space or time regressions and didn't have a proof that we wouldn't introduce them by changing something there, so we left it alone. -Edward -------------- next part -------------- An HTML attachment was scrubbed... 
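[The custom-Prelude redefinition Simon mentions comes down to shadowing the list mapM with traverse under its traditional name. A minimal self-contained sketch, not Haxl's actual Prelude, assuming base >= 4.8 so Traversable and Applicative are in scope from the Prelude:]

```haskell
import Prelude hiding (mapM)

-- Shadow Prelude's mapM with traverse at the familiar name,
-- the way a custom Prelude module would re-export it.
mapM :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
mapM = traverse

main :: IO ()
main = do
  ys <- mapM (\x -> return (x + 1)) [1, 2, 3 :: Int]
  print ys  -- [2,3,4]
```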
URL: From johnw at newartisans.com Tue May 12 15:13:41 2015 From: johnw at newartisans.com (John Wiegley) Date: Tue, 12 May 2015 10:13:41 -0500 Subject: Proposal: Generalize forever to Applicative In-Reply-To: (David Feuer's message of "Tue, 12 May 2015 01:05:14 -0400") References: Message-ID: >>>>> David Feuer writes: > forever :: Applicative f => f a -> f b > forever a = let x = a *> x in x +1 John From dluposchainsky at googlemail.com Tue May 12 15:16:04 2015 From: dluposchainsky at googlemail.com (David Luposchainsky) Date: Tue, 12 May 2015 17:16:04 +0200 Subject: Proposal: Generalize forever to Applicative In-Reply-To: References: Message-ID: <55521934.3010703@gmail.com> >>>>>> David Feuer writes: > forever :: Applicative f => f a -> f b > forever a = let x = a *> x in x +1 David From marlowsd at gmail.com Tue May 12 15:58:57 2015 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 12 May 2015 16:58:57 +0100 Subject: mapM /= traverse? In-Reply-To: References: <5550FFEF.1000806@gmail.com> <5551B2BC.9050806@gmail.com> Message-ID: <55522341.1000000@gmail.com> On 12/05/2015 15:26, Edward Kmett wrote: > On Tue, May 12, 2015 at 3:58 AM, Simon Marlow > wrote: > > > Yes, I'm not really concerned that mapM is a method of Traversable > rather than just being an alias for traverse, but I'm wondering why > we define it in the list instance rather than using the default. > > > We were pretty paranoid about introducing space or time regressions and > didn't have a proof that we wouldn't introduce them by changing > something there, so we left it alone. Ok good, so it looks like the answer is "we could change it, but benchmark first". I can do that. Thanks! Cheers, Simon From ekmett at gmail.com Tue May 12 16:10:31 2015 From: ekmett at gmail.com (Edward Kmett) Date: Tue, 12 May 2015 12:10:31 -0400 Subject: mapM /= traverse? 
In-Reply-To: <55522341.1000000@gmail.com> References: <5550FFEF.1000806@gmail.com> <5551B2BC.9050806@gmail.com> <55522341.1000000@gmail.com> Message-ID: We managed to show that the old counter-example of a mapM that works where traverse that would blow the stack isn't an issue since https://ghc.haskell.org/trac/ghc/ticket/8189 was resolved. Consequently, we're looking at removing mapM entirely from the class and just making it a top level definition. To do that we'd need to deprecate redefinition of the method in instances for a version or two. This would need a new form of deprecation, where you deprecate redefinition but not use of a member of a class. (I think Herbert filed an issue to create it, but I can't find it off hand.) Once we can make that transition, then the constraints on mapM would relax to the same as those for traverse. That would fix both the constraints and the implementation going forward for everything, but we should probably handle this particular case first or you won't see any benefit for a couple of years. -Edward On Tue, May 12, 2015 at 11:58 AM, Simon Marlow wrote: > On 12/05/2015 15:26, Edward Kmett wrote: > >> On Tue, May 12, 2015 at 3:58 AM, Simon Marlow > > wrote: >> >> >> Yes, I'm not really concerned that mapM is a method of Traversable >> rather than just being an alias for traverse, but I'm wondering why >> we define it in the list instance rather than using the default. >> >> >> We were pretty paranoid about introducing space or time regressions and >> didn't have a proof that we wouldn't introduce them by changing >> something there, so we left it alone. >> > > Ok good, so it looks like the answer is "we could change it, but benchmark > first". I can do that. Thanks! > > Cheers, > Simon > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From danburton.email at gmail.com Tue May 12 18:07:06 2015 From: danburton.email at gmail.com (Dan Burton) Date: Tue, 12 May 2015 11:07:06 -0700 Subject: Proposal: Generalize forever to Applicative In-Reply-To: References: Message-ID: +1 On Monday, May 11, 2015, David Feuer wrote: > This looks like a no-brainer to me: > > forever :: Applicative f => f a -> f b > forever a = let x = a *> x in x > -- -- Dan Burton -------------- next part -------------- An HTML attachment was scrubbed... URL: From lemming at henning-thielemann.de Tue May 12 19:05:32 2015 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Tue, 12 May 2015 21:05:32 +0200 (CEST) Subject: Proposal: Make Semigroup as a superclass of Monoid In-Reply-To: References: <1427631633145-5767835.post@n5.nabble.com> Message-ID: On Tue, 5 May 2015, Greg Weber wrote: > The issue you have not raised that I am more concerned with is that NonEmpty only works for lists. > Michael and I figured out how to extend the concept to any (Mono)Foldable structure and also to be able to > demand lengths of > 1. It's possible with the structure provided by "non-empty". It is also possible to construct types for lists of fixed length or lists with an arbitrary set of allowed lengths. From hvriedel at gmail.com Tue May 12 19:34:34 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Tue, 12 May 2015 21:34:34 +0200 Subject: mapM /= traverse? In-Reply-To: (Edward Kmett's message of "Tue, 12 May 2015 12:10:31 -0400") References: <5550FFEF.1000806@gmail.com> <5551B2BC.9050806@gmail.com> <55522341.1000000@gmail.com> Message-ID: <874mnhoaid.fsf@gmail.com> On 2015-05-12 at 18:10:31 +0200, Edward Kmett wrote: [...] > Consequently, we're looking at removing mapM entirely from the class and > just making it a top level definition. > > To do that we'd need to deprecate redefinition of the method in instances > for a version or two. 
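[The Applicative forever being +1'd above can even be observed terminating in a short-circuiting functor, which makes a handy sanity check: Maybe's (*>) returns Nothing without forcing its second argument, so the tied knot collapses immediately. A sketch, with the proposed definition copied under a local name:]

```haskell
import Control.Applicative ((*>))  -- already in the Prelude on base >= 4.8

-- the proposed generalization, reproduced locally for self-containment
forever' :: Applicative f => f a -> f b
forever' a = let x = a *> x in x

main :: IO ()
main = do
  -- Nothing *> _ is Nothing without inspecting the second argument,
  -- so this returns at once rather than looping:
  print (forever' Nothing :: Maybe ())
  -- forever' (Just 1) or forever' (putStrLn "hi") would loop for ever,
  -- which is the intended behaviour.
```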
This would need a new form of deprecation, where you > deprecate redefinition but not use of a member of a class. (I think Herbert > filed an issue to create it, but I can't find it off hand.) https://ghc.haskell.org/trac/ghc/ticket/10071 [...] Cheers, hvr From ekmett at gmail.com Tue May 12 22:33:43 2015 From: ekmett at gmail.com (Edward Kmett) Date: Tue, 12 May 2015 18:33:43 -0400 Subject: Proposal: Make Semigroup as a superclass of Monoid In-Reply-To: References: <1427631633145-5767835.post@n5.nabble.com> Message-ID: For folding containers of size >=1 with a semigroup I usually turn to classes from the semigroupoids package: http://hackage.haskell.org/package/semigroupoids-4.3/docs/Data-Semigroup-Foldable.html http://hackage.haskell.org/package/semigroupoids-4.3/docs/Data-Semigroup-Traversable.html They are placed there, rather than in something like semigroups, because the 'return-less' variants of Monad and Applicative they need to provide most operations are placed in that package. -Edward On Tue, May 12, 2015 at 3:05 PM, Henning Thielemann < lemming at henning-thielemann.de> wrote: > > On Tue, 5 May 2015, Greg Weber wrote: > > The issue you have not raised that I am more concerned with is that >> NonEmpty only works for lists. >> Michael and I figured out how to extend the concept to any (Mono)Foldable >> structure and also to be able to >> demand lengths of > 1. >> > > It's possible with the structure provided by "non-empty". It is also > possible to construct types for lists of fixed length or lists with an > arbitrary set of allowed lengths. > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... 
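[The semigroup-only folds Edward points to have a small concrete analogue: folding a size >= 1 container needs no unit element, so it works for semigroups like Max that have no sensible mempty. A sketch using Data.List.NonEmpty and Data.Semigroup, which at the time of this thread lived in the semigroups package and later moved into base:]

```haskell
import Data.List.NonEmpty (NonEmpty (..))
import Data.Semigroup (Max (..), sconcat)

-- sconcat :: Semigroup a => NonEmpty a -> a requires no mempty,
-- unlike mconcat, so a unit-less semigroup such as Max is fine.
main :: IO ()
main = do
  let xs = Max 3 :| [Max 1, Max 5] :: NonEmpty (Max Int)
  print (getMax (sconcat xs))  -- 5
```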
URL: From ganesh at earth.li Wed May 13 06:11:34 2015 From: ganesh at earth.li (Ganesh Sittampalam) Date: Wed, 13 May 2015 07:11:34 +0100 Subject: Proposal: liftData for Template Haskell In-Reply-To: <1430796774-sup-3968@sabre> References: <1429269330-sup-7487@sabre> <1430796774-sup-3968@sabre> Message-ID: <5552EB16.5010606@earth.li> -1: I think that kind of instance (Foo a => Bar a) is generally quite problematic so there should be a pretty strong case to support it. There can only be one of them - if some other class shows up that can also provide an equally good implementation of 'lift', there's a conflict. Also won't people get misleading error messages, implying they should implement Data when Lift would do? "instance Lift X where lift = liftData" doesn't seem too onerous to write by hand to me, though I guess it may be hard to discover that's an option. On 05/05/2015 04:36, Edward Z. Yang wrote: > Hello all, > > It looks like people are opposed to doing with the lift type-class. > So here is a counterproposal: mark the Lift type class as overlappable, > and define an instance: > > instance Data a => Lift a where > ... > > This is fairly desirable, since GHC will sometimes generate a call to > 'lift', in which case liftData can't be manually filled in. People > can still define efficient versions of lift. > > Edward > > Excerpts from Edward Z. Yang's message of 2015-04-17 04:21:16 -0700: >> I propose adding the following function to Language.Haskell.TH: >> >> -- | 'liftData' is a variant of 'lift' in the 'Lift' type class which >> -- works for any type with a 'Data' instance. 
>> liftData :: Data a => a -> Q Exp >> liftData = dataToExpQ (const Nothing) >> >> I don't really know which submodule this should come from; >> since it uses 'dataToExpQ', you might put it in Language.Haskell.TH.Quote >> but arguably 'dataToExpQ' doesn't belong in this module either, >> and it only lives there because it is a useful function for defining >> quasiquoters and it was described in the quasiquoting paper. >> >> I might propose getting rid of the 'Lift' class entirely, but you >> might prefer that class since it doesn't go through SYB (and have >> the attendant slowdown). >> >> This mode of use of 'dataToExpQ' deserves more attention. >> >> Discussion period: 1 month >> >> Cheers, >> Edward > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > From gershomb at gmail.com Wed May 13 06:28:53 2015 From: gershomb at gmail.com (Gershom B) Date: Wed, 13 May 2015 02:28:53 -0400 Subject: Proposal: liftData for Template Haskell In-Reply-To: <5552EB16.5010606@earth.li> References: <1429269330-sup-7487@sabre> <1430796774-sup-3968@sabre> <5552EB16.5010606@earth.li> Message-ID: On May 13, 2015 at 2:15:26 AM, Ganesh Sittampalam (ganesh at earth.li) wrote: > -1: I think that kind of instance (Foo a => Bar a) is generally quite > problematic so there should be a pretty strong case to support it. > > There can only be one of them - if some other class shows up that can > also provide an equally good implementation of 'lift', there's a conflict. > > Also won't people get misleading error messages, implying they should > implement Data when Lift would do? > > "instance Lift X where lift = liftData" doesn't seem too onerous to > write by hand to me, though I guess it may be hard to discover that's an > option. Isn't this a case where -XDefaultSignatures and -XDeriveAnyClass can make things at least a bit nicer?
Of course we still need to document well how to use them, but such instrumentation of the Lift class would at least make it clear that it is “intended” to be used in such a fashion :-) -g > On 05/05/2015 04:36, Edward Z. Yang wrote: > > Hello all, > > > > It looks like people are opposed to doing with the lift type-class. > > So here is a counterproposal: mark the Lift type class as overlappable, > > and define an instance: > > > > instance Data a => Lift a where > > ... > > > > This is fairly desirable, since GHC will sometimes generate a call to > > 'lift', in which case liftData can't be manually filled in. People > > can still define efficient versions of lift. > > > > Edward > > > > Excerpts from Edward Z. Yang's message of 2015-04-17 04:21:16 -0700: > >> I propose adding the following function to Language.Haskell.TH: > >> > >> -- | 'liftData' is a variant of 'lift' in the 'Lift' type class which > >> -- works for any type with a 'Data' instance.
> >> > >> Discussion period: 1 month > >> > >> Cheers, > >> Edward > > _______________________________________________ > > Libraries mailing list > > Libraries at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > From ekmett at gmail.com Wed May 13 09:40:36 2015 From: ekmett at gmail.com (Edward Kmett) Date: Wed, 13 May 2015 05:40:36 -0400 Subject: Proposal: liftData for Template Haskell In-Reply-To: References: <1429269330-sup-7487@sabre> <1430796774-sup-3968@sabre> <5552EB16.5010606@earth.li> Message-ID: I'm very much +1 to the addition of liftData, but pretty strongly -1 on the notion of using an overlappable instance for Data a => Lift a. I really don't like encouraging that practice, orphans then start changing behavior in ways that make me deeply uncomfortable. On the other hand, placing a default signature: class Lift a where lift :: a -> Q Exp default lift :: Data a => a -> Q Exp lift = liftData would be something I'd be 100% behind, in addition to adding the explicit liftData export. It'd rather sharply reduce the pain of defining Lift instances and I was actually surprised it wasn't there. -Edward On Wed, May 13, 2015 at 2:28 AM, Gershom B wrote: > On May 13, 2015 at 2:15:26 AM, Ganesh Sittampalam (ganesh at earth.li) wrote: > > -1: I think that kind of instance (Foo a => Bar a) is generally quite > > problematic so there should be a pretty strong case to support it. > > > > There can only be one of them - if some other class shows up that can > > also provide an equally good implementation of 'lift', there's a > conflict. > > > > Also won't people get misleading error messages, implying they should > > implement Data when Lift would do? 
> > > > "instance Lift X where lift = liftData" doesn't seem too onerous to > > write by hand to me, though I guess it may be hard to discover that's an > > option. > > Isn?t this a case where -XDefaultSignatures and -XDeriveAnyClass can make > things at least a bit nicer? Of course we still need to document well how > to use them, but such instrumentation of the Lift class would at least make > it clear that it is ?intended? to be used in such a fashion :-) > > -g > > > On 05/05/2015 04:36, Edward Z. Yang wrote: > > > Hello all, > > > > > > It looks like people are opposed to doing with the lift type-class. > > > So here is a counterproposal: mark the Lift type class as overlappable, > > > and define an instance: > > > > > > instance Data a => Lift a where > > > ... > > > > > > This is fairly desirable, since GHC will sometimes generate a call to > > > 'lift', in which case liftData can't be manually filled in. People > > > can still define efficient versions of lift. > > > > > > Edward > > > > > > Excerpts from Edward Z. Yang's message of 2015-04-17 04:21:16 -0700: > > >> I propose adding the following function to Language.Haskell.TH: > > >> > > >> -- | 'liftData' is a variant of 'lift' in the 'Lift' type class which > > >> -- works for any type with a 'Data' instance. > > >> liftData :: Data a => a -> Q Exp > > >> liftData = dataToExpQ (const Nothing) > > >> > > >> I don't really know which submodule this should come from; > > >> since it uses 'dataToExpQ', you might put it in > Language.Haskell.TH.Quote > > >> but arguably 'dataToExpQ' doesn't belong in this module either, > > >> and it only lives there because it is a useful function for defining > > >> quasiquoters and it was described in the quasiquoting paper. > > >> > > >> I might propose getting rid of the 'Lift' class entirely, but you > > >> might prefer that class since it doesn't go through SYB (and have > > >> the attendant slowdown). 
> > >> > > >> This mode of use of 'dataToExpQ' deserves more attention. > > >> > > >> Discussion period: 1 month > > >> > > >> Cheers, > > >> Edward > > > _______________________________________________ > > > Libraries mailing list > > > Libraries at haskell.org > > > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > > > > > > _______________________________________________ > > Libraries mailing list > > Libraries at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From merijn at inconsistent.nl Wed May 13 10:13:33 2015 From: merijn at inconsistent.nl (Merijn Verstraaten) Date: Wed, 13 May 2015 12:13:33 +0200 Subject: Proposal: liftData for Template Haskell In-Reply-To: References: <1429269330-sup-7487@sabre> <1430796774-sup-3968@sabre> <5552EB16.5010606@earth.li> Message-ID: Is there any reason we can't have GHC derive Lift instance automatically? I know there's already a TH library for this, but I guess I don't see why GHC can't derive them for us. Additionally, the lack of lift instances is really a pain for a lot of compile time evaluation tricks. Cheers, Merijn > On 13 May 2015, at 11:40, Edward Kmett wrote: > > I'm very much +1 to the addition of liftData, but pretty strongly -1 on the notion of using an overlappable instance for Data a => Lift a. I really don't like encouraging that practice, orphans then start changing behavior in ways that make me deeply uncomfortable. > > On the other hand, placing a default signature: > > class Lift a where > lift :: a -> Q Exp > default lift :: Data a => a -> Q Exp > lift = liftData > > would be something I'd be 100% behind, in addition to adding the explicit liftData export. 
It'd rather sharply reduce the pain of defining Lift instances and I was actually surprised it wasn't there. > > -Edward > > On Wed, May 13, 2015 at 2:28 AM, Gershom B wrote: > On May 13, 2015 at 2:15:26 AM, Ganesh Sittampalam (ganesh at earth.li) wrote: > > -1: I think that kind of instance (Foo a => Bar a) is generally quite > > problematic so there should be a pretty strong case to support it. > > > > There can only be one of them - if some other class shows up that can > > also provide an equally good implementation of 'lift', there's a conflict. > > > > Also won't people get misleading error messages, implying they should > > implement Data when Lift would do? > > > > "instance Lift X where lift = liftData" doesn't seem too onerous to > > write by hand to me, though I guess it may be hard to discover that's an > > option. > > Isn't this a case where -XDefaultSignatures and -XDeriveAnyClass can make things at least a bit nicer? Of course we still need to document well how to use them, but such instrumentation of the Lift class would at least make it clear that it is “intended” to be used in such a fashion :-) > > -g > > > On 05/05/2015 04:36, Edward Z. Yang wrote: > > > Hello all, > > > > > > It looks like people are opposed to doing with the lift type-class. > > > So here is a counterproposal: mark the Lift type class as overlappable, > > > and define an instance: > > > > > > instance Data a => Lift a where > > > ... > > > > > > This is fairly desirable, since GHC will sometimes generate a call to > > > 'lift', in which case liftData can't be manually filled in. People > > > can still define efficient versions of lift. > > > > > > Edward > > > > > > Excerpts from Edward Z. Yang's message of 2015-04-17 04:21:16 -0700: > > >> I propose adding the following function to Language.Haskell.TH: > > >> > > >> -- | 'liftData' is a variant of 'lift' in the 'Lift' type class which > > >> -- works for any type with a 'Data' instance.
> > >> liftData :: Data a => a -> Q Exp > > >> liftData = dataToExpQ (const Nothing) > > >> > > >> I don't really know which submodule this should come from; > > >> since it uses 'dataToExpQ', you might put it in Language.Haskell.TH.Quote > > >> but arguably 'dataToExpQ' doesn't belong in this module either, > > >> and it only lives there because it is a useful function for defining > > >> quasiquoters and it was described in the quasiquoting paper. > > >> > > >> I might propose getting rid of the 'Lift' class entirely, but you > > >> might prefer that class since it doesn't go through SYB (and have > > >> the attendant slowdown). > > >> > > >> This mode of use of 'dataToExpQ' deserves more attention. > > >> > > >> Discussion period: 1 month > > >> > > >> Cheers, > > >> Edward > > > _______________________________________________ > > > Libraries mailing list > > > Libraries at haskell.org > > > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > > > > > > _______________________________________________ > > Libraries mailing list > > Libraries at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From lemming at henning-thielemann.de Wed May 13 11:50:17 2015 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Wed, 13 May 2015 13:50:17 +0200 (CEST) Subject: maintaining pre-AMP+FTP-Prelude in external package Message-ID: The Prelude of GHC-7.10/base-4.8 introduces several name clashes mostly due to the AMP and FTP: (<*) clashes with Accelerate, (<*>) clashes with NumericPrelude, (and (<>) would clash with HMatrix if added to Prelude), 'join', 'pure', 'traverse', 'fold' clash with custom defined functions. "import Prelude hiding (pure)" is not yet supported by GHC-7.4 (as shipped with Ubuntu 12.04) and in newer GHC versions it generates an annoying warning. The only remaining option is to explicitly import identifiers from Prelude. What about maintaining the pre-AMP+FTP-Prelude in a package on Hackage? Then we could maintain compatibility with a range of GHC versions by disabling import of Prelude and importing preamplified (so to speak) Prelude. The base-compat package seems to support the other way round, that is, providing new 'base' functions to old compilers. From mail at joachim-breitner.de Wed May 13 13:50:45 2015 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 13 May 2015 15:50:45 +0200 Subject: maintaining pre-AMP+FTP-Prelude in external package In-Reply-To: References: Message-ID: <1431525045.2106.5.camel@joachim-breitner.de> Hi, Am Mittwoch, den 13.05.2015, 13:50 +0200 schrieb Henning Thielemann: > What about maintaining the pre-AMP+FTP-Prelude in a package on Hackage? > Then we could maintain compatibility with a range of GHC versions by > disabling import of Prelude and importing preamplified (so to speak) > Prelude. The base-compat package seems to support the other way round, > that is, providing new 'base' functions to old compilers. 
if you make that package reexport all modules from base, you would not even have to change your old code, but only change the dependency from "base" to your "base-preAMPlified". With module-reexports¹ this is even possible by simply listing all modules in the cabal file. It also seems to be similar to the idea of frozen-base packages, as in https://mail.haskell.org/pipermail/haskell-cafe/2015-February/118364.html Greetings, Joachim ¹ see https://ghc.haskell.org/trac/ghc/wiki/ModuleReexports and https://ghc.haskell.org/trac/ghc/ticket/8407 -- Joachim “nomeata” Breitner mail at joachim-breitner.de • http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de • GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From ekmett at gmail.com Wed May 13 14:25:00 2015 From: ekmett at gmail.com (Edward Kmett) Date: Wed, 13 May 2015 10:25:00 -0400 Subject: maintaining pre-AMP+FTP-Prelude in external package In-Reply-To: References: Message-ID: Keep in mind such a package would have to either supply its own definition for Monad, cutting it off from the rest of the world almost entirely and making it near impossible to use with almost any of the existing libraries, or users would have to give up defining any new monads. In the first situation: Its notion of Monad would need to be picked by do sugar. Today, this would mean using RebindableSyntax in all the client code. Even then the semantics are subtly wrong. With RebindableSyntax you get whatever one is in local lexical scope, not your pre-selected (>>=) and return, so you'd still be able to see through the facade pretty easily. So, yes, someone could write and maintain a package that doesn't work with anything else and put that package on hackage for backwards compatibility.
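[Per module, the thin-shim idea reduces to the hiding import Henning describes above. A self-contained sketch: the 'pure' below is a hypothetical domain-specific definition standing in for the clashing names he lists, and (as noted in the thread) the hiding clause itself only works cleanly on compilers whose Prelude actually exports those names.]

```haskell
-- Hide the names AMP/FTP newly pushed into the Prelude so that
-- pre-existing local definitions keep compiling unchanged.
import Prelude hiding (pure, traverse)

-- hypothetical domain-specific 'pure' (is this Double integral?),
-- standing in for the clashes mentioned in the thread
pure :: Double -> Bool
pure x = x == fromInteger (round x)

main :: IO ()
main = print (pure 2.0)  -- True
```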
In fact, we have two packages (haskell98 and haskell2010) that are no longer considered core libraries, which an interested party could step up to turn into something along these lines. Herbert spent some time working on implementing a proper version of such a vision and filed a couple of issues on GHC for features he'd need to make it viable, but it hasn't happened yet and is a rather monolithic task. Also due to changes in Num, such a package isn't _really_ haskell98 or haskell2010 anyways, so to do the pedantic version you'd need to supply your own Num and instances. In the second: Alternately, a thinner shim could be written, which just uses the existing classes with their semantic changes and tries not to export anything different than before. This design winds up stitched out of compromises, but could be written and maintained by an interested party out on hackage with no tooling required. While nothing is stopping someone from going off and pursuing these options, from a POSIWID perspective the net result is introducing fragmentation in the name of avoiding a few names, and even if someone invests all of the effort to make it happen, it seems to me about half of the parties interested in such a design would want the other of the two options laid out here, exacerbating the fragmentation issue -Edward On Wed, May 13, 2015 at 7:50 AM, Henning Thielemann < lemming at henning-thielemann.de> wrote: > > The Prelude of GHC-7.10/base-4.8 introduces several name clashes mostly > due to the AMP and FTP: (<*) clashes with Accelerate, (<*>) clashes with > NumericPrelude, (and (<>) would clash with HMatrix if added to Prelude), > 'join', 'pure', 'traverse', 'fold' clash with custom defined functions. > "import Prelude hiding (pure)" is not yet supported by GHC-7.4 (as shipped > with Ubuntu 12.04) and in newer GHC versions it generates an annoying > warning. The only remaining option is to explicitly import identifiers from > Prelude. 
> > What about maintaining the pre-AMP+FTP-Prelude in a package on Hackage? > Then we could maintain compatibility with a range of GHC versions by > disabling import of Prelude and importing preamplified (so to speak) > Prelude. The base-compat package seems to support the other way round, that > is, providing new 'base' functions to old compilers. > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lemming at henning-thielemann.de Wed May 13 14:38:14 2015 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Wed, 13 May 2015 16:38:14 +0200 (CEST) Subject: maintaining pre-AMP+FTP-Prelude in external package In-Reply-To: References: Message-ID: On Wed, 13 May 2015, Edward Kmett wrote: > Keep in mind such a package would have to either supply its own definition for Monad, cutting it off from the > rest of the world almost entirely and making it near impossible to use with almost any of the existing > libraries, or users would have to give up defining any new monads. I am only concerned with exported identifiers. Monad class would remain a sub-class of Applicative, but Applicative is not exported by PreludePreAMP. From eir at cis.upenn.edu Wed May 13 15:36:15 2015 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Wed, 13 May 2015 11:36:15 -0400 Subject: Proposal: liftData for Template Haskell In-Reply-To: References: <1429269330-sup-7487@sabre> <1430796774-sup-3968@sabre> <5552EB16.5010606@earth.li> Message-ID: <14B003C8-B96D-48B8-85C7-5D0B72982A46@cis.upenn.edu> On May 13, 2015, at 6:13 AM, Merijn Verstraaten wrote: > Is there any reason we can't have GHC derive Lift instance automatically? I know there's already a TH library for this, but I guess I don't see why GHC can't derive them for us. 
Additionally, the lack of lift instances is really a pain for a lot of compile time evaluation tricks. GHC *could* do this automatically, but I don't think it *should*. `Lift` is far enough out of the way that I don't think there should be built-in compiler support for it. Doing it in a library is enough. But, DeriveAnyClass + DefaultSignatures can make it feel like GHC is doing it for you. That is, if we have > class Lift a where > lift :: a -> Q Exp > default lift :: Data a => a -> Q Exp > lift = liftData (as previously proposed) and you have -XDeriveAnyClass turned on, then you can say > data Foo a = K1 Int Bool | K2 | K3 a > deriving Lift and it should Just Work. Richard > > Cheers, > Merijn > >> On 13 May 2015, at 11:40, Edward Kmett wrote: >> >> I'm very much +1 to the addition of liftData, but pretty strongly -1 on the notion of using an overlappable instance for Data a => Lift a. I really don't like encouraging that practice, orphans then start changing behavior in ways that make me deeply uncomfortable. >> >> On the other hand, placing a default signature: >> >> class Lift a where >> lift :: a -> Q Exp >> default lift :: Data a => a -> Q Exp >> lift = liftData >> >> would be something I'd be 100% behind, in addition to adding the explicit liftData export. It'd rather sharply reduce the pain of defining Lift instances and I was actually surprised it wasn't there. >> >> -Edward >> >> On Wed, May 13, 2015 at 2:28 AM, Gershom B wrote: >> On May 13, 2015 at 2:15:26 AM, Ganesh Sittampalam (ganesh at earth.li) wrote: >>> -1: I think that kind of instance (Foo a => Bar a) is generally quite >>> problematic so there should be a pretty strong case to support it. >>> >>> There can only be one of them - if some other class shows up that can >>> also provide an equally good implementation of 'lift', there's a conflict. >>> >>> Also won't people get misleading error messages, implying they should >>> implement Data when Lift would do? 
>>> >>> "instance Lift X where lift = liftData" doesn't seem too onerous to >>> write by hand to me, though I guess it may be hard to discover that's an >>> option. >> >> Isn't this a case where -XDefaultSignatures and -XDeriveAnyClass can make things at least a bit nicer? Of course we still need to document well how to use them, but such instrumentation of the Lift class would at least make it clear that it is “intended” to be used in such a fashion :-) >> >> -g >> >>> On 05/05/2015 04:36, Edward Z. Yang wrote: >>>> Hello all, >>>> >>>> It looks like people are opposed to doing away with the lift type-class. >>>> So here is a counterproposal: mark the Lift type class as overlappable, >>>> and define an instance: >>>> >>>> instance Data a => Lift a where >>>> ... >>>> >>>> This is fairly desirable, since GHC will sometimes generate a call to >>>> 'lift', in which case liftData can't be manually filled in. People >>>> can still define efficient versions of lift. >>>> >>>> Edward >>>> >>>> Excerpts from Edward Z. Yang's message of 2015-04-17 04:21:16 -0700: >>>>> I propose adding the following function to Language.Haskell.TH: >>>>> >>>>> -- | 'liftData' is a variant of 'lift' in the 'Lift' type class which >>>>> -- works for any type with a 'Data' instance. >>>>> liftData :: Data a => a -> Q Exp >>>>> liftData = dataToExpQ (const Nothing) >>>>> >>>>> I don't really know which submodule this should come from; >>>>> since it uses 'dataToExpQ', you might put it in Language.Haskell.TH.Quote >>>>> but arguably 'dataToExpQ' doesn't belong in this module either, >>>>> and it only lives there because it is a useful function for defining >>>>> quasiquoters and it was described in the quasiquoting paper. >>>>> >>>>> I might propose getting rid of the 'Lift' class entirely, but you >>>>> might prefer that class since it doesn't go through SYB (and have >>>>> the attendant slowdown). >>>>> >>>>> This mode of use of 'dataToExpQ' deserves more attention.
>>>>> >>>>> Discussion period: 1 month >>>>> >>>>> Cheers, >>>>> Edward >>>> _______________________________________________ >>>> Libraries mailing list >>>> Libraries at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>>> >>> >>> _______________________________________________ >>> Libraries mailing list >>> Libraries at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>> >> >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries From hvr at gnu.org Wed May 13 22:12:11 2015 From: hvr at gnu.org (Herbert Valerio Riedel) Date: Thu, 14 May 2015 00:12:11 +0200 Subject: maintaining pre-AMP+FTP-Prelude in external package In-Reply-To: (Henning Thielemann's message of "Wed, 13 May 2015 16:38:14 +0200 (CEST)") References: Message-ID: <87egmkcekk.fsf@gnu.org> On 2015-05-13 at 16:38:14 +0200, Henning Thielemann wrote: > On Wed, 13 May 2015, Edward Kmett wrote: > >> Keep in mind such a package would have to either supply its own definition for Monad, cutting it off from the >> rest of the world almost entirely and making it near impossible to use with almost any of the existing >> libraries, or users would have to give up defining any new monads. > > I am only concerned with exported identifiers. Monad class would > remain a sub-class of Applicative, but Applicative is not exported by > PreludePreAMP. But then you won't be able to define Monad instances... 
:-/ See also the discussion in https://ghc.haskell.org/trac/ghc/ticket/9590 From felipe.lessa at gmail.com Thu May 14 02:20:25 2015 From: felipe.lessa at gmail.com (Felipe Lessa) Date: Wed, 13 May 2015 23:20:25 -0300 Subject: maintaining pre-AMP+FTP-Prelude in external package In-Reply-To: <87egmkcekk.fsf@gnu.org> References: <87egmkcekk.fsf@gnu.org> Message-ID: <55540669.5060401@gmail.com> On 13-05-2015 19:12, Herbert Valerio Riedel wrote: > On 2015-05-13 at 16:38:14 +0200, Henning Thielemann wrote: >> I am only concerned with exported identifiers. Monad class would >> remain a sub-class of Applicative, but Applicative is not exported by >> PreludePreAMP. > > But then you won't be able to define Monad instances... :-/ Just import Applicative from its own module, like we did before GHC 7.10. AFAIU, the point of having PreludePreAMP would be facilitating maintenance of code that works both on 7.10 and <= 7.8 without warnings. One would still need to know that for defining a Monad they need Applicative. That was the best practice for a long time, anyway. Cheers, -- Felipe. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From lemming at henning-thielemann.de Thu May 14 05:42:53 2015 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Thu, 14 May 2015 07:42:53 +0200 (CEST) Subject: maintaining pre-AMP+FTP-Prelude in external package In-Reply-To: <87egmkcekk.fsf@gnu.org> References: <87egmkcekk.fsf@gnu.org> Message-ID: On Thu, 14 May 2015, Herbert Valerio Riedel wrote: > On 2015-05-13 at 16:38:14 +0200, Henning Thielemann wrote: >> >> I am only concerned with exported identifiers. Monad class would >> remain a sub-class of Applicative, but Applicative is not exported by >> PreludePreAMP. > > But then you won't be able to define Monad instances... 
:-/ I would import Applicative from Control.Applicative like I did until GHC-7.8. From vogt.adam at gmail.com Thu May 14 22:11:57 2015 From: vogt.adam at gmail.com (adam vogt) Date: Thu, 14 May 2015 18:11:57 -0400 Subject: Proposal: liftData for Template Haskell In-Reply-To: References: <1429269330-sup-7487@sabre> <1430796774-sup-3968@sabre> <5552EB16.5010606@earth.li> Message-ID: On Wed, May 13, 2015 at 6:13 AM, Merijn Verstraaten wrote: > > Is there any reason we can't have GHC derive Lift instance automatically? I know there's already a TH library for this, but I guess I don't see why GHC can't derive them for us. Additionally, the lack of lift instances is really a pain for a lot of compile time evaluation tricks. There are two libraries to get your orphan Lift instances: th-orphans and th-lift-instances. Here's a list of what should be all the Lift instances on hackage: < http://code.haskell.org/~aavogt/lift_instances_hackage.txt>. There are many duplicates. Text and Double instances happen quite often (9 in ~ 715 packages depending on template-haskell): Text (Lazy or Strict) ./aeson-schema-0.3.0.5/src/Data/Aeson/TH/Lift.hs ./parse-help-0.0/System/Console/ParseHelp.hs ./ssh-0.3.1/test/EmbedTree.hs ./SimpleLog-0.1.0.3/src/System/Log/SLog/Format.hs ./persistent-template-2.1.3/Database/Persist/TH.hs -- indirectly via a Lift' class ./th-lift-instances-0.1.5/src/Instances/TH/Lift.hs ./typedquery-0.1.0.2/src/Database/TypedQuery/Types.hs ./haskhol-core-1.1.0/src/HaskHOL/Core/Kernel/Prims.hs ./lighttpd-conf-0.4/src/Lighttpd/Conf/Instances/Lift.hs ./yaml-rpc-1.0.3/Network/YAML/API.hs Double ./aeson-schema-0.3.0.5/src/Data/Aeson/TH/Lift.hs ./th-orphans-0.11.1/src/Language/Haskell/TH/Instances.hs ./paragon-0.1.28/src/Language/Java/Paragon/QuasiQuoter/Lift.hs ./ta-0.1/Database/TA/Helper/LiftQ.hs ./CCA-0.1.5.3/src/Language/Haskell/TH/Instances.hs ./th-lift-instances-0.1.5/src/Instances/TH/Lift.hs ./llvm-general-quote-0.2.0.0/src/LLVM/General/Quote/AST.hs 
./Eq-1.1.3/Language/Eq/Quasiquote.hs ./ivory-0.1.0.0/src/Ivory/Language/Syntax/AST.hs Those orphans could be removed if we had instance Data a => Lift a, because those types have a mostly sane Data instance. While there are some differences in the Double instances: lift x = [| read $(lift (show x)) |] -- in ./paragon-0.1.28/src/Language/Java/Paragon/QuasiQuoter/Lift.hs lift d = [| D# $(return (LitE (DoublePrimL (toRational d)))) |] lift d = [| fromRational $(litE . rationalL . toRational $ d) :: Double |] lift = lift . toRational $(lift (0/0)) is wrong for most instances already: going through Rational gives -Infinity. Incidentally dataToExpQ (const Nothing) (0/0) also gives -Infinity. Another difference is whether :t $(lift (1.0 :: Double)) is Double, Fractional a => a, Read a => a, or something else. Apart from those two issues, it seems that the duplicated Lift instances do the same thing as liftData, so the "adding an import will change the program's runtime behavior" seems rare compared with "adding an import breaks your program because we didn't agree to put the orphans in one package only". So I'm: +1 on Data a => Lift a +1 on the DefaultSignatures, if the overlapping instance won't happen -------------- next part -------------- An HTML attachment was scrubbed... URL: From dan.doel at gmail.com Mon May 18 00:08:04 2015 From: dan.doel at gmail.com (Dan Doel) Date: Sun, 17 May 2015 20:08:04 -0400 Subject: IsString [Char] instance Message-ID: Greetings, Today, someone came into #haskell and asked why they couldn't type the equivalent of: > "hi" ++ "bye" into GHCi with OverloadedStrings enabled. The answer is that it's ambiguous, because (++) only determines the strings to be [a], and not [Char]. I noticed that this could actually be solved by making the instance: instance (a ~ Char) => IsString [a] where ... Which causes [Char] to be inferred as soon as [a] is. I then searched my libraries mail and noticed that we'd discussed this two years ago. 
The proposal for this instance change was rejected based on ExtendedDefaultRules being beefed up to solve this case. But then no one actually implemented the better defaulting. So, I'm proposing that the issue be fixed for real. I'm not terribly concerned with how it gets fixed, but there's not a great reason for this to not behave better than it currently does. If someone steps up and makes defaulting better, then that's great. But if not, then the libraries committee can fix this very easily for GHC 7.12, and I think it's better to do so than to wait if there are no signs that the alternative is going to happen. I don't think we need to nail down which of the two solutions we're going to choose right now, but it'd be good to resolve that we're going to fix it, one way or another, by some well-defined date. Here's a link to the previous discussion: http://comments.gmane.org/gmane.comp.lang.haskell.libraries/20088 Discussion period: 2 weeks -- Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekmett at gmail.com Mon May 18 01:36:30 2015 From: ekmett at gmail.com (Edward Kmett) Date: Mon, 18 May 2015 11:36:30 +1000 Subject: IsString [Char] instance In-Reply-To: References: Message-ID: +1 from me. Let's resolve to do something about the situation before 7.12 ships. I'd definitely prefer some kind of smarter defaulting, because that'd also potentially address the length "foo" overloaded string problem that got worse with the Foldable/Traversable Proposal, but even just the instance (a ~ Char) => IsString [a] solution goes a long way and has the benefit that it could be implemented today without having to figure out and test complex defaulting rules. -Edward On Mon, May 18, 2015 at 10:08 AM, Dan Doel wrote: > Greetings, > > Today, someone came into #haskell and asked why they couldn't type the > equivalent of: > > > "hi" ++ "bye" > > into GHCi with OverloadedStrings enabled.
The answer is that it's > ambiguous, because (++) only determines the strings to be [a], and not > [Char]. > > I noticed that this could actually be solved by making the instance: > > instance (a ~ Char) => IsString [a] where ... > > Which causes [Char] to be inferred as soon as [a] is. I then searched my > libraries mail and noticed that we'd discussed this two years ago. The > proposal for this instance change was rejected based on > ExtendedDefaultRules being beefed up to solve this case. But then no one > actually implemented the better defaulting. > > So, I'm proposing that the issue be fixed for real. I'm not terribly > concerned with how it gets fixed, but there's not a great reason for this > to not behave better than it currently does. If someone steps up and makes > defaulting better, than that's great. But if not, then the libraries > committee can fix this very easily for GHC 7.12, and I think it's better to > do so than to wait if there are no signs that the alternative is going to > happen. > > I don't think we need to nail down which of the two solutions we're going > to choose right now, but it'd be good to resolve that we're going to fix > it, one way or another, by some well defined date. > > Here's a link to the previous discussion: > > http://comments.gmane.org/gmane.comp.lang.haskell.libraries/20088 > > Discussion period: 2 weeks > > -- Dan > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon May 18 09:29:25 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 18 May 2015 09:29:25 +0000 Subject: IsString [Char] instance In-Reply-To: References: Message-ID: Have you tested this? If GHC sees two overlapping instances instance ... 
=> IsString [a] instance IsString [Char] it'll refrain from using the former until it knows that the latter can't match. So the “extended defaulting rules” may actually be needed. I'm not against beefing up the extended default rules if someone wants to write a specification. It wouldn't be hard to do. Simon From: Libraries [mailto:libraries-bounces at haskell.org] On Behalf Of Edward Kmett Sent: 18 May 2015 02:37 To: Dan Doel Cc: Haskell Libraries Subject: Re: IsString [Char] instance +1 from me. Let's resolve to do something about the situation before 7.12 ships. I'd definitely prefer some kind of smarter defaulting, because that'd also potentially address the length "foo" overloaded string problem that got worse with the Foldable/Traversable Proposal, but even just the instance (a ~ Char) => IsString [a] solution goes a long way and has the benefit that it could be implemented today without having to figure out and test complex defaulting rules. -Edward On Mon, May 18, 2015 at 10:08 AM, Dan Doel > wrote: Greetings, Today, someone came into #haskell and asked why they couldn't type the equivalent of: > "hi" ++ "bye" into GHCi with OverloadedStrings enabled. The answer is that it's ambiguous, because (++) only determines the strings to be [a], and not [Char]. I noticed that this could actually be solved by making the instance: instance (a ~ Char) => IsString [a] where ... Which causes [Char] to be inferred as soon as [a] is. I then searched my libraries mail and noticed that we'd discussed this two years ago. The proposal for this instance change was rejected based on ExtendedDefaultRules being beefed up to solve this case. But then no one actually implemented the better defaulting. So, I'm proposing that the issue be fixed for real. I'm not terribly concerned with how it gets fixed, but there's not a great reason for this to not behave better than it currently does. If someone steps up and makes defaulting better, then that's great.
But if not, then the libraries committee can fix this very easily for GHC 7.12, and I think it's better to do so than to wait if there are no signs that the alternative is going to happen. I don't think we need to nail down which of the two solutions we're going to choose right now, but it'd be good to resolve that we're going to fix it, one way or another, by some well-defined date. Here's a link to the previous discussion: http://comments.gmane.org/gmane.comp.lang.haskell.libraries/20088 Discussion period: 2 weeks -- Dan _______________________________________________ Libraries mailing list Libraries at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvr at gnu.org Mon May 18 10:27:43 2015 From: hvr at gnu.org (Herbert Valerio Riedel) Date: Mon, 18 May 2015 12:27:43 +0200 Subject: IsString [Char] instance In-Reply-To: (Simon Peyton Jones's message of "Mon, 18 May 2015 09:29:25 +0000") References: Message-ID: <87h9rap4dc.fsf@gnu.org> On 2015-05-18 at 11:29:25 +0200, Simon Peyton Jones wrote: > Have you tested this? If GHC sees two overlapping instances > instance ... => IsString [a] > instance IsString [Char] > it'll refrain from using the former until it knows that the latter > can't match. FWIW, the original stated problem of having GHC be able to infer "hi" ++ "bye" under the influence of -XOverloadedStrings which would cause GHC to complain λ:2> "foo" ++ "bar" :2:1: Non type-variable argument in the constraint: Data.String.IsString [a] (Use FlexibleContexts to permit this) When checking that ‘it’ has the inferred type it :: forall a.
Data.String.IsString [a] => [a] is actually resolved by the following patch (which I tried w/ GHC HEAD): --8<---------------cut here---------------start------------->8--- diff --git a/libraries/base/Data/String.hs b/libraries/base/Data/String.hs index a03569f..2bed477 100644 --- a/libraries/base/Data/String.hs +++ b/libraries/base/Data/String.hs @@ -1,5 +1,5 @@ {-# LANGUAGE Trustworthy #-} -{-# LANGUAGE NoImplicitPrelude, FlexibleInstances #-} +{-# LANGUAGE NoImplicitPrelude, GADTs #-} ----------------------------------------------------------------------------- -- | @@ -34,6 +34,6 @@ import Data.List (lines, words, unlines, unwords) class IsString a where fromString :: String -> a -instance IsString [Char] where +instance (a ~ Char) => IsString [a] where fromString xs = xs --8<---------------cut here---------------end--------------->8--- From michael at snoyman.com Mon May 18 10:30:53 2015 From: michael at snoyman.com (Michael Snoyman) Date: Mon, 18 May 2015 10:30:53 +0000 Subject: IsString [Char] instance In-Reply-To: References: Message-ID: I'm +1 as well. I have a hard time thinking of a list of something besides `Char`s for which we'd want an `IsString` instance to work for. On Mon, May 18, 2015 at 4:36 AM Edward Kmett wrote: > +1 from me. Let's resolve to do something about the situation before 7.12 > ships. > > I'd definitely prefer some kind of smarter defaulting, because that'd also > potentially address the length "foo" overloaded string problem that got > worse with the Foldable/Traversable Proposal, but even just the > > instance (a ~ Char) => IsString [a] > > solution goes a long way and has the benefit that it could be implemented > today without having to figure out and test complex defaulting rules. > > -Edward > > On Mon, May 18, 2015 at 10:08 AM, Dan Doel wrote: > >> Greetings, >> >> Today, someone came into #haskell and asked why they couldn't type the >> equivalent of: >> >> > "hi" ++ "bye" >> >> into GHCi with OverloadedStrings enabled. 
The answer is that it's >> ambiguous, because (++) only determines the strings to be [a], and not >> [Char]. >> >> I noticed that this could actually be solved by making the instance: >> >> instance (a ~ Char) => IsString [a] where ... >> >> Which causes [Char] to be inferred as soon as [a] is. I then searched my >> libraries mail and noticed that we'd discussed this two years ago. The >> proposal for this instance change was rejected based on >> ExtendedDefaultRules being beefed up to solve this case. But then no one >> actually implemented the better defaulting. >> >> So, I'm proposing that the issue be fixed for real. I'm not terribly >> concerned with how it gets fixed, but there's not a great reason for this >> to not behave better than it currently does. If someone steps up and makes >> defaulting better, than that's great. But if not, then the libraries >> committee can fix this very easily for GHC 7.12, and I think it's better to >> do so than to wait if there are no signs that the alternative is going to >> happen. >> >> I don't think we need to nail down which of the two solutions we're going >> to choose right now, but it'd be good to resolve that we're going to fix >> it, one way or another, by some well defined date. >> >> Here's a link to the previous discussion: >> >> http://comments.gmane.org/gmane.comp.lang.haskell.libraries/20088 >> >> Discussion period: 2 weeks >> >> -- Dan >> >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> >> > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hvr at gnu.org Mon May 18 10:43:08 2015 From: hvr at gnu.org (Herbert Valerio Riedel) Date: Mon, 18 May 2015 12:43:08 +0200 Subject: IsString [Char] instance In-Reply-To: (Dan Doel's message of "Sun, 17 May 2015 20:08:04 -0400") References: Message-ID: <87d21yp3nn.fsf@gnu.org> On 2015-05-18 at 02:08:04 +0200, Dan Doel wrote: > Today, someone came into #haskell and asked why they couldn't type the > equivalent of: > > > "hi" ++ "bye" > > into GHCi with OverloadedStrings enabled. A minor observation: had (s)he used <> instead, GHCi would have happily inferred (and somewhat surprisingly reduced to a polymorphic "foobar"): GHCi, version 7.10.1: http://www.haskell.org/ghc/ :? for help λ:2> "foo" <> "bar" "foobar" it :: (Data.String.IsString m, Monoid m) => m From gale at sefer.org Mon May 18 14:57:49 2015 From: gale at sefer.org (Yitzchak Gale) Date: Mon, 18 May 2015 17:57:49 +0300 Subject: IsString [Char] instance In-Reply-To: References: Message-ID: Michael Snoyman wrote: > I have a hard time thinking of a list of something besides > `Char`s for which we'd want an `IsString` instance to work for. I have an easy time. The `Data.Char` module is always woefully behind the Unicode standard, and there are always Unicode features that are implemented differently than what some people might want. A case in point is toUpper and toLower. Those were fixed in Data.Text, but are still broken in Data.Char. For me, so far this has not been important enough for me to need to implement my own Char type, or a newtype wrapper with improvements. But it is certainly conceivable that others might need this. Whether or not you like those examples - it is really the wrong approach to outlaw IsString [a] instance forever more for any a except Char. Yes, this is a serious problem that should be solved. But are we really giving up on doing it the right way and fixing defaulting rules? Simon wrote that it wouldn't be hard.
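For reference, the inference behaviour the equality-constrained instance relies on can be seen in a standalone sketch (primed names so it does not clash with Data.String; illustration only, not code from the thread): GHC commits to the [a] instance as soon as it knows it has a list, and only then does the instance context force the element type to Char.

```haskell
{-# LANGUAGE GADTs #-}  -- provides the (~) equality-constraint syntax

-- A primed stand-in for Data.String's class, to keep this file standalone.
class IsString' a where
  fromString' :: String -> a

-- Matching the head [a] commits GHC to this instance immediately;
-- the context (a ~ Char) then pins down the element type.  That ordering
-- is what lets an expression that only determines [a] still infer [Char].
instance (a ~ Char) => IsString' [a] where
  fromString' xs = xs

demo :: String
demo = fromString' "hi" ++ fromString' "bye"
```

With a plain `instance IsString' [Char]` instead, the same program would be rejected as ambiguous, which is the distinction Simon's overlap question probes.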
I propose: Set a reasonable time limit for someone to step up and provide a suggested fix to the defaulting rules. If that doesn't happen, then bite the bullet and do it using the type equality Thanks, Yitz From roma at ro-che.info Mon May 18 15:19:35 2015 From: roma at ro-che.info (Roman Cheplyaka) Date: Mon, 18 May 2015 18:19:35 +0300 Subject: IsString [Char] instance In-Reply-To: References: Message-ID: <555A0307.7060504@ro-che.info> You can always newtype your list. I'm opposed to having everyone suffer today just because someone someday may want to write an alternative instance. On 18/05/15 17:57, Yitzchak Gale wrote: > Michael Snoyman wrote: >> I have a hard time thinking of a list of something besides >> `Char`s for which we'd want an `IsString` instance to work for. > > I have an easy time. The `Data.Char` module is always > woefully behind the Unicode standard, and there are > always Unicode features that are implemented differently > than what some people might want. > > A case in point is toUpper and toLower. Those were fixed > in Data.Text, but are still broken in Data.Char. > > For me, so far this has not been important enough for > me to need to implement my own Char type, or a newtype > wrapper with improvements. But it is certainly conceivable > that others might need this. > > Whether or not you like those examples - it is really the > wrong approach to outlaw IsString [a] instance forever more > for any a except Char. > > Yes, this is a serious problem that should be solved. But > are we really giving up on doing it the right way and fixing > defaulting rules? Simon wrote that it wouldn't be hard. > > I propose: Set a reasonable time limit for someone to step > up and provide a suggested fix to the defaulting rules. 
If that > doesn't happen, then bite the bullet and do it using > the type equality > > Thanks, > Yitz > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From dan.doel at gmail.com Mon May 18 16:22:22 2015 From: dan.doel at gmail.com (Dan Doel) Date: Mon, 18 May 2015 12:22:22 -0400 Subject: IsString [Char] instance In-Reply-To: References: Message-ID: On Mon, May 18, 2015 at 10:57 AM, Yitzchak Gale wrote: > Michael Snoyman wrote: > > I have a hard time thinking of a list of something besides > > `Char`s for which we'd want an `IsString` instance to work for. > > I have an easy time. The `Data.Char` module is always > woefully behind the Unicode standard, and there are > always Unicode features that are implemented differently > than what some people might want. > > A case in point is toUpper and toLower. Those were fixed > in Data.Text, but are still broken in Data.Char. > > For me, so far this has not been important enough for > me to need to implement my own Char type, or a newtype > wrapper with improvements. But it is certainly conceivable > that others might need this. > > Whether or not you like those examples - it is really the > wrong approach to outlaw IsString [a] instance forever more > for any a except Char. > I'm a little skeptical that someone would be concerned about proper Unicode support and still want [Codepoint]---with no abstraction barrier---to be the string type. I think it'd even be good if we could ditch [Char] as the blessed string type in Haskell altogether. But that's a much more involved process. > Yes, this is a serious problem that should be solved. But > are we really giving up on doing it the right way and fixing > defaulting rules? 
Simon wrote that it wouldn't be hard. > It doesn't really matter how hard it is unless someone actually does it. I'm not exactly sure what people want out of beefed up defaulting, and what is feasible, either. Fixing 'show ("fizz" ++ "buzz")' is easy. If it also has to fix 'length "hello"', that may be more tricky, although it's something that the instance solution isn't really capable of fixing. So it's best not to get hung up on that. I propose: Set a reasonable time limit for someone to step > up and provide a suggested fix to the defaulting rules. If that > doesn't happen, then bite the bullet and do it using > the type equality > What is a reasonable time limit? It's been two years already. 7.12 is probably between 1/2 and 1 more. Do we really need more than that? -- Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Mon May 18 16:44:18 2015 From: david.feuer at gmail.com (David Feuer) Date: Mon, 18 May 2015 12:44:18 -0400 Subject: IsString [Char] instance In-Reply-To: References: Message-ID: I think we should ditch the notion that it's valid to round-trip from Word32 to Char and back. Char should *only* take on valid codepoints (although not only assigned ones). This will break some code (particularly some weird stuff with file names, IIRC), but I think that code is broken already. I don't, however, think that will fully address Yitzchak's concerns. On Mon, May 18, 2015 at 12:22 PM, Dan Doel wrote: > > On Mon, May 18, 2015 at 10:57 AM, Yitzchak Gale wrote: >> >> Michael Snoyman wrote: >> > I have a hard time thinking of a list of something besides >> > `Char`s for which we'd want an `IsString` instance to work for. >> >> I have an easy time. The `Data.Char` module is always >> woefully behind the Unicode standard, and there are >> always Unicode features that are implemented differently >> than what some people might want. >> >> A case in point is toUpper and toLower.
Those were fixed >> in Data.Text, but are still broken in Data.Char. >> >> For me, so far this has not been important enough for >> me to need to implement my own Char type, or a newtype >> wrapper with improvements. But it is certainly conceivable >> that others might need this. >> >> Whether or not you like those examples - it is really the >> wrong approach to outlaw IsString [a] instance forever more >> for any a except Char. > > > I'm a little skeptical that someone would be concerned about proper Unicode > support and still want [Codepoint]---with no abstraction barrier---to be the > string type. I think it'd even be good if we could ditch [Char] as the > blessed string type in Haskell altogether. But that's a much more involved > process. > >> >> Yes, this is a serious problem that should be solved. But >> are we really giving up on doing it the right way and fixing >> defaulting rules? Simon wrote that it wouldn't be hard. > > > It doesn't really matter how hard it is unless someone actually does it. > > I'm not exactly sure what people want out of beefed up defaulting, and what > is feasible, either. Fixing 'show ("fizz" ++ "buzz")' is easy. If it also > has to fix 'length "hello"', that may be more tricky, although it's > something that the instance solution isn't really capable of fixing. So it's > best not to get hung up on that. > >> I propose: Set a reasonable time limit for someone to step >> up and provide a suggested fix to the defaulting rules. If that >> doesn't happen, then bite the bullet and do it using >> the type equality > > > What is a reasonable time limit? It's been two years already. 7.12 is > probably between 1/2 and 1 more. Do we really need more than that? 
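For readers not familiar with it, the "type equality" trick being proposed can be sketched in a self-contained way. The class and names below (IsStr, fromStr) are stand-ins chosen to avoid clashing with the real IsString from Data.String; they are not from any library:

```haskell
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE TypeFamilies #-}

-- Stand-in for Data.String.IsString, to demonstrate the trick
-- without overlapping base's own instances.
class IsStr a where
  fromStr :: String -> a

-- The instance head matches [a] for *any* element type, so the
-- constraint solver commits to it immediately; the (a ~ Char)
-- constraint then fixes the element type to Char afterwards.
-- This is what lets e.g. show (fromStr "fizz" ++ fromStr "buzz")
-- typecheck without relying on defaulting.
instance a ~ Char => IsStr [a] where
  fromStr = id

main :: IO ()
main = putStrLn (fromStr "fizz" ++ fromStr "buzz")
```

Note that Dan's caveat still applies: this fixes inference for string-literal expressions, but not for something like length "hello", which never goes through the class at all.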
> > -- Dan

From adam at bergmark.nl Mon May 25 22:13:09 2015
From: adam at bergmark.nl (Adam Bergmark)
Date: Tue, 26 May 2015 00:13:09 +0200
Subject: aeson-utils: aeson-0.9 compatibility package
Message-ID:

Hi folks, aeson-0.9 was released today. It changes the behavior of `decode` and `eitherDecode`: They now both allow atomic values on the top level. Previously the behavior was inconsistent, `encode 1 = "1"` but `decode "1" :: Maybe Int = Nothing`. aeson-utils[1] has the functions `decodeV` and `eitherDecodeV` that work like aeson-0.9's `decode` and `eitherDecode` respectively, also with older aeson versions. If you want to stay backwards compatible without having varying semantics based on dependencies, check it out!

[1] http://hackage.haskell.org/package/aeson-utils

Regards, Adam

From marlowsd at gmail.com Thu May 28 13:31:37 2015
From: marlowsd at gmail.com (Simon Marlow)
Date: Thu, 28 May 2015 16:31:37 +0300
Subject: mapM /= traverse?
In-Reply-To: References: <5550FFEF.1000806@gmail.com>
Message-ID: <556718B9.30109@gmail.com>

Yes, we also need mapM_ = traverse_. Simon

On 26/05/2015 21:59, Dan Doel wrote: > I'm going to submit a ticket for this. However, I have a related question: > > Do you care about mapM_? Right now it's defined as: > > mapM_ f = foldr ((>>) . f) (return ()) > > whereas it could be: > > mapM_ = traverse_ > > Does this not affect you in the same way (because (>>) allows the same > optimization as Applicative)? Or does this also need to be addressed?
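For reference, the two candidate definitions quoted above can be written out side by side. The primed names are hypothetical, chosen only to avoid clashing with the Prelude's mapM_:

```haskell
import Data.Foldable (traverse_)

-- The status-quo definition, tied to Monad via (>>):
mapM_' :: (Foldable t, Monad m) => (a -> m b) -> t a -> m ()
mapM_' f = foldr ((>>) . f) (return ())

-- The proposed definition; traverse_ is essentially
--   foldr ((*>) . f) (pure ())
-- so it only requires Applicative, and an instance that overrides
-- (*>) gets the same opportunity to optimize as one overriding (>>).
mapM_'' :: (Foldable t, Applicative m) => (a -> m b) -> t a -> m ()
mapM_'' = traverse_

main :: IO ()
main = do
  mapM_'  print [1, 2 :: Int]
  mapM_'' print [3, 4 :: Int]
```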
> > -- Dan

On Mon, May 11, 2015 at 3:15 PM, Simon Marlow wrote: > > I was hoping that in GHC 7.10 we would make mapM = traverse for > lists, but it appears this isn't the case: the Traversable instance > for lists overrides mapM to be the manually-defined version in terms > of foldr. > > Why is this? Fusion? > > Unfortunately since I want mapM = traverse (for Haxl) I'll need to > continue to redefine it in our custom Prelude. > > Cheers, > Simon

From dan.doel at gmail.com Tue May 26 18:59:45 2015
From: dan.doel at gmail.com (Dan Doel)
Date: Tue, 26 May 2015 14:59:45 -0400
Subject: mapM /= traverse?
In-Reply-To: <5550FFEF.1000806@gmail.com> References: <5550FFEF.1000806@gmail.com>
Message-ID:

I'm going to submit a ticket for this. However, I have a related question: Do you care about mapM_? Right now it's defined as: mapM_ f = foldr ((>>) . f) (return ()) whereas it could be: mapM_ = traverse_ Does this not affect you in the same way (because (>>) allows the same optimization as Applicative)? Or does this also need to be addressed?

-- Dan

On Mon, May 11, 2015 at 3:15 PM, Simon Marlow wrote: > I was hoping that in GHC 7.10 we would make mapM = traverse for lists, but > it appears this isn't the case: the Traversable instance for lists > overrides mapM to be the manually-defined version in terms of foldr. > > Why is this? Fusion? > > Unfortunately since I want mapM = traverse (for Haxl) I'll need to > continue to redefine it in our custom Prelude. > > Cheers, > Simon
From vogt.adam at gmail.com Fri May 29 23:30:57 2015
From: vogt.adam at gmail.com (adam vogt)
Date: Fri, 29 May 2015 19:30:57 -0400
Subject: Template Haskell changes to names and package keys
In-Reply-To: <1430837344-sup-3322@sabre>
References: <1430500118-sup-420@sabre> <20150502093814.GA1771@24f89f8c-e6a1-4e75-85ee-bb8a3743bb9f> <50ae1abd4ebf4c54ad679f4a5b74e7ca@DB4PR30MB030.064d.mgd.msft.net> <1430837344-sup-3322@sabre>
Message-ID:

On Tue, May 5, 2015 at 10:50 AM, Edward Z. Yang wrote: > Excerpts from Simon Peyton Jones's message of 2015-05-05 01:15:50 -0700: > > Very good exercise! Looking for how an existing API is used (perhaps in > a clumsy way, because of the inadequacies of the existing API) is a good > guide to improving it. > > > > e.g. If tuple names are an issue, let's provide a TH API for getting > their names!! > > Hello Simon, > > The right and proper way of getting a tuple name (well, constructor > really, but that's the only reason people want names) should be: > > [| (,) |] > > But people sometimes don't want to use quotes, e.g. as in > https://github.com/ekmett/lens/issues/496 where they want to > work with stage1 GHC. > > So in this case, https://ghc.haskell.org/trac/ghc/ticket/10382 (making > quotes work with stage1 GHC) will help a lot.

Language.Haskell.TH already exports tupleTypeName tupleDataName
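A sketch of what those helpers look like in use: tupleDataName and tupleTypeName take the tuple's arity and return the corresponding Name, so a library that must work with a stage1 GHC can build tuple expressions without quotes. The name pairExp below is illustrative, not from any library:

```haskell
import Language.Haskell.TH

-- Build the expression (,) 1 2 without a quote like [| (,) |]:
-- tupleDataName 2 is the Name of the pair data constructor (,),
-- and tupleTypeName 2 would similarly name the pair type constructor.
pairExp :: Q Exp
pairExp = return (ConE (tupleDataName 2)
                  `AppE` LitE (IntegerL 1)
                  `AppE` LitE (IntegerL 2))
```

Splicing this with $(pairExp) should then produce the pair (1, 2).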