From J.Hage at uu.nl Mon Oct 1 08:35:56 2018 From: J.Hage at uu.nl (Jurriaan Hage) Date: Mon, 1 Oct 2018 10:35:56 +0200 Subject: A question about run-time errors when class members are undefined Message-ID: <52E76BE8-D80A-43D3-BB2C-D0207577A647@uu.nl> Hello, We are adding classes and instances to Helium. We wondered about the aspect that it is allowed to have a class instance of which not all fields have a piece of code/value associated with them, and that as a result when you happen to call these, a run-time error results. (see Sec. 4.3.2 of the Haskell 2010 report). Does anyone know of a rationale for this choice, since it seems rather unhaskell-like. best, Jur From cgibbard at gmail.com Mon Oct 1 09:19:44 2018 From: cgibbard at gmail.com (Cale Gibbard) Date: Mon, 1 Oct 2018 05:19:44 -0400 Subject: A question about run-time errors when class members are undefined In-Reply-To: <52E76BE8-D80A-43D3-BB2C-D0207577A647@uu.nl> References: <52E76BE8-D80A-43D3-BB2C-D0207577A647@uu.nl> Message-ID: This and the fact that you may leave record fields unspecified when initially constructing a record are two things I'd probably change if I could. In the rare case of a class with a method that will usually be an error, you could still define that as the default method implementation in the class. On Mon, Oct 1, 2018, 04:36 Jurriaan Hage, wrote: > Hello, > > We are adding classes and instances to Helium. > > We wondered about the aspect that it is allowed to have a class instance > of which not all fields have a piece of code/value associated with them, > and > that as a result when you happen to call these, a run-time error results. > (see Sec. 4.3.2 of the Haskell 2010 report). > > Does anyone know of a rationale for this choice, since it seems rather > unhaskell-like. 
> > best, > Jur > > _______________________________________________ > Haskell-prime mailing list > Haskell-prime at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anthony_clayden at clear.net.nz Thu Oct 4 01:55:48 2018 From: anthony_clayden at clear.net.nz (Anthony Clayden) Date: Thu, 4 Oct 2018 14:55:48 +1300 Subject: A question about run-time errors when class members are undefined Message-ID: > We are adding classes and instances to Helium. > We wondered about the aspect that it is allowed to have a class instance > of which not all fields have a piece of code/value associated with them, ... I have a suggestion for that. But first let me understand where you're going with Helium. Are you aiming to slavishly reproduce Haskell's classes/instances, or is this a chance for a rethink? Will you want to include associated types and associated datatypes in the classes? Note those are just syntactic sugar for top-level type families and data families. It does aid readability to put them within the class. I would certainly rethink the current grouping of methods into classes. Number purists have long wanted to split class Num into Additive vs Multiplicative. (Additive would be a superclass of Multiplicative.) For the Naturals perhaps we want Presburger arithmetic then Additive just contains (+), with `negate` certainly in a different class, perhaps (-) subtract also in a dedicated class. Also there's people wanting Monads with just `bind` not `return`. But restructuring the Prelude classes/methods is just too hard with all that legacy code. Even though you should be able to do: class (Additive a, Subtractive a, Negative a, Multiplicative a, Divisive a) => Num a Note there's a lot of classes with a single method, and that seems to be an increasing trend. 
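[Editorial sketch: the split described above can be written in today's Haskell. Everything below — the class names, the `Int` instances, and using `div` to stand in for `(/)` — is illustrative, not any real library:]

```haskell
module SplitNum where

import Prelude hiding (Num, negate, (*), (+), (-), (/))
import qualified Prelude as P

infixl 6 +, -
infixl 7 *, /

-- One small class per operation, with superclass constraints
-- recording the dependencies between them.
class Additive a where
  (+) :: a -> a -> a

class Additive a => Subtractive a where
  (-) :: a -> a -> a

class Subtractive a => Negative a where
  negate :: a -> a

class Additive a => Multiplicative a where
  (*) :: a -> a -> a

class Multiplicative a => Divisive a where
  (/) :: a -> a -> a

-- `Num` survives only as a bundle of the fine-grained classes.
class (Additive a, Subtractive a, Negative a,
       Multiplicative a, Divisive a) => Num a

-- Illustrative instances for Int, delegating to the Prelude;
-- integer division stands in for (/) in this toy setting.
instance Additive       Int where (+)    = (P.+)
instance Subtractive    Int where (-)    = (P.-)
instance Negative       Int where negate = P.negate
instance Multiplicative Int where (*)    = (P.*)
instance Divisive       Int where (/)    = P.div
instance Num Int
```

Whether `(/)` really belongs under `Multiplicative`, and where `Fractional`-style semantics would go, is exactly the kind of design question such a split opens up.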
Historically it wasn't so easy in Haskell to do that superclass constraints business; if it had been, perhaps there would be more classes with a single method. Then there are some disadvantages to classes holding multiple methods:
* the need to provide an overloading for every method, even though it may not make sense (or suffer a run-time error, as you say)
* the inability to 'fine tune' methods for a specific datatype [**]
* an internal compiler/object code cost of passing a group of methods in a dictionary as a tuple (as opposed to directly selecting a single method)
[**] Nats vs Integrals vs Fractionals for `Num`; and (this will be controversial, but ...) Some people want to/some languages do use (+) for concatenating Strings/lists. But the other methods in `Num` don't make any sense.
If all your classes have a single method, the class name would seem to be superfluous, and the class/instance decl syntax seems too verbose. So here's a suggestion. I'll need to illustrate with some definite syntax, but there's nothing necessary about it. (I'll borrow the Explicit Type Application `@`.) To give an instance overloading for method `show` or (==):

    show @Int = primShowInt            -- in effect pattern matching on the type
    (==) @Int = primEqInt              -- so see showList below

That is: I'm giving an overloading for those methods on type `Int`. How do I declare those methods are overloadable? In their signature:

    show @a :: a -> String             -- compare show :: Show a => a -> String
    (==) @a :: a -> a -> Bool

Non-overloadable functions don't have `@a` to the left of `::`. How do I show that a class has a superclass constraint? That is: a method has a supermethod constraint; we'll still use `=>`:

    show @a :: showsPrec @a => a -> String    -- supermethod constraint
    show @[a] :: show a => [a] -> String      -- instance decl, because not bare a, with constraint =>
    show @[a] xss = showList xss
    (*) @a :: (+) @a => a -> a -> a

Is this idea completely off the wall?
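[Editorial sketch: in today's Haskell, the nearest encoding of this is one class per method, under which `show @Int = primShowInt` is just an ordinary instance declaration. The single-method `Show` below shadows the Prelude's, and `primShowInt` is a hand-rolled stand-in for a primitive:]

```haskell
module MethodLevel where

import Data.List (intercalate)
import Prelude hiding (Show, show)

-- A single-method class: the class name is little more than a
-- proxy for the method name.
class Show a where
  show :: a -> String

-- "show @Int = primShowInt" becomes an ordinary instance decl.
instance Show Int where
  show = primShowInt

-- "show @[a] :: show a => [a] -> String" becomes an instance
-- with a (single-method) class constraint.
instance Show a => Show [a] where
  show xs = "[" ++ intercalate "," (map show xs) ++ "]"

-- Hand-rolled stand-in for a primitive supplied by the runtime.
primShowInt :: Int -> String
primShowInt n
  | n < 0     = '-' : digits (negate n)
  | otherwise = digits n
  where
    digits m
      | m < 10    = [toEnum (fromEnum '0' + m)]
      | otherwise = digits (m `div` 10) ++ digits (m `mod` 10)
```

Under this encoding the class name carries no information beyond the method name — which is exactly the redundancy the proposed syntax would eliminate.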
Take a look at Wadler's original 1988 memo introducing what became type classes. http://homepages.inf.ed.ac.uk/wadler/papers/class-letter/class-letter.txt It reviews several possible designs, but not all those possibilities made it into his paper (with Stephen Blott) later in 1988/January 1989. In particular look at Section 1's 'Simple overloading'. It's what I'm suggesting above (modulo a bit of syntax). At the end of Section 1, Wadler rejects this design because of "potential blow-ups". But he should have pushed the idea a bit further. Perhaps he was scared to allow function/method names into type signatures? (I've already sneaked that in above with constraints.) These days Haskell is getting more relaxed about namespaces: the type `@`pplication exactly allows type names appearing in terms. So to counter his example, the programmer writes:

    square x = x * x              -- no explicit signature given
    square :: (*) @a => a -> a    -- signature inferred, because (*) is overloaded
    rms = sqrt . square           -- no explicit signature
    rms :: sqrt @a => a -> a      -- signature inferred

Note the inferred signature for `rms` doesn't need `(*) @a` even though it's inferred from `square`. Because (*) is a supermethod of `sqrt`. `sqrt` might also have other supermethods, that amount to `Floating`.

> ... a run-time error results.
>
> Does anyone know of a rationale for this choice, since it seems rather unhaskell-like.

If you allow default method implementations (in the class, as Cale points out), then I guess you have to allow instance decls that don't mention all the methods. I think there should at least be a warning if there's no default method. Also beware the default method might have a more specific signature, which means it can't be applied for some particular instance. Altogether, I'd say, the culprit is the strong bias in early Haskell to bunch methods together into classes.
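[Editorial sketch: the warning asked for here exists in GHC — though no Haskell report mandates it — as the `MINIMAL` pragma. A sketch with an invented `Eq'` class, showing both mutually recursive defaults and the warning behaviour:]

```haskell
module Minimal where

-- Mutually recursive defaults, as in the real Eq class.
class Eq' a where
  eq, neq :: a -> a -> Bool
  eq  x y = not (neq x y)
  neq x y = not (eq x y)
  {-# MINIMAL eq | neq #-}  -- GHC warns unless an instance gives at least one

data Colour = Red | Green

-- Fine: eq is given, neq falls out of the default.
instance Eq' Colour where
  eq Red   Red   = True
  eq Green Green = True
  eq _     _     = False

-- An instance giving *neither* method still compiles, with a warning;
-- with these defaults, a call would then loop at run time rather than
-- raise the missing-method error the report describes.
data Unit = Unit
instance Eq' Unit
```

With `{-# MINIMAL eq | neq #-}` GHC flags the empty `Unit` instance at compile time; note the failure mode under mutually recursive defaults is a loop, not an error message.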
These days with Haskell's richer/more fine-tuned typeclass features: what do typeclasses do that can't be done more precisely at method level -- indeed that would _better_ be done at method level? AntC -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at cs.brynmawr.edu Thu Oct 4 03:08:27 2018 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Wed, 3 Oct 2018 23:08:27 -0400 Subject: Quo vadis? In-Reply-To: <56c16080-b859-33c9-798f-234590486e45@ciktel.net> References: <56c16080-b859-33c9-798f-234590486e45@ciktel.net> Message-ID: <0448613A-8843-494B-8B26-084034BCD54F@cs.brynmawr.edu> There was no Haskell 2020 meeting this year at ICFP. Sadly, interest seems to have waned here... Richard > On Sep 26, 2018, at 8:18 AM, Mario Blažević wrote: > > I could not attend ICFP this year. Has there been any discussion at all of Haskell 2020 there? If so, can the rest of us get a summary? > > > > _______________________________________________ > Haskell-prime mailing list > Haskell-prime at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime From anthony_clayden at clear.net.nz Fri Oct 5 01:41:53 2018 From: anthony_clayden at clear.net.nz (Anthony Clayden) Date: Fri, 5 Oct 2018 14:41:53 +1300 Subject: Quo vadis? Message-ID: > There was no Haskell 2020 meeting this year at ICFP. Sadly, interest seems to have waned here... Yes that is sad. So either Haskell 2020 won't happen, or it'll be only minor tweaks over H2010, as that was over H98. It's hard to imagine any serious Haskelling without FlexibleInstances/FlexibleContexts, MultiParamTypeClasses, UndecidableInstances -- and yet all of those are beyond Haskell 2010. And they were available in at least 2 compilers by around 2000. (Possibly they weren't by then entirely polished. 
I saw a claim on a StackOverflow answer that Hugs, last release 2006, is not H2010 compliant: not true; indeed several of the H2010 changes since H98 were proposed by the Hugs dev team, and agreed by the GHC team because GHC had copied Hugs.) I've had comments from non-Haskellers that they won't take Haskell seriously, because it seems to be a bunch of 'risky'/unstable/experimental features. Whereas I know those features have been stable at least a dozen years. And yet we still face the same conundrums as did H2010: With FlexibleInstances we can write overlapping instances. With FlexibleContexts we can put overlapping constraints on an instance, even if the head's types are H98 (non-Flexible). With MultiParamTypeClasses (which were anticipated in Wadler's very earliest 1988 proposal for typeclasses) we're bound to choose FunDeps and/or type families. FunDep instances either need repeated tyvars -- which brings us straight back to FlexibleInstances, or UndecidableInstances -- which will almost certainly lead to FlexibleInstances somewhere. I feel those parts of Haskell have been in suspended animation/arrested development since ~2006: that was the last release of Hugs; the 'FunDeps via CHRs' paper [**]; the start of associated types/type families. [**] AFAICT, that paper was a purely academic exercise: GHC was not changed in light of its findings, so has a bogus implementation of FunDeps that persists to this day. The paper did not address the combo of FunDeps + Overlaps, so said nothing about what had been a stable cottage industry of type-level programming since at least 2004 (the HList paper). Is there a terms of ref for Haskell 2020? Would any of the above issues be within its scope? (Supposing there were any interest ...) AntC -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From J.Hage at uu.nl Fri Oct 5 08:00:18 2018 From: J.Hage at uu.nl (Jurriaan Hage) Date: Fri, 5 Oct 2018 10:00:18 +0200 Subject: A question about run-time errors when class members are undefined In-Reply-To: References: Message-ID: <1ACE5D80-CE74-46FB-8D99-7B661CD5C3A6@uu.nl> Hi Anthony, We first go the slavish route, to provide a basis for changing things later. So I am not looking for alternative ways of doing this, I am just wondering whether there is a rationale for doing things this way. The document does not give one. And now I hear that records suffer from the same issue (thanks Cale). We had not run into this yet, because right now Helium does not have ‘em. Both sound fishy to me and if nobody can make a case for having things this way in the first place, I wonder why it’s like that. Adding associated types, or any such language extension, is a long way off at this point. The only one I might consider at this time is GADTs, but only if I find a master student to investigate type error diagnosis in that setting. Jur > On 4Oct, 2018, at 03:55, Anthony Clayden wrote: > > > We are adding classes and instances to Helium. > > We wondered about the aspect that it is allowed to have a class instance > > of which not all fields have a piece of code/value associated with them, ... > > I have a suggestion for that. But first let me understand where you're going with Helium. Are you aiming to slavishly reproduce Haskell's classes/instances, or is this a chance for a rethink? > > Will you want to include associated types and associated datatypes in the classes? Note those are just syntactic sugar for top-level type families and data families. It does aid readability to put them within the class. > > I would certainly rethink the current grouping of methods into classes. Number purists have long wanted to split class Num into Additive vs Multiplicative. (Additive would be a superclass of Multiplicative.)
For the Naturals perhaps we want Presburger arithmetic then Additive just contains (+), with `negate` certainly in a different class, perhaps (-) subtract also in a dedicated class. Also there's people wanting Monads with just `bind` not `return`. But restructuring the Prelude classes/methods is just too hard with all that legacy code. Even though you should be able to do: > > class (Additive a, Subtractive a, Negative a, Multiplicative a, Divisive a) => Num a > > Note there's a lot of classes with a single method, and that seems to be an increasing trend. Historically it wasn't so easy in Haskell to do that superclass constraints business; if it had been perhaps there would be more classes with a single method. Then there's some disadvantages to classes holding multiple methods: > * the need to provide an overloading for every method, even though it may not make sense > (or suffer a run-time error, as you say) > * the inability to 'fine tune' methods for a specific datatype [**] > * an internal compiler/object code cost of passing a group of methods in a dictionary as tuple > (as apposed to directly selecting a single method) > > [**] Nats vs Integrals vs Fractionals for `Num`; and (this will be controversial, but ...) Some people want to/some languages do use (+) for concatenating Strings/lists. But the other methods in `Num` don't make any sense. > > If all your classes have a single method, the class name would seem to be superfluous, and the class/instance decl syntax seems too verbose. > > So here's a suggestion. I'll need to illustrate with some definite syntax, but there's nothing necessary about it. (I'll borrow the Explicit Type Application `@`.) To give an instance overloading for method `show` or (==) > > show @Int = primShowInt -- in effect pattern matching on the type > (==) @Int = primEqInt -- so see showList below > That is: I'm giving an overloading for those methods on type `Int`. How do I declare those methods are overloadable? 
In their signature: > > show @a :: a -> String -- compare show :: Show a => a -> String > (==) @a :: a -> a -> Bool > Non-overladable functions don't have `@a` to the left of `::`. > How do I show that a class has a superclass constraint? That is: a method has a supermethod constraint, we'll still use `=>`: > > show @a :: showsPrec @a => a -> String -- supermethod constraint > show @[a] :: show a => [a] -> String -- instance decl, because not bare a, with constraint => > show @[a] xss = showList xss > (*) @a :: (+) @a => a -> a -> a > > Is this idea completely off the wall? Take a look at Wadler's original 1988 memo introducing what became type classes. > http://homepages.inf.ed.ac.uk/wadler/papers/class-letter/class-letter.txt > > It reviews several possible designs, but not all those possibilities made it into his paper (with Stephen Blott) later in 1988/January 1989. In particular look at Section 1's 'Simple overloading'. It's what I'm suggesting above (modulo a bit of syntax). At the end of Section 1, Wadler rejects this design because of "potential blow-ups". But he should have pushed the idea a bit further. Perhaps he was scared to allow function/method names into type signatures? (I've already sneaked that in above with constraints.) These days Haskell is getting more relaxed about namespaces: the type `@`pplication exactly allows type names appearing in terms. So to counter his example, the programmer writes: > > square x = x * x -- no explicit signature given > square :: (*) @a => a -> a -- signature inferred, because (*) is overloaded > rms = sqrt . square -- no explicit signature > rms :: sqrt @a => a -> a -- signature inferred > > Note the inferred signature for `rms` doesn't need `(*) @a` even though it's inferred from `square`. Because (*) is a supermethod of `sqrt`. `sqrt` might also have other supermethods, that amount to `Floating`. > > > ... a run-time error results. 
> > > > Does anyone know of a rationale for this choice, since it seems rather unhaskell-like. > > > If you allow default method implementations (in the class, as Cale points out), then I guess you have to allow instance decls that don't mention all the methods. I think there should at least be a warning if there's no default method. Also beware the default method might have a more specific signature, which means it can't be applied for some particular instance. > > Altogether, I'd say, the culprit is the strong bias in early Haskell to bunch methods together into classes. These days with Haskell's richer/more fine-tuned typeclass features: what do typeclasses do that can't be done more precisely at method level -- indeed that would _better_ be done at method level? > > > AntC > _______________________________________________ > Haskell-prime mailing list > Haskell-prime at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime From anthony_clayden at clear.net.nz Fri Oct 5 10:18:51 2018 From: anthony_clayden at clear.net.nz (Anthony Clayden) Date: Fri, 5 Oct 2018 23:18:51 +1300 Subject: A question about run-time errors when class members are undefined In-Reply-To: <1ACE5D80-CE74-46FB-8D99-7B661CD5C3A6@uu.nl> References: <1ACE5D80-CE74-46FB-8D99-7B661CD5C3A6@uu.nl> Message-ID: On Fri, 5 Oct 2018 at 9:00 PM, Jurriaan Hage wrote: > > We first go the slavish route, to provide a basis for changing things > later. > > So I am not looking for alternative ways of doing this, I am just > wondering whether there is a rationale for doing things this way. > The document does not give one. > The only explanation I can think of is that there might be default implementations of the methods -- very likely defined in terms of other methods in the class. (Such as `x /= y` defaulting to `not (x == y)`, and `x == y` defaulting to `not (x /= y)`.) Then it's a nuisance to have to say 'just use the default'. But I agree GHC should cope better than a run-time exception.
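[Editorial sketch: concretely, this is what the report's rule permits, as GHC behaves today; the module, class, and type names below are invented for illustration:]

```haskell
module MissingMethod where

import Control.Exception (SomeException, evaluate, try)

-- No defaults: under H2010 Sec. 4.3.2 an instance may still omit methods.
class Describe a where
  describe :: a -> String
  name     :: a -> String

data T = T

instance Describe T where
  describe _ = "a T"
  -- `name` deliberately omitted: accepted, GHC merely warns
  -- (-Wmissing-methods, on by default).

-- Compiles fine; forcing it throws an exception at run time
-- (GHC reports roughly "No instance nor default method for
-- class operation name").
boom :: String
boom = name T

-- Observe the failure safely: True iff forcing boom threw.
tryBoom :: IO Bool
tryBoom = do
  r <- try (evaluate (length boom)) :: IO (Either SomeException Int)
  return (either (const True) (const False) r)
```

So the omission is at least flagged at compile time; it is only the *use* of the missing method that fails, and only at run time.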
> And now I hear that records suffer from the same issue (thanks Cale).

I'm not perturbed or surprised by that. Consider: the assignments to the `zn` all have the same effect here:

    data D = MkD {x :: Int, y :: Bool}

    z1 = MkD{ x = 5 }                  -- y not mentioned, so set undefined
    z2 = MkD{ x = 5, y = undefined }
    z3 = MkD 5 undefined

> We had not run into this yet, because right now Helium does not have ‘em.

Haskell records were embarrassingly bad in 1998. No change or improvement in Haskell 2010. Some minor easing in recent years with GHC extensions -- I'd call that lipstick on a pig. If you've not implemented 'em yet, I just plain wouldn't. Ever. Support Lenses or any of the 50 gazillion proposals. Even Hugs' TRex is better (throws a type error at compile time if you omit a field).

> Both sound fishy to me and if nobody can make a case for having things this way
> in the first place, I wonder why it's like that.

There's a huge volume of minor inconsistencies and annoyances in GHC. I guess we hardly notice because we get used to them (or we each use a subset of features). A lot can be explained by the shackle of backwards compatibility: every new extension must use distinct syntax, so that people who don't want it/aren't aware of it don't run into surprises. For example, there's now annoyingly similar-but-different semantics for H98 data, existential fields, constrained fields, GADTs, data families/instances, view patterns, pattern synonyms. I can't help but feel they should all get unified into a single semantics; then those differing syntactic forms be treated as shorthands/variations on a theme.

> The only one I might consider at this time is GADTs,

I do find the (~) type equality constraints from GADTs/Type Families very pleasing and intuitive. You might be able to implement that without all the other paraphernalia.

AntC

> > On 4Oct, 2018, at 03:55, Anthony Clayden > wrote: > > > > > We are adding classes and instances to Helium.
> > > We wondered about the aspect that it is allowed to have a class > instance > > > of which not all fields have a piece of code/value associated with > them, ... > > > > I have a suggestion for that. But first let me understand where you're > going with Helium. Are you aiming to slavishly reproduce Haskell's > classes/instances, or is this a chance for a rethink? > > > > Will you want to include associated types and associated datatypes in > the classes? Note those are just syntactic sugar for top-level type > families and data families. It does aid readability to put them within the > class. > > > > I would certainly rethink the current grouping of methods into classes. > Number purists have long wanted to split class Num into Additive vs > Multiplicative. (Additive would be a superclass of Multiplicative.) For the > Naturals perhaps we want Presburger arithmetic then Additive just contains > (+), with `negate` certainly in a different class, perhaps (-) subtract > also in a dedicated class. Also there's people wanting Monads with just > `bind` not `return`. But restructuring the Prelude classes/methods is just > too hard with all that legacy code. Even though you should be able to do: > > > > class (Additive a, Subtractive a, Negative a, Multiplicative a, Divisive > a) => Num a > > > > Note there's a lot of classes with a single method, and that seems to be > an increasing trend. Historically it wasn't so easy in Haskell to do that > superclass constraints business; if it had been perhaps there would be more > classes with a single method. 
Then there's some disadvantages to classes > holding multiple methods: > > * the need to provide an overloading for every method, even though it > may not make sense > > (or suffer a run-time error, as you say) > > * the inability to 'fine tune' methods for a specific datatype [**] > > * an internal compiler/object code cost of passing a group of methods in > a dictionary as tuple > > (as apposed to directly selecting a single method) > > > > [**] Nats vs Integrals vs Fractionals for `Num`; and (this will be > controversial, but ...) Some people want to/some languages do use (+) for > concatenating Strings/lists. But the other methods in `Num` don't make any > sense. > > > > If all your classes have a single method, the class name would seem to > be superfluous, and the class/instance decl syntax seems too verbose. > > > > So here's a suggestion. I'll need to illustrate with some definite > syntax, but there's nothing necessary about it. (I'll borrow the Explicit > Type Application `@`.) To give an instance overloading for method `show` or > (==) > > > > show @Int = primShowInt -- in effect pattern > matching on the type > > (==) @Int = primEqInt -- so see showList below > > That is: I'm giving an overloading for those methods on type `Int`. How > do I declare those methods are overloadable? In their signature: > > > > show @a :: a -> String -- compare show :: Show a => > a -> String > > (==) @a :: a -> a -> Bool > > Non-overladable functions don't have `@a` to the left of `::`. > > How do I show that a class has a superclass constraint? That is: a > method has a supermethod constraint, we'll still use `=>`: > > > > show @a :: showsPrec @a => a -> String -- supermethod constraint > > show @[a] :: show a => [a] -> String -- instance decl, because > not bare a, with constraint => > > show @[a] xss = showList xss > > (*) @a :: (+) @a => a -> a -> a > > > > Is this idea completely off the wall? 
Take a look at Wadler's original > 1988 memo introducing what became type classes. > > > http://homepages.inf.ed.ac.uk/wadler/papers/class-letter/class-letter.txt > > > > It reviews several possible designs, but not all those possibilities > made it into his paper (with Stephen Blott) later in 1988/January 1989. In > particular look at Section 1's 'Simple overloading'. It's what I'm > suggesting above (modulo a bit of syntax). At the end of Section 1, Wadler > rejects this design because of "potential blow-ups". But he should have > pushed the idea a bit further. Perhaps he was scared to allow > function/method names into type signatures? (I've already sneaked that in > above with constraints.) These days Haskell is getting more relaxed about > namespaces: the type `@`pplication exactly allows type names appearing in > terms. So to counter his example, the programmer writes: > > > > square x = x * x -- no explicit signature > given > > square :: (*) @a => a -> a -- signature inferred, > because (*) is overloaded > > rms = sqrt . square -- no explicit signature > > rms :: sqrt @a => a -> a -- signature inferred > > > > Note the inferred signature for `rms` doesn't need `(*) @a` even though > it's inferred from `square`. Because (*) is a supermethod of `sqrt`. `sqrt` > might also have other supermethods, that amount to `Floating`. > > > > > ... a run-time error results. > > > > > > Does anyone know of a rationale for this choice, since it seems rather > unhaskell-like. > > > > > > If you allow default method implementations (in the class, as Cale > points out), then I guess you have to allow instance decls that don't > mention all the methods. I think there should at least be a warning if > there's no default method. Also beware the default method might have a more > specific signature, which means it can't be applied for some particular > instance. > > > > Altogether, I'd say, the culprit is the strong bias in early Haskell to > bunch methods together into classes. 
These days with Haskell's richer/more > fine-tuned typeclass features: what do typeclasses do that can't be done > more precisely at method level -- indeed that would _better_ be done at > method level? > > > > > > AntC > > _______________________________________________ > > Haskell-prime mailing list > > Haskell-prime at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From blamario at ciktel.net Fri Oct 5 12:20:35 2018 From: blamario at ciktel.net (=?UTF-8?Q?Mario_Bla=c5=beevi=c4=87?=) Date: Fri, 5 Oct 2018 08:20:35 -0400 Subject: Quo vadis? In-Reply-To: References: Message-ID: <637a88e8-3085-3208-791e-55e3cd50d03b@ciktel.net> On 2018-10-04 09:41 PM, Anthony Clayden wrote: > > There was no Haskell 2020 meeting this year at ICFP. Sadly, interest > seems to have waned here... > Yes that is sad. So either Haskell 2020 won't happen, or it'll be only > minor tweaks over H2010, as that was over H98. The former seems much more likely, judging by the pace so far.     I hereby propose we formally disband the present Haskell 2020 committee. Our performance has been so dismal that I feel this is the only course of action that gives Haskell 2020 any chance of fruition. A new committee could then be formed with some more dedicated membership. From Henrik.Nilsson at nottingham.ac.uk Fri Oct 5 13:10:26 2018 From: Henrik.Nilsson at nottingham.ac.uk (Henrik Nilsson) Date: Fri, 05 Oct 2018 14:10:26 +0100 Subject: Quo vadis? In-Reply-To: <637a88e8-3085-3208-791e-55e3cd50d03b@ciktel.net> References: <637a88e8-3085-3208-791e-55e3cd50d03b@ciktel.net> Message-ID: <5BB762C2.5070604@exmail.nottingham.ac.uk> Hi, On 10/05/2018 01:20 PM, Mario Blažević wrote: > I hereby propose we formally disband the present Haskell 2020 > committee. Our performance has been so dismal It has. 
And I should apologise in particular: I've just had far less time than I thought over the past year for a variety of reasons. > that I feel this is the > only course of action that gives Haskell 2020 any chance of fruition. A > new committee could then be formed with some more dedicated membership. I'm less convinced about that, though. I believe those who signed up for H2020 actually are people who believe in the value of an updated standard and has core expertise to make it happen. I can't see how giving up and forming a new group would speed things up or even increase the chance of success. Instead, what about focusing on identifying a couple of things that absolutely would have to be in H2020 to make a new standard worthwhile, like multi-parameter type classes, possibly GADTs, then figure out what else is needed to support that (like what Anthony Clayden sketched), and with that as a basis, find out exactly what technical problems, if any, are hindering progress? If this could be neatly summarized, then we'd actually be in a position to make some progress. /Henrik This message and any attachment are intended solely for the addressee and may contain confidential information. If you have received this message in error, please contact the sender and delete the email and attachment. Any views or opinions expressed by the author of this email do not necessarily reflect the views of the University of Nottingham. Email communications with the University of Nottingham may be monitored where permitted by law. From blamario at ciktel.net Fri Oct 5 16:47:26 2018 From: blamario at ciktel.net (=?UTF-8?Q?Mario_Bla=c5=beevi=c4=87?=) Date: Fri, 5 Oct 2018 12:47:26 -0400 Subject: Quo vadis? 
In-Reply-To: <5BB762C2.5070604@exmail.nottingham.ac.uk> References: <637a88e8-3085-3208-791e-55e3cd50d03b@ciktel.net> <5BB762C2.5070604@exmail.nottingham.ac.uk> Message-ID: On 2018-10-05 09:10 AM, Henrik Nilsson wrote: > Hi, > > On 10/05/2018 01:20 PM, Mario Blažević wrote: >>      I hereby propose we formally disband the present Haskell 2020 >> committee. Our performance has been so dismal > > It has. > > And I should apologise in particular: I've just had far less time than > I thought over the past year for a variety of reasons. > >> that I feel this is the >> only course of action that gives Haskell 2020 any chance of fruition. A >> new committee could then be formed with some more dedicated membership. > > I'm less convinced about that, though. I believe those who signed up > for H2020 actually are people who believe in the value of an updated > standard and has core expertise to make it happen.     Regarding the beliefs, if we really represent the most zealous group of Haskell enthusiasts, I have to say the community is in deep trouble. I have no evidence, but I can only hope you're wrong.     As for the expertise, my impression is that *everybody* who self-nominated for the committee got accepted. My own self-nomination e-mail [1] explicitly said that > The main reason I'm applying is because I'm afraid that the commitee > might disband like the previous one. If there are enough members > already, feel free to ignore my nomination. Yet I'm in. This was not a high bar to clear. > I can't see how giving up and forming a new group would speed things > up or even > increase the chance of success.     I was kinda hoping for a Simon ex machina, where a few universally-accepted members of the community hand-pick a new committee. Alternatively, we could come up with some stricter criteria for the next committee before we disband but that assumes we can even get a quorum.     
Lest I'm suspected of some Machiavellian plot, let me be clear that I refuse to be a part of the next committee, if my proposal should be accepted. Honestly I feel that all members of the present committee with any sense of shame should recuse themselves as well, but that's not up to me. > Instead, what about focusing on identifying a couple of things that > absolutely would have to be in H2020 to make a new standard > worthwhile, like multi-parameter type classes, possibly GADTs, > then figure out what else is needed to support that (like what > Anthony Clayden sketched), and with that as a basis, find out > exactly what technical problems, if any, are hindering progress? > > If this could be neatly summarized, then we'd actually be in a position > to make some progress.     That is much the plan we agreed on over a year ago during ICFP 2018. The activity since then is plain to see. [1] http://mail.haskell.org/pipermail/haskell-prime/2015-September/003939.html From simonpj at microsoft.com Fri Oct 5 17:05:32 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 5 Oct 2018 17:05:32 +0000 Subject: Quo vadis? In-Reply-To: References: <637a88e8-3085-3208-791e-55e3cd50d03b@ciktel.net> <5BB762C2.5070604@exmail.nottingham.ac.uk> Message-ID: I think the difficulty has always been in finding enough people who are * Well-informed and well-qualified * Willing to spend the time to standardise language features GHC does not help the situation: it's a de-facto standard, which reduces the incentives to spend time in standardisation. I don’t think we should blame anyone for not wanting to invest this time -- no shame here. It is a very significant commitment, as I know from editing the Haskell 98 report and the incentives are weak. Because of that, I am not very optimistic about finding such a group -- we have been abortively trying for several years. 
If we want to change that, the first thing is to build a case that greater standardisation is not just an "abstract good" that we all subscribe to, but something whose lack is holding us back. Simon | -----Original Message----- | From: Haskell-prime On Behalf Of | Mario Blaževic | Sent: 05 October 2018 17:47 | To: haskell-prime at haskell.org | Subject: Re: Quo vadis? | | On 2018-10-05 09:10 AM, Henrik Nilsson wrote: | > Hi, | > | > On 10/05/2018 01:20 PM, Mario Blažević wrote: | >>      I hereby propose we formally disband the present Haskell 2020 | >> committee. Our performance has been so dismal | > | > It has. | > | > And I should apologise in particular: I've just had far less time than | > I thought over the past year for a variety of reasons. | > | >> that I feel this is the | >> only course of action that gives Haskell 2020 any chance of fruition. | >> A new committee could then be formed with some more dedicated | membership. | > | > I'm less convinced about that, though. I believe those who signed up | > for H2020 actually are people who believe in the value of an updated | > standard and has core expertise to make it happen. | |     Regarding the beliefs, if we really represent the most zealous group | of Haskell enthusiasts, I have to say the community is in deep trouble. I | have no evidence, but I can only hope you're wrong. | |     As for the expertise, my impression is that *everybody* who self- | nominated for the committee got accepted. My own self-nomination e-mail | [1] explicitly said that | | | > The main reason I'm applying is because I'm afraid that the commitee | > might disband like the previous one. If there are enough members | > already, feel free to ignore my nomination. | | Yet I'm in. This was not a high bar to clear. | | | > I can't see how giving up and forming a new group would speed things | > up or even increase the chance of success. 
| |     I was kinda hoping for a Simon ex machina, where a few universally- | accepted members of the community hand-pick a new committee. | Alternatively, we could come up with some stricter criteria for the next | committee before we disband but that assumes we can even get a quorum. | |     Lest I'm suspected of some Machiavellian plot, let me be clear that | I refuse to be a part of the next committee, if my proposal should be | accepted. Honestly I feel that all members of the present committee with | any sense of shame should recuse themselves as well, but that's not up to | me. | | | > Instead, what about focusing on identifying a couple of things that | > absolutely would have to be in H2020 to make a new standard | > worthwhile, like multi-parameter type classes, possibly GADTs, then | > figure out what else is needed to support that (like what Anthony | > Clayden sketched), and with that as a basis, find out exactly what | > technical problems, if any, are hindering progress? | > | > If this could be neatly summarized, then we'd actually be in a | > position to make some progress. | |     That is much the plan we agreed on over a year ago during ICFP 2018. | The activity since then is plain to see. | | | [1] | http://mail.haskell.org/pipermail/haskell-prime/2015- | September/003939.html | | _______________________________________________ | Haskell-prime mailing list | Haskell-prime at haskell.org | http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime From petr.mvd at gmail.com Fri Oct 5 20:31:40 2018 From: petr.mvd at gmail.com (Petr Pudlák) Date: Fri, 5 Oct 2018 22:31:40 +0200 Subject: A question about run-time errors when class members are undefined In-Reply-To: References: Message-ID: Hi everyone, IIRC one of the arguments against having many separate classes is that a class is not just a set of methods, it's also the relations between them, such as the important laws between `return` and `>>=`. 
And then for example a class with just `return` doesn't give any information about what `return x` means or what should be its properties. That said, one of the really painful points of Haskell is that refactoring a hierarchy of type-classes means breaking all the code that implements them. This was also one of the main reasons why making Applicative a superclass of Monad took so long. It'd be much nicer to design type-classes in such a way that an implementation doesn't have to really care about the exact hierarchy. The Go language takes a very simple view on this: A type implements an interface if all the methods are implemented, without having to explicitly specify this intent [1]. This looks very nice and clean indeed. But the drawback is that this further decouples type-classes (interfaces) from their laws (like monad laws, monoid laws etc.). For example, in Haskell we could have class (Return m, Bind m) => Monad m where without any methods specified. But instances of `Monad` should be only such types for which `return` and `>>=` satisfy the monad laws. And this would distinguish them from types that have both `Return` and `Bind` instances, but don't satisfy the laws. Unfortunately I'm not sure if there is a good solution for achieving both these directions. [1] https://tour.golang.org/methods/10 Cheers, Petr čt 4. 10. 2018 v 3:56 odesílatel Anthony Clayden < anthony_clayden at clear.net.nz> napsal: > > We are adding classes and instances to Helium. > > > We wondered about the aspect that it is allowed to have a class instance > > > of which not all fields have a piece of code/value associated with them, ... > > > I have a suggestion for that. But first let me understand where you're going with Helium. Are you aiming to slavishly reproduce Haskell's classes/instances, or is this a chance for a rethink? > > > Will you want to include associated types and associated datatypes in the classes? 
Note those are just syntactic sugar for top-level type families and data families. It does aid readability to put them within the class. > > > I would certainly rethink the current grouping of methods into classes. Number purists have long wanted to split class Num into Additive vs Multiplicative. (Additive would be a superclass of Multiplicative.) For the Naturals perhaps we want Presburger arithmetic then Additive just contains (+), with `negate` certainly in a different class, perhaps (-) subtract also in a dedicated class. Also there's people wanting Monads with just `bind` not `return`. But restructuring the Prelude classes/methods is just too hard with all that legacy code. Even though you should be able to do: > > > class (Additive a, Subtractive a, Negative a, Multiplicative a, Divisive a) => Num a > > > Note there's a lot of classes with a single method, and that seems to be an increasing trend. Historically it wasn't so easy in Haskell to do that superclass constraints business; if it had been perhaps there would be more classes with a single method. Then there's some disadvantages to classes holding multiple methods: > > * the need to provide an overloading for every method, even though it may not make sense > > (or suffer a run-time error, as you say) > > * the inability to 'fine tune' methods for a specific datatype [**] > > * an internal compiler/object code cost of passing a group of methods in a dictionary as tuple > > (as apposed to directly selecting a single method) > > > [**] Nats vs Integrals vs Fractionals for `Num`; and (this will be controversial, but ...) Some people want to/some languages do use (+) for concatenating Strings/lists. But the other methods in `Num` don't make any sense. > > > If all your classes have a single method, the class name would seem to be superfluous, and the class/instance decl syntax seems too verbose. > > > So here's a suggestion. 
I'll need to illustrate with some definite syntax, but there's nothing necessary about it. (I'll borrow the Explicit Type Application `@`.) To give an instance overloading for method `show` or (==) > > > show @Int = primShowInt -- in effect pattern matching on the type > > (==) @Int = primEqInt -- so see showList below > > That is: I'm giving an overloading for those methods on type `Int`. How do I declare those methods are overloadable? In their signature: > > > show @a :: a -> String -- compare show :: Show a => a -> String > > (==) @a :: a -> a -> Bool > > Non-overladable functions don't have `@a` to the left of `::`. > > How do I show that a class has a superclass constraint? That is: a method has a supermethod constraint, we'll still use `=>`: > > > show @a :: showsPrec @a => a -> String -- supermethod constraint > > show @[a] :: show a => [a] -> String -- instance decl, because not bare a, with constraint => > > show @[a] xss = showList xss > > (*) @a :: (+) @a => a -> a -> a > > > Is this idea completely off the wall? Take a look at Wadler's original 1988 memo introducing what became type classes. > http://homepages.inf.ed.ac.uk/wadler/papers/class-letter/class-letter.txt > > > It reviews several possible designs, but not all those possibilities made it into his paper (with Stephen Blott) later in 1988/January 1989. In particular look at Section 1's 'Simple overloading'. It's what I'm suggesting above (modulo a bit of syntax). At the end of Section 1, Wadler rejects this design because of "potential blow-ups". But he should have pushed the idea a bit further. Perhaps he was scared to allow function/method names into type signatures? (I've already sneaked that in above with constraints.) These days Haskell is getting more relaxed about namespaces: the type `@`pplication exactly allows type names appearing in terms. 
So to counter his example, the programmer writes: > > > square x = x * x -- no explicit signature given > > square :: (*) @a => a -> a -- signature inferred, because (*) is overloaded > > rms = sqrt . square -- no explicit signature > > rms :: sqrt @a => a -> a -- signature inferred > > > Note the inferred signature for `rms` doesn't need `(*) @a` even though it's inferred from `square`. Because (*) is a supermethod of `sqrt`. `sqrt` might also have other supermethods, that amount to `Floating`. > > > > ... a run-time error results. > > > > Does anyone know of a rationale for this choice, since it seems rather unhaskell-like. > > > If you allow default method implementations (in the class, as Cale points > out), then I guess you have to allow instance decls that don't mention all > the methods. I think there should at least be a warning if there's no > default method. Also beware the default method might have a more specific > signature, which means it can't be applied for some particular instance. > > Altogether, I'd say, the culprit is the strong bias in early Haskell to > bunch methods together into classes. These days with Haskell's richer/more > fine-tuned typeclass features: what do typeclasses do that can't be done > more precisely at method level -- indeed that would _better_ be done at > method level? > > > AntC > _______________________________________________ > Haskell-prime mailing list > Haskell-prime at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime > -------------- next part -------------- An HTML attachment was scrubbed... 
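[Editorial note: Petr's sketch of a methodless `Monad` built from `Return` and `Bind` superclasses already type-checks in today's Haskell. A minimal self-contained version follows; the names `Return`, `Bind`, `Monad'`, `ret` and `bind` are invented for this sketch (not any library's API), and the laws themselves remain an unchecked promise:]

```haskell
-- One single-purpose class per method; Monad' adds no methods of its own.
class Return f where
  ret :: a -> f a

class Bind m where
  bind :: m a -> (a -> m b) -> m b

-- Writing an instance of this methodless class is the programmer's
-- explicit (but unverifiable) claim that ret and bind jointly
-- satisfy the monad laws.
class (Return m, Bind m) => Monad' m

instance Return Maybe where
  ret = Just

instance Bind Maybe where
  bind Nothing  _ = Nothing
  bind (Just x) f = f x

instance Monad' Maybe
```

[A type with lawless `Return` and `Bind` instances would simply not declare the `Monad'` instance, which is exactly the distinction Petr asks for.]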
URL: From anthony_clayden at clear.net.nz Sat Oct 6 02:33:42 2018 From: anthony_clayden at clear.net.nz (Anthony Clayden) Date: Sat, 6 Oct 2018 15:33:42 +1300 Subject: A question about run-time errors when class members are undefined In-Reply-To: <1ACE5D80-CE74-46FB-8D99-7B661CD5C3A6@uu.nl> References: <1ACE5D80-CE74-46FB-8D99-7B661CD5C3A6@uu.nl> Message-ID: On Fri, 5 Oct 2018 at 9:00 PM, Jurriaan Hage wrote: > > We first go the slavish route, to provide a basis for changing things > later. Ah. That comment seemed strange, but I've just read up on Helium: you're aiming to provide a beginners' environment for Haskell. Then without type classes, I'm wondering what Helium is doing now for arithmetic or equality-testing or show or read? Do you mean you've somehow 'faked' the Prelude classes, but don't yet allow programmers to declare their own classes/instances? Being able to declare your own datatypes without writing instances for them seems particularly awkward. If Helium essentially supports less than H98, I'm wondering why you didn't start with Hugs, and work on it giving better error messages? I'm finding Hugs very easy to hack; the messages are particularly easy to work with. (OK it's written in C++, but the 'interesting' parts are just function calls, so the host language seems irrelevant.) AntC -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From anthony_clayden at clear.net.nz Sat Oct 6 03:18:42 2018 From: anthony_clayden at clear.net.nz (Anthony Clayden) Date: Sat, 6 Oct 2018 16:18:42 +1300 Subject: A question about run-time errors when class members are undefined In-Reply-To: References: Message-ID: On Sat, 6 Oct 2018 at 9:47 AM, Petr Pudlák wrote: > > IIRC one of the arguments against having many separate classes is that a > class is not a just set of methods, it's also the relations between them, > Hi Petr, I was talking about splitting out Haskell's current class hierarchy as a step towards doing away with classes altogether. If your language insists on methods being held in classes, that's just tedious bureaucracy to invent class names. The relations between classes (including between single-method classes) can be captured through superclass constraints. For example, in the Haskell 2010 report class (Eq a, Show a) => Num a where ... such as the important laws between `return` and `>>=`. And then for example > a class with just `return` doesn't give any information what `return x` > means or what should be its properties. > Then make Bind a superclass constraint on `return` (or vice versa, or both ways). Just as the laws for Num's methods are defined in terms of equality x + negate x == fromInteger 0 -- for example Talking about laws is a red herring: you can't declare the laws/the compiler doesn't enforce them or rely on them in any way. Indeed the Lensaholics seem to take pleasure in building lenses that break the (van Laarhoven) laws. > That said, one of really painful points of Haskell is that refactoring a > hierarchy of type-classes means breaking all the code that implements them. > This was also one of the main reasons why reason making Applicative a > superclass of Monad took so long. It'd be much nicer to design type-classes > in such a way that an implementation doesn't have to really care about the > exact hierarchy. > Yes that's what I was saying. 
Unfortunately for Haskell's Num class, I think it's just too hard. So a new language has an opportunity to avoid that. If OTOH Helium wants to slavishly follow Haskell, I'm wondering what is the point of Helium. With Applicative, IIRC, refactoring had to wait until we got Constraint kinds and type families that could produce them. Would Helium want to put all that into a language aimed at beginners? For example, in Haskell we could have > > class (Return m, Bind m) => Monad m where > > without any methods specified. But instances of `Monad` should be only > such types for which `return` and `>>=` satisfy the monad laws. > First: what does "satisfy the xxx laws" mean? The Haskell report and GHC's Prelude documentation state a bunch of laws; and it's a good discipline to write down laws if you're creating a class; but it's only documentation. Arguably IO, the most commonly used Monad, breaks the Monad laws in rather serious ways because it imposes sequence of execution; and it would be unfit for purpose if it were pure/lazy function application. Then: what do you think a language could do to detect if some instance satisfies the laws? (Even supposing you could declare them.) And this would distinguish them from types that have both `Return` and > `Bind` instances, but don't satisfy the laws. > You could have distinct classes/distinct operators. Oh, but then `do` notation would break. > Unfortunately I'm not sure if there is a good solution for achieving both > these directions. > I don't think there's any solution for achieving "satisfy the xxx laws". AntC > čt 4. 10. 2018 v 3:56 odesílatel Anthony Clayden < > anthony_clayden at clear.net.nz> napsal: > >> > We are adding classes and instances to Helium. >> >> > We wondered about the aspect that it is allowed to have a class instance >> >> > of which not all fields have a piece of code/value associated with them, ... >> >> >> I have a suggestion for that. But first let me understand where you're going with Helium. 
Are you aiming to slavishly reproduce Haskell's classes/instances, or is this a chance for a rethink? >> >> >> Will you want to include associated types and associated datatypes in the classes? Note those are just syntactic sugar for top-level type families and data families. It does aid readability to put them within the class. >> >> >> I would certainly rethink the current grouping of methods into classes. Number purists have long wanted to split class Num into Additive vs Multiplicative. (Additive would be a superclass of Multiplicative.) For the Naturals perhaps we want Presburger arithmetic then Additive just contains (+), with `negate` certainly in a different class, perhaps (-) subtract also in a dedicated class. Also there's people wanting Monads with just `bind` not `return`. But restructuring the Prelude classes/methods is just too hard with all that legacy code. Even though you should be able to do: >> >> >> class (Additive a, Subtractive a, Negative a, Multiplicative a, Divisive a) => Num a >> >> >> Note there's a lot of classes with a single method, and that seems to be an increasing trend. Historically it wasn't so easy in Haskell to do that superclass constraints business; if it had been perhaps there would be more classes with a single method. Then there's some disadvantages to classes holding multiple methods: >> >> * the need to provide an overloading for every method, even though it may not make sense >> >> (or suffer a run-time error, as you say) >> >> * the inability to 'fine tune' methods for a specific datatype [**] >> >> * an internal compiler/object code cost of passing a group of methods in a dictionary as tuple >> >> (as apposed to directly selecting a single method) >> >> >> [**] Nats vs Integrals vs Fractionals for `Num`; and (this will be controversial, but ...) Some people want to/some languages do use (+) for concatenating Strings/lists. But the other methods in `Num` don't make any sense. 
>> >> >> If all your classes have a single method, the class name would seem to be superfluous, and the class/instance decl syntax seems too verbose. >> >> >> So here's a suggestion. I'll need to illustrate with some definite syntax, but there's nothing necessary about it. (I'll borrow the Explicit Type Application `@`.) To give an instance overloading for method `show` or (==) >> >> >> show @Int = primShowInt -- in effect pattern matching on the type >> >> (==) @Int = primEqInt -- so see showList below >> >> That is: I'm giving an overloading for those methods on type `Int`. How do I declare those methods are overloadable? In their signature: >> >> >> show @a :: a -> String -- compare show :: Show a => a -> String >> >> (==) @a :: a -> a -> Bool >> >> Non-overladable functions don't have `@a` to the left of `::`. >> >> How do I show that a class has a superclass constraint? That is: a method has a supermethod constraint, we'll still use `=>`: >> >> >> show @a :: showsPrec @a => a -> String -- supermethod constraint >> >> show @[a] :: show a => [a] -> String -- instance decl, because not bare a, with constraint => >> >> show @[a] xss = showList xss >> >> (*) @a :: (+) @a => a -> a -> a >> >> >> Is this idea completely off the wall? Take a look at Wadler's original 1988 memo introducing what became type classes. >> http://homepages.inf.ed.ac.uk/wadler/papers/class-letter/class-letter.txt >> >> >> It reviews several possible designs, but not all those possibilities made it into his paper (with Stephen Blott) later in 1988/January 1989. In particular look at Section 1's 'Simple overloading'. It's what I'm suggesting above (modulo a bit of syntax). At the end of Section 1, Wadler rejects this design because of "potential blow-ups". But he should have pushed the idea a bit further. Perhaps he was scared to allow function/method names into type signatures? (I've already sneaked that in above with constraints.) 
These days Haskell is getting more relaxed about namespaces: the type `@`pplication exactly allows type names appearing in terms. So to counter his example, the programmer writes: >> >> >> square x = x * x -- no explicit signature given >> >> square :: (*) @a => a -> a -- signature inferred, because (*) is overloaded >> >> rms = sqrt . square -- no explicit signature >> >> rms :: sqrt @a => a -> a -- signature inferred >> >> >> Note the inferred signature for `rms` doesn't need `(*) @a` even though it's inferred from `square`. Because (*) is a supermethod of `sqrt`. `sqrt` might also have other supermethods, that amount to `Floating`. >> >> >> > ... a run-time error results. >> > >> > Does anyone know of a rationale for this choice, since it seems rather unhaskell-like. >> >> >> If you allow default method implementations (in the class, as Cale points >> out), then I guess you have to allow instance decls that don't mention all >> the methods. I think there should at least be a warning if there's no >> default method. Also beware the default method might have a more specific >> signature, which means it can't be applied for some particular instance. >> >> Altogether, I'd say, the culprit is the strong bias in early Haskell to >> bunch methods together into classes. These days with Haskell's richer/more >> fine-tuned typeclass features: what do typeclasses do that can't be done >> more precisely at method level -- indeed that would _better_ be done at >> method level? >> >> >> AntC >> _______________________________________________ >> Haskell-prime mailing list >> Haskell-prime at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime >> > -------------- next part -------------- An HTML attachment was scrubbed... 
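[Editorial note: the `show @Int` method-level overloading in the quoted proposal is not Haskell today, but GHC's TypeApplications extension combined with one single-method class per operation comes close. A rough approximation follows; the primed class names are invented for this sketch, and it reuses the Prelude's `Show`/`Eq` instances underneath rather than providing truly classless methods:]

```haskell
{-# LANGUAGE TypeApplications #-}

-- One single-method class per operation, so that visible type
-- application recovers the  show @Int  call style from the sketch.
class Show' a where
  show' :: a -> String

class Eq' a where
  eq :: a -> a -> Bool

instance Show' Int where
  show' = show          -- delegate to the Prelude's primitive

instance Eq' Int where
  eq = (==)

-- Call sites mirror the proposed syntax: the method is "pattern
-- matched on the type" via an explicit type application.
demo :: String
demo = show' @Int 42 ++ (if eq @Int 1 1 then " yes" else " no")
```

[The call sites match the sketch, though each single-method class still has to be named; fully anonymous per-method overloading would need the language change the proposal describes.]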
URL: From flippa at flippac.org Sat Oct 6 03:33:18 2018 From: flippa at flippac.org (Philippa Cowderoy) Date: Sat, 6 Oct 2018 04:33:18 +0100 Subject: A question about run-time errors when class members are undefined In-Reply-To: References: Message-ID: <6b90964e-2f8f-fb46-4860-da767bb98f27@flippac.org> You're implicitly arguing that no language should have support for declaring informal intentions. That's rather more controversial than you might think and it's worth separating out as a subject. The fact you cheerfully talk about making return and bind inherently related via superclass constraints is pretty suggestive. Away from monads, there are a lot of other uses for return-like behaviour that have a different (if often-related) set of laws. Which is exactly why many people want them to be completely separate superclasses of Monad. It's only when they're used to form a monad that those extra laws show up. Which no, Haskell can't enforce, but there's a big difference between "this breaks because seq in a partial language weirds things" and "this would be broken in a total setting too". What happens when I legitimately want both operations but a different set of laws, and don't want my stuff being passed to things that reasonably expect the monad laws to hold? Asking a researcher who's producing actual results "what's the point?" is more than a little inflammatory, too. Helium is not accountable to us. On 06/10/2018 04:18, Anthony Clayden wrote: > > On Sat, 6 Oct 2018 at 9:47 AM, Petr Pudlák > wrote: > > > IIRC one of the arguments against having many separate classes is > that a class is not a just set of methods, it's also the relations > between them, > > > Hi Petr, I was talking about splitting out Haskell's current class > hierarchy as a step towards doing away with classes altogether. If > your language insists on methods being held in classes, that's just > tedious bureacracy to invent class names. 
> > The relations between classes (including between single-method > classes) can be captured through superclass constraints. For example, > in the Haskell 2010 report > > class (Eq a, Show a) => Num a where ... > > such as the important laws between `return` and `>>=`. And then > for example a class with just `return` doesn't give any > information what `return x` means or what should be its properties. > > > Then make Bind a superclass constraint on `return` (or vice versa, or > both ways). > > Just as the laws for Num's methods are defined in terms of equality > > x + negate x == fromInteger 0          -- for example > > Talking about laws is a red herring: you can't declare the laws/the > compiler doesn't enforce them or rely on them in any way. Indeed the > Lensaholics seem to take pleasure in building lenses that break the > (van Laarhoven) laws. > > > > That said, one of really painful points of Haskell is that > refactoring a hierarchy of type-classes means breaking all the > code that implements them. This was also one of the main reasons > why reason making Applicative a superclass of Monad took so long. > It'd be much nicer to design type-classes in such a way that an > implementation doesn't have to really care about the exact hierarchy. > > > Yes that's what I was saying. Unfortunately for Haskell's Num class, I > think it's just too hard. So a new language has an opportunity to > avoid that. If OTOH Helium wants to slavishly follow Haskell, I'm > wondering what is the point of Helium. > > With Applicative, IIRC, refactoring had to wait until we got > Constraint kinds and type families that could produce them. Would > Helium want to put all that into a language aimed at beginners? > > >  For example, in Haskell we could have > > class (Return m, Bind m) => Monad m where > > without any methods specified. But instances of `Monad` should be > only such types for which `return` and `>>=` satisfy the monad laws. 
> > > First: what does "satisfy the xxx laws" mean? The Haskell report and > GHC's Prelude documentation state a bunch of laws; and it's a good > discipline to write down laws if you're creating a class; but it's > only documentation. Arguably IO, the most commonly used Monad, breaks > the Monad laws in rather serious ways because it imposes sequence of > execution; and it would be unfit for purpose if it were pure/lazy > function application. > > Then: what do you think a language could do to detect if some instance > satisfies the laws? (Even supposing you could declare them.) > > > And this would distinguish them from types that have both `Return` > and `Bind` instances, but don't satisfy the laws. > > > You could have distinct classes/distinct operators. Oh, but then `do` > dotation would break. > > > Unfortunately I'm not sure if there is a good solution for > achieving both these directions. > > > I don't think there's any solution for achieving "satisfy the xxx laws". > > > AntC > > > čt 4. 10. 2018 v 3:56 odesílatel Anthony Clayden > > napsal: > > > We are adding classes and instances to Helium. > > > We wondered about the aspect that it is allowed to have a class instance > > > of which not all fields have a piece of code/value associated with them, ... > > I have a suggestion for that. But first let me understand where you're going with Helium. Are you aiming to slavishly reproduce Haskell's classes/instances, or is this a chance for a rethink? > > Will you want to include associated types and associated datatypes in the classes? Note those are just syntactic sugar for top-level type families and data families. It does aid readability to put them within the class. > > I would certainly rethink the current grouping of methods into classes. Number purists have long wanted to split class Num into Additive vs Multiplicative. (Additive would be a superclass of Multiplicative.) 
For the Naturals perhaps we want Presburger arithmetic then Additive just contains (+), with `negate` certainly in a different class, perhaps (-) subtract also in a dedicated class. Also there's people wanting Monads with just `bind` not `return`. But restructuring the Prelude classes/methods is just too hard with all that legacy code. Even though you should be able to do: > > class (Additive a, Subtractive a, Negative a, Multiplicative a, Divisive a) => Num a > > Note there's a lot of classes with a single method, and that seems to be an increasing trend. Historically it wasn't so easy in Haskell to do that superclass constraints business; if it had been perhaps there would be more classes with a single method. Then there's some disadvantages to classes holding multiple methods: > > * the need to provide an overloading for every method, even though it may not make sense > > (or suffer a run-time error, as you say) > > * the inability to 'fine tune' methods for a specific datatype [**] > > * an internal compiler/object code cost of passing a group of methods in a dictionary as tuple > > (as apposed to directly selecting a single method) > > [**] Nats vs Integrals vs Fractionals for `Num`; and (this will be controversial, but ...) Some people want to/some languages do use (+) for concatenating Strings/lists. But the other methods in `Num` don't make any sense. > > If all your classes have a single method, the class name would seem to be superfluous, and the class/instance decl syntax seems too verbose. > > So here's a suggestion. I'll need to illustrate with some definite syntax, but there's nothing necessary about it. (I'll borrow the Explicit Type Application `@`.) To give an instance overloading for method `show` or (==) > > show @Int = primShowInt -- in effect pattern matching on the type > > (==) @Int = primEqInt -- so see showList below > > That is: I'm giving an overloading for those methods on type `Int`. How do I declare those methods are overloadable? 
In their signature: > > show @a :: a -> String -- compare show :: Show a => a -> String > > (==) @a :: a -> a -> Bool > > Non-overladable functions don't have `@a` to the left of `::`. > > How do I show that a class has a superclass constraint? That is: a method has a supermethod constraint, we'll still use `=>`: > > show @a :: showsPrec @a => a -> String -- supermethod constraint > > show @[a] :: show a => [a] -> String -- instance decl, because not bare a, with constraint => > > show @[a] xss = showList xss > > (*) @a :: (+) @a => a -> a -> a > > Is this idea completely off the wall? Take a look at Wadler's original 1988 memo introducing what became type classes. > http://homepages.inf.ed.ac.uk/wadler/papers/class-letter/class-letter.txt > > It reviews several possible designs, but not all those possibilities made it into his paper (with Stephen Blott) later in 1988/January 1989. In particular look at Section 1's 'Simple overloading'. It's what I'm suggesting above (modulo a bit of syntax). At the end of Section 1, Wadler rejects this design because of "potential blow-ups". But he should have pushed the idea a bit further. Perhaps he was scared to allow function/method names into type signatures? (I've already sneaked that in above with constraints.) These days Haskell is getting more relaxed about namespaces: the type `@`pplication exactly allows type names appearing in terms. So to counter his example, the programmer writes: > > square x = x * x -- no explicit signature given > > square :: (*) @a => a -> a -- signature inferred, because (*) is overloaded > > rms = sqrt . square -- no explicit signature > > rms :: sqrt @a => a -> a -- signature inferred > > Note the inferred signature for `rms` doesn't need `(*) @a` even though it's inferred from `square`. Because (*) is a supermethod of `sqrt`. `sqrt` might also have other supermethods, that amount to `Floating`. > > > ... a run-time error results. 
> > > > Does anyone know of a rationale for this choice, since it seems rather unhaskell-like. > > > If you allow default method implementations (in the class, as > Cale points out), then I guess you have to allow instance > decls that don't mention all the methods. I think there should > at least be a warning if there's no default method. Also > beware the default method might have a more specific > signature, which means it can't be applied for some particular > instance. > > Altogether, I'd say, the culprit is the strong bias in early > Haskell to bunch methods together into classes. These days > with Haskell's richer/more fine-tuned typeclass features: what > do typeclasses do that can't be done more precisely at method > level -- indeed that would _better_ be done at method level? > > > AntC > _______________________________________________ > Haskell-prime mailing list > Haskell-prime at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime > > > > _______________________________________________ > Haskell-prime mailing list > Haskell-prime at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Sat Oct 6 22:54:53 2018 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 6 Oct 2018 18:54:53 -0400 Subject: Quo vadis? In-Reply-To: References: <637a88e8-3085-3208-791e-55e3cd50d03b@ciktel.net> <5BB762C2.5070604@exmail.nottingham.ac.uk> Message-ID: agreed... I think there's still room for the current committee to succeed (though depending on ambitions it should maybe slide into being a 2022 standard?) I can't speak for other members, but I'm still hopeful about putting together some of the language improvements to the standard in time for 2020, life and other commitments (in addition to living) permitting. 
I do think that it's ultimately a social / communal activity, and humans are best motivated when that's in the clear. So the more progress some folks make, the more motivated other folks will be, one way or another! On Fri, Oct 5, 2018 at 1:05 PM Simon Peyton Jones via Haskell-prime < haskell-prime at haskell.org> wrote: > I think the difficulty has always been in finding enough people who are > > * Well-informed and well-qualified > * Willing to spend the time to standardise language features > > GHC does not help the situation: it's a de-facto standard, which reduces > the incentives to spend time in standardisation. > > I don’t think we should blame anyone for not wanting to invest this time > -- no shame here. It is a very significant commitment, as I know from > editing the Haskell 98 report and the incentives are weak. Because of > that, I am not very optimistic about finding such a group -- we have been > abortively trying for several years. > > If we want to change that, the first thing is to build a case that greater > standardisation is not just an "abstract good" that we all subscribe to, > but something whose lack is holding us back. > > Simon > > | -----Original Message----- > | From: Haskell-prime On Behalf Of > | Mario Blaževic > | Sent: 05 October 2018 17:47 > | To: haskell-prime at haskell.org > | Subject: Re: Quo vadis? > | > | On 2018-10-05 09:10 AM, Henrik Nilsson wrote: > | > Hi, > | > > | > On 10/05/2018 01:20 PM, Mario Blažević wrote: > | >> I hereby propose we formally disband the present Haskell 2020 > | >> committee. Our performance has been so dismal > | > > | > It has. > | > > | > And I should apologise in particular: I've just had far less time than > | > I thought over the past year for a variety of reasons. > | > > | >> that I feel this is the > | >> only course of action that gives Haskell 2020 any chance of fruition. > | >> A new committee could then be formed with some more dedicated > | membership.
> | > > | > I'm less convinced about that, though. I believe those who signed up > | > for H2020 actually are people who believe in the value of an updated > | > standard and has core expertise to make it happen. > | > | Regarding the beliefs, if we really represent the most zealous > group > | of Haskell enthusiasts, I have to say the community is in deep trouble. > I > | have no evidence, but I can only hope you're wrong. > | > | As for the expertise, my impression is that *everybody* who self- > | nominated for the committee got accepted. My own self-nomination e-mail > | [1] explicitly said that > | > | > | > The main reason I'm applying is because I'm afraid that the commitee > | > might disband like the previous one. If there are enough members > | > already, feel free to ignore my nomination. > | > | Yet I'm in. This was not a high bar to clear. > | > | > | > I can't see how giving up and forming a new group would speed things > | > up or even increase the chance of success. > | > | I was kinda hoping for a Simon ex machina, where a few universally- > | accepted members of the community hand-pick a new committee. > | Alternatively, we could come up with some stricter criteria for the next > | committee before we disband but that assumes we can even get a quorum. > | > | Lest I'm suspected of some Machiavellian plot, let me be clear that > | I refuse to be a part of the next committee, if my proposal should be > | accepted. Honestly I feel that all members of the present committee with > | any sense of shame should recuse themselves as well, but that's not up > to > | me. 
> | > Instead, what about focusing on identifying a couple of things that > | > absolutely would have to be in H2020 to make a new standard > | > worthwhile, like multi-parameter type classes, possibly GADTs, then > | > figure out what else is needed to support that (like what Anthony > | > Clayden sketched), and with that as a basis, find out exactly what > | > technical problems, if any, are hindering progress? > | > > | > If this could be neatly summarized, then we'd actually be in a > | > position to make some progress. > | > | That is much the plan we agreed on over a year ago during ICFP 2018. > | The activity since then is plain to see. > | > | [1] > | http://mail.haskell.org/pipermail/haskell-prime/2015-September/003939.html

From blamario at ciktel.net Mon Oct 8 01:51:58 2018 From: blamario at ciktel.net (Mario Blažević) Date: Sun, 7 Oct 2018 21:51:58 -0400 Subject: Quo vadis? In-Reply-To: References: <637a88e8-3085-3208-791e-55e3cd50d03b@ciktel.net> <5BB762C2.5070604@exmail.nottingham.ac.uk> Message-ID: On 2018-10-05 01:05 PM, Simon Peyton Jones wrote: > I think the difficulty has always been in finding enough people who are > > * Well-informed and well-qualified > * Willing to spend the time to standardise language features > > GHC does not help the situation: it's a de-facto standard, which reduces the incentives to spend time in standardisation. > > I don’t think we should blame anyone for not wanting to invest this time -- no shame here.
It is a very significant commitment, as I know from editing the Haskell 98 report and the incentives are weak. Because of that, I am not very optimistic about finding such a group -- we have been abortively trying for several years. That sounds like we're stuck with the committee we have. In that case, Simon, could you at least pull some strings to have the actual Haskell Report placed in the same repository? This is a basic precondition if we expect individual efforts to accomplish anything. The minimal steps to actually updating the Haskell Report are:

1. write an RFC (we have some already),
2. have it provisionally accepted (not entirely clear how - would "no negative votes in 4 weeks" count?),
3. add the modification to the Haskell Report to the RFC,
4. receive the final approval,
5. merge the RFC into the report.

Steps #3 and #5 depend on having the report in the same repository with the RFCs. This was agreed over a year ago:

https://mail.haskell.org/pipermail/haskell-prime/2017-September/004319.html
https://mail.haskell.org/pipermail/haskell-prime/2017-October/thread.html
https://mail.haskell.org/pipermail/haskell-prime/2017-November/thread.html
https://mail.haskell.org/pipermail/haskell-prime/2018-March/004356.html

> If we want to change that, the first thing is to build a case that greater standardisation is not just an "abstract good" that we all subscribe to, but something whose lack is holding us back.

Neither an abstract good nor a good abstraction are something Haskell has ever shied away from. I don't know if you're actually asking for a list of "concrete goods"? To start with, every GHC extension that's added to a standard means:

- one less item to type in the ubiquitous {-# LANGUAGE ScaryExtension #-} pragma,
- one less item to understand for beginners,
- one less item whose necessity must be justified to the team, and
- one less item of whose future stability the management needs to be convinced.

I could go on.
From flippa at flippac.org Mon Oct 8 03:32:16 2018 From: flippa at flippac.org (Philippa Cowderoy) Date: Mon, 8 Oct 2018 04:32:16 +0100 Subject: Quo vadis? In-Reply-To: References: <637a88e8-3085-3208-791e-55e3cd50d03b@ciktel.net> <5BB762C2.5070604@exmail.nottingham.ac.uk> Message-ID: <48c4c884-24bd-3247-c0ef-4827f1e4ee35@flippac.org> On 08/10/2018 02:51, Mario Blažević wrote: > > Neither an abstract good nor a good abstraction are something Haskell > has ever shied away from. I don't know if you're actually asking for a > list of "concrete goods"? To start with, every GHC extension that's > added to a standard means: > > - one less item to type in the ubiquitous {-# LANGUAGE ScaryExtension > #-} pragma, > - one less item to understand for beginners, > - one less item whose necessity must be justified to the team, and > - one less item of whose future stability the management needs to be > convinced. > I suspect we need to follow the lead of other languages here and accept that the LANGUAGE pragma is actually a necessity and a positive good for engineering in the presence of changing language versions. That would mean we should support more standardised pragmas in the vein of the existing ones in the report, and perhaps that GHC should give more information about the stability of extensions. Perhaps when an extension was first introduced and the GHC versions in which the last two changes more significant than "bug fix" happened? There might even be a need for versioning of extensions. I'd be remiss if I didn't suggest a candidate with a specific problem, a specific goal and a possible solution to its problem. 
So, a modest proposal:

- Standardise OverloadedStrings as an available-but-disabled feature
- Allow default statements for the IsString class without OverloadedStrings, using that type for all string literals
- At some future stage, we can use this to migrate away from [Char] as the default string literal type
- The Haskell2010 pragma and its successors can be used to ensure code written to standard doesn't suffer bit rot when migration happens

From flippa at flippac.org Mon Oct 8 03:34:32 2018 From: flippa at flippac.org (Philippa Cowderoy) Date: Mon, 8 Oct 2018 04:34:32 +0100 Subject: Quo vadis? In-Reply-To: References: <637a88e8-3085-3208-791e-55e3cd50d03b@ciktel.net> <5BB762C2.5070604@exmail.nottingham.ac.uk> Message-ID: <9719415d-6156-925f-27e5-9831228a33c8@flippac.org> On 05/10/2018 18:05, Simon Peyton Jones via Haskell-prime wrote: > If we want to change that, the first thing is to build a case that greater standardisation is not just an "abstract good" that we all subscribe to, but something whose lack is holding us back. To pick an example, I'm left wondering if we can achieve a minimal GADT specification that doesn't have to stay too stable in the presence of extensions. Changes in its behaviour would need documenting, though, and documenting the behaviour of inference is notoriously difficult at present. While I have some ideas about documenting inference, I remain as infamously low on energy as ever - I'm not up to trying it with Haskell2010, let alone GHC, and I wouldn't want to make a business case for someone else trying it yet! I think it's a problem that sooner or later standardised Haskell will need to address though: we're a long way past the "Hindley-Milner plus simple, well-behaved constraints that don't need annotations" approach that typeclasses helped push the limits of.

From gershomb at gmail.com Mon Oct 8 03:36:37 2018 From: gershomb at gmail.com (Gershom B) Date: Sun, 7 Oct 2018 23:36:37 -0400 Subject: Quo vadis?
In-Reply-To: References: <637a88e8-3085-3208-791e-55e3cd50d03b@ciktel.net> <5BB762C2.5070604@exmail.nottingham.ac.uk> Message-ID: Mario: as a non-committee member but interested observer, if you yourself wanted to proceed to put the report in the repo, what obstacles would stand in your way, and could we clear them out so you could take charge of that task? Cheers, Gershom On October 7, 2018 at 9:52:14 PM, Mario Blažević (blamario at ciktel.net) wrote: On 2018-10-05 01:05 PM, Simon Peyton Jones wrote: > I think the difficulty has always been in finding enough people who are > > * Well-informed and well-qualified > * Willing to spend the time to standardise language features > > GHC does not help the situation: it's a de-facto standard, which reduces the incentives to spend time in standardisation. > > I don’t think we should blame anyone for not wanting to invest this time -- no shame here. It is a very significant commitment, as I know from editing the Haskell 98 report and the incentives are weak. Because of that, I am not very optimistic about finding such a group -- we have been abortively trying for several years. That sounds like we're stuck with the committee we have. In that case, Simon, could you at least pull some strings to have the actual Haskell Report placed in the same repository? This is a basic precondition if we expect individual efforts to accomplish anything. The minimal steps to actually updating the Haskell Report are:

1. write an RFC (we have some already),
2. have it provisionally accepted (not entirely clear how - would "no negative votes in 4 weeks" count?),
3. add the modification to the Haskell Report to the RFC,
4. receive the final approval,
5. merge the RFC into the report.

Steps #3 and #5 depend on having the report in the same repository with the RFCs.
This was agreed over a year ago:

https://mail.haskell.org/pipermail/haskell-prime/2017-September/004319.html
https://mail.haskell.org/pipermail/haskell-prime/2017-October/thread.html
https://mail.haskell.org/pipermail/haskell-prime/2017-November/thread.html
https://mail.haskell.org/pipermail/haskell-prime/2018-March/004356.html

> If we want to change that, the first thing is to build a case that greater standardisation is not just an "abstract good" that we all subscribe to, but something whose lack is holding us back.

Neither an abstract good nor a good abstraction are something Haskell has ever shied away from. I don't know if you're actually asking for a list of "concrete goods"? To start with, every GHC extension that's added to a standard means:

- one less item to type in the ubiquitous {-# LANGUAGE ScaryExtension #-} pragma,
- one less item to understand for beginners,
- one less item whose necessity must be justified to the team, and
- one less item of whose future stability the management needs to be convinced.

I could go on.

From simonpj at microsoft.com Mon Oct 8 08:13:48 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 8 Oct 2018 08:13:48 +0000 Subject: Quo vadis? In-Reply-To: References: <637a88e8-3085-3208-791e-55e3cd50d03b@ciktel.net> <5BB762C2.5070604@exmail.nottingham.ac.uk> Message-ID: | That sounds like we're stuck with the committee we have. In that case, | Simon, could you at least pull some strings to have the actual Haskell | Report placed in the same repository? Sounds like a good plan. If the haskell-prime committee agreed to do this, and it's only a matter of doing it, then you just need someone with commit rights to the relevant repository.
I don't know who that is (it certainly isn't me), but if you make them a PR, and ping them by email, it would be easy for them to execute. Simon | -----Original Message----- | From: Mario Blažević | Sent: 08 October 2018 02:52 | To: Simon Peyton Jones ; haskell-prime at haskell.org | Subject: Re: Quo vadis? | | On 2018-10-05 01:05 PM, Simon Peyton Jones wrote: | > I think the difficulty has always been in finding enough people who | > are | > | > * Well-informed and well-qualified | > * Willing to spend the time to standardise language features | > | > GHC does not help the situation: it's a de-facto standard, which | reduces the incentives to spend time in standardisation. | > | > I don’t think we should blame anyone for not wanting to invest this | time -- no shame here. It is a very significant commitment, as I know | from editing the Haskell 98 report and the incentives are weak. Because | of that, I am not very optimistic about finding such a group -- we have | been abortively trying for several years. | | | That sounds like we're stuck with the committee we have. In that case, | Simon, could you at least pull some strings to have the actual Haskell | Report placed in the same repository? This is a basic precondition if we | expect individual efforts to accomplish anything. The minimal steps to | actually updating the Haskell Report are: | | 1. write an RFC (we have some already), | 2. have it provisionally accepted (not entirely clear how - would |    "no negative votes in 4 weeks" count?), 3. add the modification to | the Haskell Report to the RFC, 4. receive the final approval, 5. merge | the RFC into the report. | | Steps #3 and #5 depend on having the report in the same repository with | the RFCs. 
This was agreed over a year ago: | | https://mail.haskell.org/pipermail/haskell-prime/2017-September/004319.html | https://mail.haskell.org/pipermail/haskell-prime/2017-October/thread.html | https://mail.haskell.org/pipermail/haskell-prime/2017-November/thread.html | https://mail.haskell.org/pipermail/haskell-prime/2018-March/004356.html | | | > If we want to change that, the first thing is to build a case that | greater standardisation is not just an "abstract good" that we all | subscribe to, but something whose lack is holding us back. | | Neither an abstract good nor a good abstraction are something Haskell has | ever shied away from. I don't know if you're actually asking for a list | of "concrete goods"?
To start with, every GHC extension that's added to a | standard means: | | - one less item to type in the ubiquitous {-# LANGUAGE ScaryExtension #-} | pragma, | - one less item to understand for beginners, | - one less item whose necessity must be justified to the team, and | - one less item of whose future stability the management needs to be | convinced. | | I could go on.

From anthony_clayden at clear.net.nz Mon Oct 8 09:21:16 2018 From: anthony_clayden at clear.net.nz (Anthony Clayden) Date: Mon, 8 Oct 2018 22:21:16 +1300 Subject: A question about run-time errors when class members are undefined In-Reply-To: References: Message-ID: On Mon, 8 Oct 2018 at 8:41 PM, Simon Peyton Jones wrote: You may be interested in Carlos Camarao’s interesting work. For a long > time now he has advocated (in effect) making each function into its own > type class, rather than grouping them into classes. > No I think you're mis-apprehending. From the abstract to the group's SBLP2016 paper: "This depends on a modularization of instance visibility, as well as on a redefinition of Haskell’s ambiguity rule." You might remember early last year Carlos submitted a proposal (in two rounds). Your comments were very relevant https://github.com/ghc-proposals/ghc-proposals/pull/48#issuecomment-287124007 Relevant because not just was it difficult to understand the proposal, the proposal had no answer to how instance resolution was to behave. "expression ambiguity" turned out to mean: use module scope to resolve overloading. In the second round of the proposal and in an extended email exchange off-forum with (I think it was) Rodrigo Ribeiro in Carlos' group I tried to tease out how module-scoped instances were going to work for a method exported to a module where there was a different instance in scope. Of course 'orphan instances' are the familiar symptom in GHC. Wadler & Blott's 1988 paper last paragraph had already explained: "But there is no principal type!
" Perhaps that is in line with your thinking. > Not at all. My thinking is coming directly from Wadler's early 1988 memo that I referenced (note *not* the W&B paper) + using some of GHC's more recent features like explicit type application in terms; and its counterpart: explicit method application in types. I wonder how different would have been the history of Haskell if Wadler had not borrowed the terminology "class" and "method". Since Helium has a focus on Haskell learners/beginners: I wonder how much confusion we might have saved those coming from OOP where the terms mean something really quite different. We might have avoided "class" altogether; and talked of "overloaded function". AntC *From:* Haskell-prime *On Behalf Of *Anthony > Clayden > *Sent:* 06 October 2018 04:19 > *To:* Petr Pudlák > *Cc:* haskell-prime at haskell.org > *Subject:* Re: A question about run-time errors when class members are > undefined > > > > > > On Sat, 6 Oct 2018 at 9:47 AM, Petr Pudlák > wrote: > > > > IIRC one of the arguments against having many separate classes is that a > class is not a just set of methods, it's also the relations between them, > > > > Hi Petr, I was talking about splitting out Haskell's current class > hierarchy as a step towards doing away with classes altogether. If your > language insists on methods being held in classes, that's just tedious > bureacracy to invent class names. > > > > The relations between classes (including between single-method classes) > can be captured through superclass constraints. For example, in the Haskell > 2010 report > > > > class (Eq a, Show a) => Num a where ... > > > > such as the important laws between `return` and `>>=`. And then for > example a class with just `return` doesn't give any information what > `return x` means or what should be its properties. > > > > Then make Bind a superclass constraint on `return` (or vice versa, or both > ways). 
> > > > Just as the laws for Num's methods are defined in terms of equality > > > > x + negate x == fromInteger 0 -- for example > > > > Talking about laws is a red herring: you can't declare the laws/the > compiler doesn't enforce them or rely on them in any way. Indeed the > Lensaholics seem to take pleasure in building lenses that break the (van > Laarhoven) laws. > > > > > > > > That said, one of the really painful points of Haskell is that refactoring a > hierarchy of type-classes means breaking all the code that implements them. > This was also one of the main reasons why making Applicative a > superclass of Monad took so long. It'd be much nicer to design type-classes > in such a way that an implementation doesn't have to really care about the > exact hierarchy. > > > > Yes that's what I was saying. Unfortunately for Haskell's Num class, I > think it's just too hard. So a new language has an opportunity to avoid > that. If OTOH Helium wants to slavishly follow Haskell, I'm wondering what > is the point of Helium. > > > > With Applicative, IIRC, refactoring had to wait until we got Constraint > kinds and type families that could produce them. Would Helium want to put > all that into a language aimed at beginners? > > > > > > For example, in Haskell we could have > > > > class (Return m, Bind m) => Monad m where > > > > without any methods specified. But instances of `Monad` should be only > such types for which `return` and `>>=` satisfy the monad laws. > > > > First: what does "satisfy the xxx laws" mean? The Haskell report and GHC's > Prelude documentation state a bunch of laws; and it's a good discipline to > write down laws if you're creating a class; but it's only documentation. > Arguably IO, the most commonly used Monad, breaks the Monad laws in rather > serious ways because it imposes sequence of execution; and it would be > unfit for purpose if it were pure/lazy function application.
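Though nothing in the language can declare or enforce such laws, they can at least be written down as executable properties and tested. A minimal sketch for Maybe; the property names are invented, and a real test suite would use QuickCheck over random inputs rather than the fixed samples here:

```haskell
-- The three Monad laws for Maybe, as Bool-valued properties checked
-- over a handful of sample inputs. This is testing, not enforcement.

leftIdentity :: Int -> Bool
leftIdentity x = (return x >>= k) == k x
  where k n = if n > 0 then Just (n * 2) else Nothing

rightIdentity :: Maybe Int -> Bool
rightIdentity m = (m >>= return) == m

associativity :: Maybe Int -> Bool
associativity m = ((m >>= k) >>= h) == (m >>= (\x -> k x >>= h))
  where
    k n = if even n then Just (n + 1) else Nothing
    h n = Just (n * 3)

monadLawsHold :: Bool
monadLawsHold =
     all leftIdentity  [-3 .. 3]
  && all rightIdentity (Nothing : map Just [-3 .. 3])
  && all associativity (Nothing : map Just [-3 .. 3])
```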
> > > > Then: what do you think a language could do to detect if some instance > satisfies the laws? (Even supposing you could declare them.) > > > > > > And this would distinguish them from types that have both `Return` and > `Bind` instances, but don't satisfy the laws. > > > > You could have distinct classes/distinct operators. Oh, but then `do` > notation would break. > > > > > > Unfortunately I'm not sure if there is a good solution for achieving both > these directions. > > > > I don't think there's any solution for achieving "satisfy the xxx laws". > > > > > > AntC > > > > > > On Thu, 4 Oct 2018 at 3:56, Anthony Clayden < anthony_clayden at clear.net.nz> wrote: > > > We are adding classes and instances to Helium. > > > We wondered about the aspect that it is allowed to have a class instance > > > of which not all fields have a piece of code/value associated with them, ... > > > > I have a suggestion for that. But first let me understand where you're going with Helium. Are you aiming to slavishly reproduce Haskell's classes/instances, or is this a chance for a rethink? > > > > Will you want to include associated types and associated datatypes in the classes? Note those are just syntactic sugar for top-level type families and data families. It does aid readability to put them within the class. > > > > I would certainly rethink the current grouping of methods into classes. Number purists have long wanted to split class Num into Additive vs Multiplicative. (Additive would be a superclass of Multiplicative.) For the Naturals perhaps we want Presburger arithmetic then Additive just contains (+), with `negate` certainly in a different class, perhaps (-) subtract also in a dedicated class. Also there's people wanting Monads with just `bind` not `return`. But restructuring the Prelude classes/methods is just too hard with all that legacy code.
Even though you should be able to do: > > > > class (Additive a, Subtractive a, Negative a, Multiplicative a, Divisive a) => Num a > > > > Note there's a lot of classes with a single method, and that seems to be an increasing trend. Historically it wasn't so easy in Haskell to do that superclass constraints business; if it had been perhaps there would be more classes with a single method. Then there's some disadvantages to classes holding multiple methods: > > * the need to provide an overloading for every method, even though it may not make sense > > (or suffer a run-time error, as you say) > > * the inability to 'fine tune' methods for a specific datatype [**] > > * an internal compiler/object code cost of passing a group of methods in a dictionary as tuple > > (as opposed to directly selecting a single method) > > > > [**] Nats vs Integrals vs Fractionals for `Num`; and (this will be controversial, but ...) Some people want to/some languages do use (+) for concatenating Strings/lists. But the other methods in `Num` don't make any sense. > > > > If all your classes have a single method, the class name would seem to be superfluous, and the class/instance decl syntax seems too verbose. > > > > So here's a suggestion. I'll need to illustrate with some definite syntax, but there's nothing necessary about it. (I'll borrow the Explicit Type Application `@`.) To give an instance overloading for method `show` or (==) > > > > show @Int = primShowInt -- in effect pattern matching on the type > > (==) @Int = primEqInt -- so see showList below > > That is: I'm giving an overloading for those methods on type `Int`. How do I declare those methods are overloadable? In their signature: > > > > show @a :: a -> String -- compare show :: Show a => a -> String > > (==) @a :: a -> a -> Bool > > Non-overloadable functions don't have `@a` to the left of `::`. > > How do I show that a class has a superclass constraint?
That is: a method has a supermethod constraint, we'll still use `=>`: > > > > show @a :: showsPrec @a => a -> String -- supermethod constraint > > show @[a] :: show a => [a] -> String -- instance decl, because not bare a, with constraint => > > show @[a] xss = showList xss > > (*) @a :: (+) @a => a -> a -> a > > > > Is this idea completely off the wall? Take a look at Wadler's original 1988 memo introducing what became type classes. > > http://homepages.inf.ed.ac.uk/wadler/papers/class-letter/class-letter.txt > > > > It reviews several possible designs, but not all those possibilities made it into his paper (with Stephen Blott) later in 1988/January 1989. In particular look at Section 1's 'Simple overloading'. It's what I'm suggesting above (modulo a bit of syntax). At the end of Section 1, Wadler rejects this design because of "potential blow-ups". But he should have pushed the idea a bit further. Perhaps he was scared to allow function/method names into type signatures? (I've already sneaked that in above with constraints.) These days Haskell is getting more relaxed about namespaces: the type `@`pplication exactly allows type names appearing in terms. So to counter his example, the programmer writes: > > > > square x = x * x -- no explicit signature given > > square :: (*) @a => a -> a -- signature inferred, because (*) is overloaded > > rms = sqrt . square -- no explicit signature > > rms :: sqrt @a => a -> a -- signature inferred > > > > Note the inferred signature for `rms` doesn't need `(*) @a` even though it's inferred from `square`. Because (*) is a supermethod of `sqrt`. `sqrt` might also have other supermethods, that amount to `Floating`. > > > > > ... a run-time error results. > > > > > > Does anyone know of a rationale for this choice, since it seems rather unhaskell-like. > > > > If you allow default method implementations (in the class, as Cale points > out), then I guess you have to allow instance decls that don't mention all > the methods. 
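The behaviour in question can be sketched with an invented class; GHC's -Wmissing-methods warning reports an omitted method only when the class gives it no default:

```haskell
-- Sketch: an instance may omit methods. With a class default the
-- omission is benign; without one, a call still type-checks but
-- raises a run-time error (Sec. 4.3.2 of the Haskell 2010 report).

class Greet a where
  greet   :: a -> String
  goodbye :: a -> String
  goodbye _ = "bye"          -- default implementation in the class

data T = T

instance Greet T where
  greet _ = "hello"
  -- goodbye omitted: harmless here, it falls back to the default.
  -- Had the class supplied no default, `goodbye T` would compile
  -- and then fail when evaluated.
```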
I think there should at least be a warning if there's no > default method. Also beware the default method might have a more specific > signature, which means it can't be applied for some particular instance. > > > > Altogether, I'd say, the culprit is the strong bias in early Haskell to > bunch methods together into classes. These days with Haskell's richer/more > fine-tuned typeclass features: what do typeclasses do that can't be done > more precisely at method level -- indeed that would _better_ be done at > method level? > > > > > > AntC

From camarao at dcc.ufmg.br Mon Oct 8 18:20:20 2018 From: camarao at dcc.ufmg.br (camarao at dcc.ufmg.br) Date: Mon, 08 Oct 2018 15:20:20 -0300 Subject: A question about run-time errors when class members are undefined In-Reply-To: References: Message-ID: On 2018-10-08 06:21, Anthony Clayden wrote: > On Mon, 8 Oct 2018 at 8:41 PM, Simon Peyton Jones wrote: > >> You may be interested in Carlos Camarao’s interesting work. For a >> long time now he has advocated (in effect) making each function into >> its own type class, rather than grouping them into classes. > > No I think you're mis-apprehending. From the abstract to the group's > SBLP2016 paper: "This depends on a modularization of instance > visibility, as well as on a redefinition of Haskell’s ambiguity > rule." Hi. I wrote this to mean: "This depends *only* on modularization of instances and a redefinition of Haskell's ambiguity rule" (i.e. no extra mechanism is necessary and all well-typed Haskell programs remain well-typed). Haskell's ambiguity rule is not ok: a type is not ambiguous, it is an expression that is ambiguous, depending on the context where it is used.
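A standard illustration of that point (the example is added here, not taken from the email): the constraints arising from `read` can never be discharged when nothing fixes its result type, yet the very same expression is fine in a context that does:

```haskell
-- Rejected by GHC as ambiguous: nothing determines the type that
-- `read` produces, so the (Read a, Show a) constraints are stuck:
--
--   roundTrip s = show (read s)   -- "ambiguous type variable"
--
-- The same expression, in a context that fixes the type, is accepted:
roundTrip :: String -> String
roundTrip s = show (read s :: Int)
```

So the ambiguity attaches to the occurrence of the expression, not to any type in isolation.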
Global instance scope is not ok either: instances should be modular. > You might remember early last year Carlos submitted a proposal (in two > rounds). Your comments were very relevant > > https://github.com/ghc-proposals/ghc-proposals/pull/48#issuecomment-287124007 > > Relevant because not just was it difficult to understand the proposal, > the proposal had no answer to how instance resolution was to behave. > "expression ambiguity" turned out to mean: use module scope to resolve > overloading. It is difficult until one understands how simple it really is. The crucial notion of "expression ambiguity" is "overloading resolution" (or: the decision of "overloading is resolved"), based on the existence of "unreachable variables": if and only if there are unreachable variables, satisfiability must be tested for the constraints with unreachable variables. Instance modular scope is secondary. > In the second round of the proposal and in an extended email exchange > off-forum with (I think it was) Rodrigo Ribeiro in Carlos' group I > tried to tease out how module-scoped instances were going to work for > a method exported to a module where there was a different instance in > scope. Of course 'orphan instances' are the familiar symptom in GHC. > > Wadler & Blott's 1988 paper last paragraph had already explained: "But > there is no principal type! " There is always a principal type, for every expression. Of course the type depends on the context where the expression occurs. >> Perhaps that is in line with your thinking. > Not at all. My thinking is coming directly from Wadler's early 1988 > memo that I referenced (note *not* the W&B paper) + using some of > GHC's more recent features like explicit type application in terms; > and its counterpart: explicit method application in types. Again: the proposal does not need any extra mechanism, just a change to the ambiguity rule and instance modular scope. 
It would be possible even to maintain instances as global, but in my view this should not be done (it is better to have modular instances). > I wonder how different would have been the history of Haskell if > Wadler had not borrowed the terminology "class" and "method". Since > Helium has a focus on Haskell learners/beginners: I wonder how much > confusion we might have saved those coming from OOP where the terms > mean something really quite different. We might have avoided "class" > altogether; and talked of "overloaded function". This is another matter, that does not need to be discussed now: we can avoid type classes, or we can have type classes as optional, but this discussion can be done later. Kind regards, Carlos From blamario at ciktel.net Mon Oct 8 18:53:14 2018 From: blamario at ciktel.net (=?UTF-8?Q?Mario_Bla=c5=beevi=c4=87?=) Date: Mon, 8 Oct 2018 14:53:14 -0400 Subject: Quo vadis? In-Reply-To: References: <637a88e8-3085-3208-791e-55e3cd50d03b@ciktel.net> <5BB762C2.5070604@exmail.nottingham.ac.uk> Message-ID: <4151c97f-fa4b-7e16-3ecd-024907ca5dad@ciktel.net> On 2018-10-07 11:36 PM, Gershom B wrote: > Mario: as a non-committee member but interested observer, if you > yourself wanted to proceed to put the report in the repo, what > obstacles would stand in your way, and could we clear them out so you > could take charge of that task? My understanding is that the canonical home of the report is https://github.com/haskell/haskell-report. Can somebody with the knowledge confirm this? If so, I (or anybody else willing) can either: 1. submit a simple but rather large pull request that dumps the entire source of the report into the https://github.com/haskell/rfcs/ repository, 2. use git subtree to add a fork of the report with full history to the https://github.com/haskell/rfcs/ repository, or 3. use a Git submodule to host a fork of the report with full history. 
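For reference, option #2 can be sketched with `git subtree` as below. The `report/` prefix and the exact remote URLs are assumptions for illustration; the thread fixes neither.

```shell
# Hypothetical sketch of option #2: graft the haskell-report repository,
# with its full history, into a subdirectory of the rfcs repository.
git clone https://github.com/haskell/rfcs.git
cd rfcs
git subtree add --prefix=report \
    https://github.com/haskell/haskell-report.git master
# Later, changes under report/ can be merged back upstream with:
#   git subtree push --prefix=report <haskell-report-remote> <branch>
```

Omitting `--squash` keeps the full upstream history in the merge, which is what makes later merging back into the canonical report straightforward.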
I'm against option #3 because it would complicate the work with new proposals, and in this situation we can't add anything that de-motivates the potential contributors. From their perspective, options #1 and #2 are indistinguishable but #2 should be easier to merge back into the canonical report whenever Haskell2020(+n) finally becomes official. So #2 would be my choice. Is anybody out there against this plan? It's worth a mention that I would not actually merge the PR before giving another chance to everybody to try the fork, but the plan is to merge it into the master before we proceed with new proposal PRs. The existing proposals, once accepted, would need to be refreshed from the master by their authors. From anthony_clayden at clear.net.nz Mon Oct 8 23:02:45 2018 From: anthony_clayden at clear.net.nz (Anthony Clayden) Date: Tue, 9 Oct 2018 12:02:45 +1300 Subject: A question about run-time errors when class members are undefined In-Reply-To: References: Message-ID: On Tue, 9 Oct 2018 at 7:30 AM, wrote: Thanks Carlos. I wish I could say thank you for clarifying, but I'm afraid this is as muddled as all the comments on the two proposals. I don't want to go over it again. I just want to say that my suggestion earlier in the thread is fundamentally different. Em 2018-10-08 06:21, Anthony Clayden escreveu: > > On Mon, 8 Oct 2018 at 8:41 PM, Simon Peyton Jones wrote: > > Strange: Simon's message has not appeared on the forum (he did send to it). I've quoted it in full in my reply, but did break it into separate pieces. > > Global instance scope is not ok either: instances should be modular. I just plain disagree. Fundamentally. > > > > Wadler & Blott's 1988 paper last paragraph had already explained: "But > > there is no principal type! " > > There is always a principal type, for every expression. > Of course the type depends on the context where the expression occurs. Then it's not a _principal_ type for the expression, it's just a local type. 
http://foldoc.org/principal We arrive at the principal type by unifying the principal types of the sub-expressions, down to the principal types of each atom. W&B are pointing out that without global scope for instances, typing cannot assign a principal type to each method. (They left that as an open problem at the end of the paper. Haskell has resolved that problem by making all instances global. Changing Haskell to modular instances would be a breakage. Fundamentally.) Under my suggestion, we can assign a (global) principal type to each method -- indeed you must, by giving a signature very similar to a class declaration; and that distinguishes overloaded functions from parametric polymorphic functions. AntC -------------- next part -------------- An HTML attachment was scrubbed... URL: From blamario at ciktel.net Mon Oct 8 23:58:27 2018 From: blamario at ciktel.net (=?UTF-8?Q?Mario_Bla=c5=beevi=c4=87?=) Date: Mon, 8 Oct 2018 19:58:27 -0400 Subject: Quo vadis? In-Reply-To: <48c4c884-24bd-3247-c0ef-4827f1e4ee35@flippac.org> References: <637a88e8-3085-3208-791e-55e3cd50d03b@ciktel.net> <5BB762C2.5070604@exmail.nottingham.ac.uk> <48c4c884-24bd-3247-c0ef-4827f1e4ee35@flippac.org> Message-ID: On 2018-10-07 11:32 PM, Philippa Cowderoy wrote: > > I'd be remiss if I didn't suggest a candidate with a specific problem, > a specific goal and a possible solution to its problem. So, a modest > proposal: > > - Standardise OverloadedStrings as an available-but-disabled feature > - Allow default statements for the IsString class without > OverloadedStrings, using that type for all string literals > - At some future stage, we can use this to migrate away from [Char] as > the default string literal type > - The Haskell2010 pragma and its successors can be used to ensure code > written to standard doesn't suffer bit rot when migration happens The second bullet point could use some clarification. 
Would you mind commenting on the existing defaulting proposal at https://github.com/haskell/rfcs/pull/18 ? From carlos.camarao at gmail.com Tue Oct 9 00:46:37 2018 From: carlos.camarao at gmail.com (Carlos Camarao) Date: Mon, 8 Oct 2018 21:46:37 -0300 Subject: A question about run-time errors when class members are undefined In-Reply-To: References: Message-ID: Hi. > Thanks Carlos. I wish I could say thank you for clarifying, but I'm > afraid this is as muddled as all the comments on the two proposals. > > I don't want to go over it again. I just want to say that my > suggestion earlier in the thread is fundamentally different. > >> Global instance scope is not ok either: instances should be modular. > I just plain disagree. Fundamentally. Global instance scope is not required for principal typing: a principal type is (just) a type of an expression in a given typing context that has all other types of this expression in that typing context as instances. (Also: instance modularity is not the central issue.) >>> Wadler & Blott's 1988 paper last paragraph had already explained: "But >>> there is no principal type! " >> There is always a principal type, for every expression. >> Of course the type depends on the context where the expression occurs. > Then it's not a _principal_ type for the expression, it's just a local type. > http://foldoc.org/principal A type system has the principal type property if, given a term and a typing context, there exists a type for this term in this typing context such that all other types for this term in this typing context are an instance of this type. > We arrive at the principal type by unifying the principal types of > the sub-expressions, down to the principal types of each atom. W&B > are pointing out that without global scope for instances, typing > cannot assign a principal type to each method. (They left that as an > open problem at the end of the paper. Haskell has resolved that > problem by making all instances global. 
Changing Haskell to modular > instances would be a breakage. Fundamentally.) > > Under my suggestion, we can assign a (global) principal type to each > method -- indeed you must, by giving a signature very similar to a > class declaration; and that distinguishes overloaded functions from > parametric polymorphic functions. A principal type theorem has been proved: see, for example, Theorem 1 in [1]. Kind regards, Carlos [1] Ambiguity and Constrained Polymorphism, Carlos Camarão, Lucília Figueiredo, Rodrigo Ribeiro, Science of Computer Programming 124(1), 1--19, August 2016. On Mon, 8 Oct 2018 at 20:03, Anthony Clayden wrote: > On Tue, 9 Oct 2018 at 7:30 AM, wrote: > > Thanks Carlos. I wish I could say thank you for clarifying, but I'm afraid > this is as muddled as all the comments on the two proposals. > > I don't want to go over it again. I just want to say that my suggestion > earlier in the thread is fundamentally different. > > Em 2018-10-08 06:21, Anthony Clayden escreveu: >> > On Mon, 8 Oct 2018 at 8:41 PM, Simon Peyton Jones wrote: >> > > > > Strange: Simon's message has not appeared on the forum (he did send to > it). I've quoted it in full in my reply, but did break it into separate > pieces. > > >> >> Global instance scope is not ok either: instances should be modular. > > > I just plain disagree. Fundamentally. > > >> > >> > Wadler & Blott's 1988 paper last paragraph had already explained: "But >> > there is no principal type! " >> >> There is always a principal type, for every expression. >> Of course the type depends on the context where the expression occurs. > > > Then it's not a _principal_ type for the expression, it's just a local > type. > http://foldoc.org/principal > > We arrive at the principal type by unifying the principal types of the > sub-expressions, down to the principal types of each atom. W&B are pointing > out that without global scope for instances, typing cannot assign a > principal type to each method. 
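As an aside on the principal-type terminology traded back and forth in this exchange, the property can be stated concretely: Hindley-Milner inference assigns each term a most general type, of which every other valid type is a substitution instance. A small illustration (names invented):

```haskell
-- GHC infers the principal type  Num a => a -> a  for \x -> x + x;
-- every other valid typing, such as Int -> Int, is a substitution
-- instance of it.
double :: Num a => a -> a
double x = x + x

doubleInt :: Int -> Int   -- one instance of the principal type
doubleInt = double
```

The dispute above is whether such a most general type still exists for a class method once instance scope is no longer global.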
(They left that as an open problem at the > end of the paper. Haskell has resolved that problem by making all instances > global. Changing Haskell to modular instances would be a > breakage. Fundamentally.) > > Under my suggestion, we can assign a (global) principal type to each > method -- indeed you must, by giving a signature very similar to a class > declaration; and that distinguishes overloaded functions from parametric > polymorphic functions. > > > AntC > _______________________________________________ > Haskell-prime mailing list > Haskell-prime at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime > -------------- next part -------------- An HTML attachment was scrubbed... URL: From flippa at flippac.org Tue Oct 9 06:38:37 2018 From: flippa at flippac.org (Philippa Cowderoy) Date: Tue, 9 Oct 2018 07:38:37 +0100 Subject: Quo vadis? In-Reply-To: References: <637a88e8-3085-3208-791e-55e3cd50d03b@ciktel.net> <5BB762C2.5070604@exmail.nottingham.ac.uk> <48c4c884-24bd-3247-c0ef-4827f1e4ee35@flippac.org> Message-ID: <2b9b968b-4b65-510e-85e6-5a8c540ee803@flippac.org> On 09/10/2018 00:58, Mario Blažević wrote: > On 2018-10-07 11:32 PM, Philippa Cowderoy wrote: >> >> I'd be remiss if I didn't suggest a candidate with a specific >> problem, a specific goal and a possible solution to its problem. So, >> a modest proposal: >> >> - Standardise OverloadedStrings as an available-but-disabled feature >> - Allow default statements for the IsString class without >> OverloadedStrings, using that type for all string literals >> - At some future stage, we can use this to migrate away from [Char] >> as the default string literal type >> - The Haskell2010 pragma and its successors can be used to ensure >> code written to standard doesn't suffer bit rot when migration happens > > > The second bullet point could use some clarification. Would you mind > commenting on the existing defaulting proposal at > https://github.com/haskell/rfcs/pull/18 ? 
> > I've been away from the overall process and I'm not in best health right now, so it'll take a while to catch up but I'll do my best. The gist (which someone else may have covered) is that it's not overloading if you have exactly one type of string literals, even if we let the user say which type it is. I've read enough to see I'll have to have my thinking cap on while writing much more than that as a useful comment though, not least because of the interaction with an imported defaults mechanism that normally needs to work with a sequence of defaults. From flippa at flippac.org Tue Oct 9 07:58:27 2018 From: flippa at flippac.org (Philippa Cowderoy) Date: Tue, 9 Oct 2018 08:58:27 +0100 Subject: Quo vadis? In-Reply-To: References: <637a88e8-3085-3208-791e-55e3cd50d03b@ciktel.net> <5BB762C2.5070604@exmail.nottingham.ac.uk> <48c4c884-24bd-3247-c0ef-4827f1e4ee35@flippac.org> Message-ID: <848846a8-207b-b9ee-fa8c-6980f43d0687@flippac.org> On 09/10/2018 00:58, Mario Blažević wrote: > On 2018-10-07 11:32 PM, Philippa Cowderoy wrote: >> >> I'd be remiss if I didn't suggest a candidate with a specific >> problem, a specific goal and a possible solution to its problem. So, >> a modest proposal: >> >> - Standardise OverloadedStrings as an available-but-disabled feature >> - Allow default statements for the IsString class without >> OverloadedStrings, using that type for all string literals >> - At some future stage, we can use this to migrate away from [Char] >> as the default string literal type >> - The Haskell2010 pragma and its successors can be used to ensure >> code written to standard doesn't suffer bit rot when migration happens > > > The second bullet point could use some clarification. Would you mind > commenting on the existing defaulting proposal at > https://github.com/haskell/rfcs/pull/18 ? > > Found the spoons - done! 
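The mechanism Philippa's proposal builds on is GHC's existing OverloadedStrings extension, under which a string literal elaborates to `fromString` applied at any `IsString` type. A sketch of today's behaviour (the `Name` type is invented for illustration):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.String (IsString (..))

newtype Name = Name String deriving (Eq, Show)

instance IsString Name where
  fromString = Name

greet :: Name -> String
greet (Name n) = "hello, " ++ n

-- With OverloadedStrings, the literal elaborates to fromString "world",
-- so it can inhabit any IsString type, not just [Char]:
result :: String
result = greet "world"
```

The proposal's second bullet would instead pick one IsString type via a `default` declaration and give every literal that monomorphic type, so no overloading is involved.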
From carter.schonwald at gmail.com Wed Oct 10 15:52:22 2018 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 10 Oct 2018 11:52:22 -0400 Subject: A question about run-time errors when class members are undefined In-Reply-To: References: Message-ID: Carlos, local scoping for type classes is flat out not gonna happen in the haskell language standard any time soon. if you want to make a case for it, demonstrate its utility, this mailing list isn't for that. Especially for something that fundamentally changes the programming model of the language in question in a way that isn't compatible merry adventures! -Carter On Mon, Oct 8, 2018 at 8:47 PM Carlos Camarao wrote: > Hi. > > > Thanks Carlos. I wish I could say thank you for clarifying, but I'm > > afraid this is as muddled as all the comments on the two proposals. > > > > I don't want to go over it again. I just want to say that my > > suggestion earlier in the thread is fundamentally different. > > > >> Global instance scope is not ok either: instances should be modular. > > I just plain disagree. Fundamentally. > > Global instance scope is not required for principal typing: a > principal type is (just) a type of an expression in a given typing > context that has all other types of this expression in that typing > context as instances. > > (Also: instance modularity is not the central issue.) > > >>> Wadler & Blott's 1988 paper last paragraph had already explained: > "But > >>> there is no principal type! " > > >> There is always a principal type, for every expression. > >> Of course the type depends on the context where the expression > occurs. > > > Then it's not a _principal_ type for the expression, it's just a local > type. > > http://foldoc.org/principal > > A type system has the principal type property if, given a > term and a typing context, there exists a type for this term in this > typing context such that all other types for this term in this typing > context are an instance of this type. 
> > > We arrive at the principal type by unifying the principal types of > > the sub-expressions, down to the principal types of each atom. W&B > > are pointing out that without global scope for instances, typing > > cannot assign a principal type to each method. (They left that as an > > open problem at the end of the paper. Haskell has resolved that > > problem by making all instances global. Changing Haskell to modular > > instances would be a breakage. Fundamentally.) > > > > Under my suggestion, we can assign a (global) principal type to each > > method -- indeed you must, by giving a signature very similar to a > > class declaration; and that distinguishes overloaded functions from > > parametric polymorphic functions. > > A principal type theorem has been proved: see, for example, Theorem 1 in > [1]. > > Kind regards, > > Carlos > > [1] Ambiguity and Constrained Polymorphism, > Carlos Camarão, Lucília Figueiredo, Rodrigo Ribeiro, > Science of Computer Programming 124(1), 1--19, August 2016. > > > On Mon, 8 Oct 2018 at 20:03, Anthony Clayden > wrote: > >> On Tue, 9 Oct 2018 at 7:30 AM, wrote: >> >> Thanks Carlos. I wish I could say thank you for clarifying, but I'm >> afraid this is as muddled as all the comments on the two proposals. >> >> I don't want to go over it again. I just want to say that my suggestion >> earlier in the thread is fundamentally different. >> >> Em 2018-10-08 06:21, Anthony Clayden escreveu: >>> > On Mon, 8 Oct 2018 at 8:41 PM, Simon Peyton Jones wrote: >>> > >> >> >> Strange: Simon's message has not appeared on the forum (he did send to >> it). I've quoted it in full in my reply, but did break it into separate >> pieces. >> >> >>> >>> Global instance scope is not ok either: instances should be modular. >> >> >> I just plain disagree. Fundamentally. >> >> >>> > >>> > Wadler & Blott's 1988 paper last paragraph had already explained: "But >>> > there is no principal type! " >>> >>> There is always a principal type, for every expression. 
>>> Of course the type depends on the context where the expression occurs. >> >> >> Then it's not a _principal_ type for the expression, it's just a local >> type. >> http://foldoc.org/principal >> >> We arrive at the principal type by unifying the principal types of the >> sub-expressions, down to the principal types of each atom. W&B are pointing >> out that without global scope for instances, typing cannot assign a >> principal type to each method. (They left that as an open problem at the >> end of the paper. Haskell has resolved that problem by making all instances >> global. Changing Haskell to modular instances would be a >> breakage. Fundamentally.) >> >> Under my suggestion, we can assign a (global) principal type to each >> method -- indeed you must, by giving a signature very similar to a class >> declaration; and that distinguishes overloaded functions from parametric >> polymorphic functions. >> >> >> AntC >> _______________________________________________ >> Haskell-prime mailing list >> Haskell-prime at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime >> > _______________________________________________ > Haskell-prime mailing list > Haskell-prime at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime > -------------- next part -------------- An HTML attachment was scrubbed... URL: From camarao at dcc.ufmg.br Wed Oct 10 17:36:35 2018 From: camarao at dcc.ufmg.br (camarao at dcc.ufmg.br) Date: Wed, 10 Oct 2018 14:36:35 -0300 Subject: A question about run-time errors when class members are undefined In-Reply-To: References: Message-ID: Hi Carter, I am not proposing "local scoping". I think local scoping does not have substantial gains and at least introduces some difficulties and complexity (I have tried it in system CT). Even modular scope for instances is not mandatory, as I said. 
A general defaulting rule is a remedy, if instance modular scope is not supported, for changing the ambiguity rule (I prefer modular instance scoping though). I don't want to fight for anything. I'd like to contribute if the Haskell community friendly wishes me to do so in order to introduce MPTCs in a relatively simple way, without the need of extra mechanisms, based essentially on changing the ambiguity rule: I think a type like, say, (F a b, X a) => b is not ambiguous (F and X being classes with members f:: a->b and x::a, say), since then overloading of (f x) can be resolved, with a new ambiguity rule, depending on the context (or type) where (f x) is used. Kind regards, Carlos Em 2018-10-10 12:52, Carter Schonwald escreveu: > Carlos, local scoping for type classes is flat out not gonna happen in > the haskell language standard any time soon. > > if you want to make a case for it, demonstrate its utility, this > mailing list isn't for that. Especially for something that > fundamentally changes the programming model of the language in > question in a way that isn't compatible > > merry adventures! > -Carter > > On Mon, Oct 8, 2018 at 8:47 PM Carlos Camarao > wrote: > >> Hi. >> >>> Thanks Carlos. I wish I could say thank you for clarifying, but >> I'm >>> afraid this is as muddled as all the comments on the two >> proposals. >>> >>> I don't want to go over it again. I just want to say that my >>> suggestion earlier in the thread is fundamentally different. >>> >>>> Global instance scope is not ok either: instances should be >> modular. >>> I just plain disagree. Fundamentally. >> >> Global instance scope is not required for principal typing: a >> principal type is (just) a type of an expression in a given typing >> context that has all other types of this expression in that typing >> context as instances. >> >> (Also: instance modularity is not the central issue.) 
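Carlos's `(F a b, X a) => b` example can be written down in GHC today; under the current ambiguity rule the signature is only accepted with AllowAmbiguousTypes, and `a` must then be pinned down with an explicit type application. The class and instance definitions below are invented for illustration:

```haskell
{-# LANGUAGE MultiParamTypeClasses, AllowAmbiguousTypes,
             ScopedTypeVariables, TypeApplications #-}

class F a b where f :: a -> b
class X a   where x :: a

-- Haskell's current rule calls this type ambiguous because 'a' does not
-- appear to the right of '=>'; GHC accepts the signature only with
-- AllowAmbiguousTypes, and even then each use must fix 'a' explicitly.
fx :: forall a b. (F a b, X a) => b
fx = f (x @a)

instance F Int Bool where f n = n > 0
instance X Int      where x = 3

-- fx @Int @Bool picks a = Int, b = Bool, i.e. f (3 :: Int).
```

Carlos's proposed rule would instead let the context of each occurrence of `f x` resolve the overloading, so the type would not be rejected up front.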
>> >>>>> Wadler & Blott's 1988 paper last paragraph had already >> explained: "But >>>>> there is no principal type! " >> >>>> There is always a principal type, for every expression. >>>> Of course the type depends on the context where the >> expression occurs. >> >>> Then it's not a _principal_ type for the expression, it's just a >> local type. >>> http://foldoc.org/principal >> >> A type system has the principal type property if, given a >> term and a typing context, there exists a type for this term in this >> typing context such that all other types for this term in this >> typing >> context are an instance of this type. >> >>> We arrive at the principal type by unifying the principal types of >>> the sub-expressions, down to the principal types of each atom. W&B >>> are pointing out that without global scope for instances, typing >>> cannot assign a principal type to each method. (They left that as >> an >>> open problem at the end of the paper. Haskell has resolved that >>> problem by making all instances global. Changing Haskell to >> modular >>> instances would be a breakage. Fundamentally.) >>> >>> Under my suggestion, we can assign a (global) principal type to >> each >>> method -- indeed you must, by giving a signature very similar to a >>> class declaration; and that distinguishes overloaded functions >> from >>> parametric polymorphic functions. >> >> A principal type theorem has been proved: see, for example, Theorem >> 1 in [1]. >> >> Kind regards, >> >> Carlos >> >> [1] Ambiguity and Constrained Polymorphism, >> Carlos Camarão, Lucília Figueiredo, Rodrigo Ribeiro, >> Science of Computer Programming 124(1), 1--19, August 2016. >> >> On Mon, 8 Oct 2018 at 20:03, Anthony Clayden >> wrote: >> >> On Tue, 9 Oct 2018 at 7:30 AM, wrote: >> >> Thanks Carlos. I wish I could say thank you for clarifying, but I'm >> afraid this is as muddled as all the comments on the two proposals. >> >> I don't want to go over it again. 
I just want to say that my >> suggestion earlier in the thread is fundamentally different. >> >> Em 2018-10-08 06:21, Anthony Clayden escreveu: >>> On Mon, 8 Oct 2018 at 8:41 PM, Simon Peyton Jones wrote: >>> >> >> Strange: Simon's message has not appeared on the forum (he did send >> to it). I've quoted it in full in my reply, but did break it into >> separate pieces. >> >> Global instance scope is not ok either: instances should be modular. >> >> I just plain disagree. Fundamentally. >> >>> >>> Wadler & Blott's 1988 paper last paragraph had already explained: >> "But >>> there is no principal type! " >> >> There is always a principal type, for every expression. >> Of course the type depends on the context where the expression >> occurs. >> >> Then it's not a _principal_ type for the expression, it's just a >> local type. >> >> http://foldoc.org/principal >> >> We arrive at the principal type by unifying the principal types of >> the sub-expressions, down to the principal types of each atom. W&B >> are pointing out that without global scope for instances, typing >> cannot assign a principal type to each method. (They left that as an >> open problem at the end of the paper. Haskell has resolved that >> problem by making all instances global. Changing Haskell to modular >> instances would be a breakage. Fundamentally.) >> >> Under my suggestion, we can assign a (global) principal type to each >> method -- indeed you must, by giving a signature very similar to a >> class declaration; and that distinguishes overloaded functions from >> parametric polymorphic functions. 
>> >> AntC _______________________________________________ >> Haskell-prime mailing list >> Haskell-prime at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime > _______________________________________________ > Haskell-prime mailing list > Haskell-prime at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime > _______________________________________________ > Haskell-prime mailing list > Haskell-prime at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime From carter.schonwald at gmail.com Wed Oct 10 18:55:45 2018 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 10 Oct 2018 14:55:45 -0400 Subject: A question about run-time errors when class members are undefined In-Reply-To: References: Message-ID: ok, cool! I'm not sure what modular scoping would look like, but it'd be fun to see what that looks like! I do think that the prime list isn't the best list though for figuring that out / experimentations thereof :) On Wed, Oct 10, 2018 at 1:36 PM wrote: > Hi Carter, > > I am not proposing "local scoping". I think local scoping > does not have substantial gains and at least introduces > some difficulties and complexity (I have tried it in system CT). > > Even modular scope for instances is not mandatory, as I said. > A general defaulting rule is a remedy, if instance modular scope is not > supported, for changing the ambiguity rule > (I prefer modular instance scoping though). > > I don't want to fight for anything. 
I'd like to contribute > if the Haskell community friendly wishes me to do so in order to > introduce MPTCs in a relatively simple way, without the need of extra > mechanisms, based essentially on changing the ambiguity rule: > I think a type like, say, (F a b, X a) => b is not ambiguous > (F and X being classes with members f:: a->b and x::a, say), > since then overloading of (f x) can be resolved, with a new > ambiguity rule, depending on the context (or type) where (f x) is used. > > Kind regards, > > Carlos > > Em 2018-10-10 12:52, Carter Schonwald escreveu: > > Carlos, local scoping for type classes is flat out not gonna happen in > > the haskell language standard any time soon. > > > > if you want to make a case for it, demonstrate its utility, this > > mailing list isn't for that. Especially for something that > > fundamentally changes the programming model of the language in > > question in a way that isn't compatible > > > > merry adventures! > > -Carter > > > > On Mon, Oct 8, 2018 at 8:47 PM Carlos Camarao > > wrote: > > > >> Hi. > >> > >>> Thanks Carlos. I wish I could say thank you for clarifying, but > >> I'm > >>> afraid this is as muddled as all the comments on the two > >> proposals. > >>> > >>> I don't want to go over it again. I just want to say that my > >>> suggestion earlier in the thread is fundamentally different. > >>> > >>>> Global instance scope is not ok either: instances should be > >> modular. > >>> I just plain disagree. Fundamentally. > >> > >> Global instance scope is not required for principal typing: a > >> principal type is (just) a type of an expression in a given typing > >> context that has all other types of this expression in that typing > >> context as instances. > >> > >> (Also: instance modularity is not the central issue.) > >> > >>>>> Wadler & Blott's 1988 paper last paragraph had already > >> explained: "But > >>>>> there is no principal type! " > >> > >>>> There is always a principal type, for every expression. 
> >>>> Of course the type depends on the context where the > >> expression occurs. > >> > >>> Then it's not a _principal_ type for the expression, it's just a > >> local type. > >>> http://foldoc.org/principal > >> > >> A type system has the principal type property if, given a > >> term and a typing context, there exists a type for this term in this > >> typing context such that all other types for this term in this > >> typing > >> context are an instance of this type. > >> > >>> We arrive at the principal type by unifying the principal types of > >>> the sub-expressions, down to the principal types of each atom. W&B > >>> are pointing out that without global scope for instances, typing > >>> cannot assign a principal type to each method. (They left that as > >> an > >>> open problem at the end of the paper. Haskell has resolved that > >>> problem by making all instances global. Changing Haskell to > >> modular > >>> instances would be a breakage. Fundamentally.) > >>> > >>> Under my suggestion, we can assign a (global) principal type to > >> each > >>> method -- indeed you must, by giving a signature very similar to a > >>> class declaration; and that distinguishes overloaded functions > >> from > >>> parametric polymorphic functions. > >> > >> A principal type theorem has been proved: see, for example, Theorem > >> 1 in [1]. > >> > >> Kind regards, > >> > >> Carlos > >> > >> [1] Ambiguity and Constrained Polymorphism, > >> Carlos Camarão, Lucília Figueiredo, Rodrigo Ribeiro, > >> Science of Computer Programming 124(1), 1--19, August 2016. > >> > >> On Mon, 8 Oct 2018 at 20:03, Anthony Clayden > >> wrote: > >> > >> On Tue, 9 Oct 2018 at 7:30 AM, wrote: > >> > >> Thanks Carlos. I wish I could say thank you for clarifying, but I'm > >> afraid this is as muddled as all the comments on the two proposals. > >> > >> I don't want to go over it again. I just want to say that my > >> suggestion earlier in the thread is fundamentally different. 
> >> > >> Em 2018-10-08 06:21, Anthony Clayden escreveu: > >>> On Mon, 8 Oct 2018 at 8:41 PM, Simon Peyton Jones wrote: > >>> > >> > >> Strange: Simon's message has not appeared on the forum (he did send > >> to it). I've quoted it in full in my reply, but did break it into > >> separate pieces. > >> > >> Global instance scope is not ok either: instances should be modular. > >> > >> I just plain disagree. Fundamentally. > >> > >>> > >>> Wadler & Blott's 1988 paper last paragraph had already explained: > >> "But > >>> there is no principal type! " > >> > >> There is always a principal type, for every expression. > >> Of course the type depends on the context where the expression > >> occurs. > >> > >> Then it's not a _principal_ type for the expression, it's just a > >> local type. > >> > >> http://foldoc.org/principal > >> > >> We arrive at the principal type by unifying the principal types of > >> the sub-expressions, down to the principal types of each atom. W&B > >> are pointing out that without global scope for instances, typing > >> cannot assign a principal type to each method. (They left that as an > >> open problem at the end of the paper. Haskell has resolved that > >> problem by making all instances global. Changing Haskell to modular > >> instances would be a breakage. Fundamentally.) > >> > >> Under my suggestion, we can assign a (global) principal type to each > >> method -- indeed you must, by giving a signature very similar to a > >> class declaration; and that distinguishes overloaded functions from > >> parametric polymorphic functions. 
> >> > >> AntC _______________________________________________ > >> Haskell-prime mailing list > >> Haskell-prime at haskell.org > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime > > _______________________________________________ > > Haskell-prime mailing list > > Haskell-prime at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime > > _______________________________________________ > > Haskell-prime mailing list > > Haskell-prime at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anthony_clayden at clear.net.nz Thu Oct 11 01:16:09 2018 From: anthony_clayden at clear.net.nz (Anthony Clayden) Date: Thu, 11 Oct 2018 14:16:09 +1300 Subject: A question about run-time errors when class members are undefined In-Reply-To: References: Message-ID: On Mon, 8 Oct 2018 at 8:41 PM, Simon Peyton Jones wrote You may be interested in Carlos Camarao’s interesting work. For a long > time now he has advocated (in effect) making each function into its own > type class, rather that grouping them into classes. Perhaps that is in > line with your thinking. > Could I ask Simon direct, since it was he who introduced the topic. When you say "interesting work", what is that evaluation based on? Is there a paper or summary you've seen that expresses the ideas? (Because despite the exhaustive and exhausting rounds on github last year, and further comment on this thread, I just can't see anything workable. And the papers that Carlos references ring alarm bells for me, just looking at the Abstracts, let alone delving into the impenetrable type theory.) And could I ask Carlos: are we allowed to know who peer-reviewed your papers? Specifically, was it someone who's up with the state of the art in GHC? 
Carlos/his team are making bold claims: to provide the functionality of FunDeps/Type functions, Overlapping Instances/Closed Type Families, to avoid the infamous `read . show` ambiguity, to avoid the equally infamous 'orphan instances' incoherence, to preserve principal typing, and to keep it "simple". Then I have to say that if there's evidence for those claims, it's not appearing in the papers. Really the only example presented is `read . show` (plus record field disambiguation). Yes I'd hope the approach for a simple example is "simple". It doesn't seem any more simple than putting an explicit type signature (or we could use type application `@` these days). But I don't expect that would be the place to show off the power/expressivity.

Thank you
AntC

From camarao at dcc.ufmg.br  Fri Oct 12 14:48:43 2018
From: camarao at dcc.ufmg.br (camarao at dcc.ufmg.br)
Date: Fri, 12 Oct 2018 11:48:43 -0300
Subject: A question about run-time errors when class members are undefined
In-Reply-To:
References:
Message-ID:

Hi.

A concise proposal for the introduction of MPTCs in Haskell follows. A similar ghc-proposal has been written before, but without success (certainly it would be better if some experimentation in ghc was done first, as Carter suggested). The proposal is based essentially on the following (1. crucial, 2. desirable):

1. Change the ambiguity rule.

   Consider for example:

       class F a b where f :: a → b
       class X a where x :: a
       fx = f x

   The type of fx, (F a b, X a) ⇒ b, should not be ambiguous: in distinct contexts, fx can have distinct types (if ambiguity, and 'improvement', are as defined below). Note: agreeing with this view can lead to far-reaching consequences, e.g. support of overloaded record fields [1, Section 7], polymonads [2] etc.

   Further examples can be discussed but this example conveys the main idea that ambiguity should be changed; unlike the example of (show .
read), no type annotation can avoid ambiguity of polymorphic fx in current Haskell.

   Ambiguity should be a property of an expression, defined as follows: if a constraint set C on the constrained type C∪D⇒t of an expression has unreachable variables, then satisfiability is tested for C (a definition of reachable and unreachable type variables is at the end of this message), and:

   - if there is a single unifying substitution of C with the set of instances in the current context (module), then C can be removed (overloading is resolved) and C∪D⇒t 'improved' to D⇒t;

   - otherwise there is a type error: ambiguity if there are two or more unifying substitutions of C with the set of instances in the current context (module), and unsatisfiability otherwise.

2. Allow instances to be imported (all instances are assumed to be exported):

       import M (instance A τ₁ ⋯ τₙ, …)

   specifies that the instance of τ₁ ⋯ τₙ for class A is imported from M, in the module where the import clause occurs.

   In case of no explicit importation, all instances remain imported, as currently in Haskell (in order that well-typed programs remain well-typed).

Comments, corrections etc. are welcome. If the ideas are welcome or lead to a related welcome proposal, then a detailed one, with changes to the Haskell report, can be worked out.

(Definition: [Reachable and unreachable type variables in constraints]
Consider a constrained type C∪D⇒τ. A variable a ∈ tv(C) is reachable from V = tv(τ) if a ∈ V or if a ∈ tv(π) for some π ∈ C such that there exists b ∈ tv(π) such that b is reachable from V; otherwise it is unreachable.
For example, in (F a b, X a) ⇒ b, type variable 'a' is reachable from { b }, because 'a' occurs in constraint F a b, and b is reachable. Similarly, if C = (F a b, G b c, X c), then c is reachable from {a}.)
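For readers following along, the contrast drawn here can be reproduced in current GHC. The sketch below is illustrative, not from the thread (the instances and the names `normalise`/`demo` are invented): `show (read s)` is fixed by a simple annotation, whereas `fx` is only accepted under `AllowAmbiguousTypes`, and every use site must pin both type variables.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances,
             AllowAmbiguousTypes, ScopedTypeVariables, TypeApplications #-}

-- The (show . read) ambiguity: an annotation on the intermediate
-- value resolves it.
normalise :: String -> String
normalise s = show (read s :: Int)   -- without ":: Int" GHC reports an ambiguous type

-- The fx example: no annotation inside the definition can resolve 'a',
-- because 'a' does not appear in the result type 'b'.
class F a b where f :: a -> b
class X a where x :: a

instance X Int where x = 42
instance F Int String where f = show

fx :: forall a b. (F a b, X a) => b   -- accepted only with AllowAmbiguousTypes
fx = f (x :: a)

demo :: String
demo = fx @Int @String                -- each use site must pin both 'a' and 'b'
```

This is the behaviour the proposal wants to change: resolve the unreachable variable from the instances in scope, instead of demanding a type application at every use site.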
Kind regards, Carlos [1] Optional Type Classes for Haskell, Rodrigo Ribeiro, Carlos Camarão, Lucília Figueiredo, Cristiano Vasconcellos, SBLP'2016 (20th Brazilian Symposium on Programming Languages), Marília, SP, September 19-23, 2016. [2] https://github.com/carlos1camarao/ghc-proposals/blob/d81c1f26298961ac635ce0724bb76164b418866b/expression-ambiguity.rst === Em 2018-10-10 22:16, Anthony Clayden escreveu: > On Mon, 8 Oct 2018 at 8:41 PM, Simon Peyton Jones wrote > >> You may be interested in Carlos Camarao’s interesting work. For a >> long time now he has advocated (in effect) making each function into >> its own type class, rather that grouping them into classes. >> Perhaps that is in line with your thinking. > >> > > Could I ask Simon direct, since it was he who introduced the topic. > When you say "interesting work", what is that evaluation based on? Is > there a paper or summary you've seen that expresses the ideas? > (Because despite the exhaustive and exhausting rounds on github last > year, and further comment on this thread, I just can't see anything > workable. And the papers that Carlos references ring alarm bells for > me, just looking at the Abstracts, let alone delving into the > impenetrable type theory.) > > And could I ask Carlos: are we allowed to know who peer-reviewed your > papers? Specifically, was it someone who's up with the state of the > art in GHC? > > Carlos/his team are making bold claims: to provide the functionality > of FunDeps/Type functions, Overlapping Instances/Closed Type Families, > to avoid the infamous `read . show` ambiguity, to avoid the equally > infamous 'orphan instances' incoherence, to preserve principal typing, > and to keep it "simple". > > Then I have to say that if there's evidence for those claims, it's not > appearing in the papers. Really the only example presented is `read . > show` (plus record field disambiguation). Yes I'd hope the approach > for a simple example is "simple". 
It doesn't seem any more simple than
> putting an explicit type signature (or we could use type application
> `@` these days). But I don't expect that would be the place to show
> off the power/expressivity.
>
> Thank you
> AntC
> _______________________________________________
> Haskell-prime mailing list
> Haskell-prime at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime

From anthony_clayden at clear.net.nz  Sun Oct 14 03:20:01 2018
From: anthony_clayden at clear.net.nz (Anthony Clayden)
Date: Sun, 14 Oct 2018 16:20:01 +1300
Subject: A question about run-time errors when class members are undefined
In-Reply-To:
References:
Message-ID:

Thank you Carlos, but oh dear this is fast becoming as exasperating as the github rounds.

We've been talking about modular/scoped instances. All of a sudden you've introduced MPTCs, which nobody even mentioned. And you have a definition of "reachable" which is not GHC's definition: '"reachable" is a conservative approximation to "functionally dependent".'
https://downloads.haskell.org/~ghc/7.6.3/docs/html/users_guide/other-type-extensions.html

(That's from an old version of the User's Guide. It works as an approximation because it assumes a) Functional Dependencies are declared; b) all methods observe the FunDeps declared for their class; c) all instances are consistent with the FunDeps. These days GHC has a better-structured approach, which SPJ explained in his comment on github last year.)
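The "reachable ≈ functionally dependent" approximation quoted above can be seen concretely. In this sketch (the class and names are invented for illustration), the intermediate type variable in `length (conv n)` is only resolvable because the declared FunDep lets GHC improve it:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

-- 'b' is declared functionally dependent on 'a'.
class Convert a b | a -> b where
  conv :: a -> b

instance Convert Int String where
  conv = show

-- In (length (conv n)) the wanted constraint is Convert Int [a0]:
-- 'a0' is unreachable from the signature, but the FunDep improves it
-- to Char, selecting the instance. Without "| a -> b" GHC rejects this.
roundTripLen :: Int -> Int
roundTripLen n = length (conv n)
```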
Haskell-prime is for considering changes to the language standard. It expects any proposal to be already well-established and stable in at least one version of Haskell. [**] The place to discuss 'speculative' proposals is nowadays github. Even on github, it would strengthen a proposal's appeal if there's an already-developed prototype, possibly not in Haskell but some related language. Proofs of useful properties (like Principal typing) add to the appeal, but are not strongly persuasive without lots of use cases. > Comments, corrections etc. are welcome. If the ideas are welcome or lead to a related welcome proposal, ... Possibly the place to try to clarify the ideas is the cafe. Or maybe if it's leading towards a proposal, a github 'Issue' rather than 'Pull request'. (That has the advantage of a more permanent record, and more markup/formatting than email without being as awkward to format as rst.) [**] You'd expect by the 'well-established' criterion, MPTCs would be a strong candidate for considering in Haskell-prime: already anticipated in 1988; two implementations, each from nearly 20 years ago, stable since about 2006. But there's a whole swirl of difficulties around them, which I tried to summarise here https://mail.haskell.org/pipermail/haskell-prime/2018-October/004367.html It's not exactly MPTCs, but how they interact with other typeclass/instance features. In none of the discussion on those github rounds did Carlos/Rodrigo seem to be aware of those difficulties -- which are well-known and well-documented. Then to come back to the issue of modular/scoped instances -- i.e. the infamous 'orphan instances'. For the record [***] let's explain Carter's >> local scoping for type classes is flat out not gonna happen in the haskell language standard any time soon. A. local scoping breaks referential transparency. I can't replace a function by its definition, and get the same behaviour. B. local scoping breaks parametricity. 
The behaviour of a function depends not only on its arguments (value and type) but also on some invisible typing baked into the function from what was in scope at its definition site.

Referential transparency and Parametricity are powerful, simple principles for reasoning about programs. Anything that breaks them is immediately not "simple" IMO.

C. local scoping also breaks something related; I'm not sure if there's a technical term: 'transparency of type improvement'? What gets baked in is an instance selection that includes type improvement. So even if I replace a function by its definition, **and** give it exactly the type signature from its definition site, I still get different behaviour -- specifically different type improvement.

With C. we threaten type soundness: the compiler would be entitled to use either the definition site's type improvement or the usage site's or both. And then infer different types for the same expression. And then get a segfault in the executable. If Carlos/team haven't experienced that yet in their prototypes, that's down to pure luck and GHC being so well-structured. (I'd guess GHC would catch it at the core lint typecheck, as a last resort.) The threat to soundness is Edward K's concern here
https://mail.haskell.org/pipermail/haskell-prime/2018-April/004358.html

[***] I say "for the record", partly to give a heads up to Jurriaan, who started this thread, for educational/learning purposes; and partly because it's surprising how often 'local instances' come up on Haskell-prime. Where too often = more than never.

AntC

On Sat, 13 Oct 2018 at 4:04 AM, wrote:

> Hi.
>
> A concise proposal for the introduction of MPTCs in Haskell follows.
> A similar ghc-proposal has been written before, but without success
> (certainly it would be better if some experimentation in ghc was done
> first, as Carter suggested). The proposal is based essentially on the
> following (1. crucial, 2. desirable):
>
> 1. Change the ambiguity rule.
> > Consider for example: > > class F a b where f:: a → b > class X a where x:: a > fx = f x > > The type of fx, (F a b, X a) ⇒ b, should not be ambiguous: in > distinct contexts, fx can have distinct types (if ambiguity, and > 'improvement', are as defined below). Note: agreeing with this > view can lead to far-reaching consequences, e.g. support of > overloaded record fields [1,Section 7], polymonads [2] etc. > > Further examples can be discussed but this example conveys the > main idea that ambiguity should be changed; unlike the example > of (show . read), no type annotation can avoid ambiguity of > polymorphic fx in current Haskell. > > Ambiguity should be a property of an expression, defined as > follows: if a constraint set C on the constrained type C∪D⇒t of > an expression has unreachable variables, then satisfiability is > tested for C (a definition of reachable and unreachable type > variables is at the end of this message), and: > > - if there is a single unifying substitution of C with the set > of instances in the current context (module), then C can be > removed (overloading is resolved) and C∪D⇒t 'improved' to > D⇒t; > > - otherwise there is a type error: ambiguity if there are two > or more unifying substitutions of C with the set of instances > in the current context (module), and unsatisfiability > otherwise. > > 2. Allow instances to be imported (all instances are assumed to be > exported): > > import M (instance A τ₁ ⋯ τₙ , … ) > > specifies that the instance of τ₁ ⋯ τₙ for class A is > imported from M, in the module where the import clause > occurs. > > In case of no explicit importation, all instances remain > imported, as currently in Haskell (in order that well-typed > programs remain well-typed). > > Comments, corrections etc. are welcome. If the ideas are welcome or > lead to a related welcome proposal, then a detailed one, with changes > to the Haskell report, can be worked out. 
> > (Definition: [Reachable and unreachable type variables in constraints] > Consider a constrainted type C∪D⇒τ. A variable a ∈ tv(C) is reachable > from V = tv(τ) if a ∈ V or if a ∈ tv(π) for some π ∈ C such that there > exists b ∈ tv(π) such that b is reachable from V; otherwise it is > unreachable. > For example, in (F a b, X a) ⇒ b, type variable 'a' is reachable from > { b }, because 'a' occurs in constraint F a b, and b is reachable. > Similarly, if C = (F a b, G b c, X c), then c is reachable from {a}.) > > Kind regards, > > Carlos > > [1] Optional Type Classes for Haskell, > Rodrigo Ribeiro, Carlos Camarão, Lucília Figueiredo, Cristiano > Vasconcellos, > SBLP'2016 (20th Brazilian Symposium on Programming Languages), > Marília, SP, September 19-23, 2016. > > [2] > > https://github.com/carlos1camarao/ghc-proposals/blob/d81c1f26298961ac635ce0724bb76164b418866b/expression-ambiguity.rst > > === > Em 2018-10-10 22:16, Anthony Clayden escreveu: > > On Mon, 8 Oct 2018 at 8:41 PM, Simon Peyton Jones wrote > > > >> You may be interested in Carlos Camarao’s interesting work. For a > >> long time now he has advocated (in effect) making each function into > >> its own type class, rather that grouping them into classes. > >> Perhaps that is in line with your thinking. > > > >> > > > > Could I ask Simon direct, since it was he who introduced the topic. > > When you say "interesting work", what is that evaluation based on? Is > > there a paper or summary you've seen that expresses the ideas? > > (Because despite the exhaustive and exhausting rounds on github last > > year, and further comment on this thread, I just can't see anything > > workable. And the papers that Carlos references ring alarm bells for > > me, just looking at the Abstracts, let alone delving into the > > impenetrable type theory.) > > > > And could I ask Carlos: are we allowed to know who peer-reviewed your > > papers? Specifically, was it someone who's up with the state of the > > art in GHC? 
> > > > Carlos/his team are making bold claims: to provide the functionality > > of FunDeps/Type functions, Overlapping Instances/Closed Type Families, > > to avoid the infamous `read . show` ambiguity, to avoid the equally > > infamous 'orphan instances' incoherence, to preserve principal typing, > > and to keep it "simple". > > > > Then I have to say that if there's evidence for those claims, it's not > > appearing in the papers. Really the only example presented is `read . > > show` (plus record field disambiguation). Yes I'd hope the approach > > for a simple example is "simple". It doesn't seem any more simple than > > putting an explicit type signature (or we could use type application > > `@` these days). But I don't expect that would be the place to show > > off the power/expressivity. > > > > Thank you > > AntC > > _______________________________________________ > > Haskell-prime mailing list > > Haskell-prime at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben.franksen at online.de Wed Oct 24 08:26:05 2018 From: ben.franksen at online.de (Ben Franksen) Date: Wed, 24 Oct 2018 10:26:05 +0200 Subject: A question about run-time errors when class members are undefined In-Reply-To: References: Message-ID: Am 06.10.2018 um 05:18 schrieb Anthony Clayden: > On Sat, 6 Oct 2018 at 9:47 AM, Petr Pudlák wrote: > such as the important laws between `return` and `>>=`. And then for example >> a class with just `return` doesn't give any information what `return x` >> means or what should be its properties. >> > > Then make Bind a superclass constraint on `return` (or vice versa, or both > ways). > > Just as the laws for Num's methods are defined in terms of equality > > x + negate x == fromInteger 0 -- for example > > Talking about laws is a red herring: you can't declare the laws/the > compiler doesn't enforce them or rely on them in any way. 
Indeed the
> Lensaholics seem to take pleasure in building lenses that break the (van
> Laarhoven) laws.

I strongly disagree with this. Class laws are absolutely essential. They are the main distinguishing feature of Haskell classes versus the usual ad-hoc overloading found in most mainstream (e.g. OO) languages.

Using '+' for string concatenation? That's just a poor work-around for languages that only support a fixed set of traditional operators. And if you have a Monoid or Semigroup class that doesn't require or even suggest commutativity of the operator, but clearly states that associativity is required, then I see absolutely no reason to use '+' for that.

That the compiler can't enforce the laws is irrelevant. Laws are a contract and violating it is a bug. Non law-abiding lenses like 'filtered' are clearly documented with severe warnings attached. To cite them as proof that people take pleasure in violating class laws is ridiculous.

Granted, classes that combine multiple methods are not /required/ to state laws. But they offer a convenient place to put them.

> For example, in Haskell we could have
>>
>> class (Return m, Bind m) => Monad m where
>>
>> without any methods specified. But instances of `Monad` should be only
>> such types for which `return` and `>>=` satisfy the monad laws.
>>
>
> First: what does "satisfy the xxx laws" mean? The Haskell report and GHC's
> Prelude documentation state a bunch of laws; and it's a good discipline to
> write down laws if you're creating a class; but it's only documentation.

Why do you say "only"? Documentation is essential and documentation in the form of laws (properties) is the most useful sort of documentation. And many class laws (though not all) /can/ be formally expressed as Haskell code and thus tested with e.g. quickcheck.
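The point that many laws can be expressed and tested as code can be sketched without any library (in practice one would reach for QuickCheck; the property names and the tiny exhaustive domain here are illustrative):

```haskell
-- Monoid laws for [Int], stated as executable properties and checked
-- exhaustively over a small domain; QuickCheck generalises exactly this
-- idea with random generation and shrinking.
smallLists :: [[Int]]
smallLists = [[], [1], [2, 3], [4, 5, 6]]

prop_assoc :: Bool
prop_assoc = and [ (x <> y) <> z == x <> (y <> z)
                 | x <- smallLists, y <- smallLists, z <- smallLists ]

prop_identity :: Bool
prop_identity = all (\x -> mempty <> x == x && x <> mempty == x) smallLists
```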
> Arguably IO, the most commonly used Monad, breaks the Monad laws in rather
> serious ways because it imposes sequence of execution;

I think such a bold statement should be accompanied by an example that demonstrates it.

Cheers
Ben

From ben.franksen at online.de  Wed Oct 24 08:30:21 2018
From: ben.franksen at online.de (Ben Franksen)
Date: Wed, 24 Oct 2018 10:30:21 +0200
Subject: A question about run-time errors when class members are undefined
In-Reply-To:
References:
Message-ID:

On 08.10.2018 at 11:21, Anthony Clayden wrote:
> I wonder how different would have been the history of Haskell if Wadler had
> not borrowed the terminology "class" and "method". Since Helium has a focus
> on Haskell learners/beginners: I wonder how much confusion we might have
> saved those coming from OOP where the terms mean something really quite
> different. We might have avoided "class" altogether; and talked of
> "overloaded function".

Similar to C++, perhaps?

Cheers
Ben

From simonpj at microsoft.com  Mon Oct 8 06:56:51 2018
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Mon, 08 Oct 2018 06:56:51 -0000
Subject: A question about run-time errors when class members are undefined
In-Reply-To:
References:
Message-ID:

Anthony

You may be interested in Carlos Camarao’s interesting work. For a long time now he has advocated (in effect) making each function into its own type class, rather than grouping them into classes. Perhaps that is in line with your thinking.
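The "each function its own type class" idea described here is expressible in today's GHC; a minimal sketch (the class and function names below are invented for illustration), with the traditional grouped class recovered as a constraint synonym:

```haskell
{-# LANGUAGE ConstraintKinds #-}

-- One class per method; laws can still relate the two classes in
-- documentation, as elsewhere in this thread.
class Return m where ret  :: a -> m a
class Bind   m where bind :: m a -> (a -> m b) -> m b

-- The grouped class becomes a constraint synonym.
type MonadLike m = (Return m, Bind m)

instance Return Maybe where ret = Just
instance Bind Maybe where
  bind Nothing  _ = Nothing
  bind (Just a) k = k a

halve :: Int -> Maybe Int
halve n = if even n then ret (n `div` 2) else Nothing

-- A function needing both methods just names the synonym.
chain :: MonadLike m => m Int -> m Int
chain mx = bind mx (\v -> ret (v + 1))
```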
https://homepages.dcc.ufmg.br/~camarao/

Simon

From: Haskell-prime On Behalf Of Anthony Clayden
Sent: 06 October 2018 04:19
To: Petr Pudlák
Cc: haskell-prime at haskell.org
Subject: Re: A question about run-time errors when class members are undefined

On Sat, 6 Oct 2018 at 9:47 AM, Petr Pudlák wrote:

IIRC one of the arguments against having many separate classes is that a class is not just a set of methods, it's also the relations between them,

Hi Petr, I was talking about splitting out Haskell's current class hierarchy as a step towards doing away with classes altogether. If your language insists on methods being held in classes, that's just tedious bureaucracy to invent class names. The relations between classes (including between single-method classes) can be captured through superclass constraints. For example, in the Haskell 2010 report

    class (Eq a, Show a) => Num a where ...

such as the important laws between `return` and `>>=`. And then for example a class with just `return` doesn't give any information what `return x` means or what should be its properties.

Then make Bind a superclass constraint on `return` (or vice versa, or both ways). Just as the laws for Num's methods are defined in terms of equality

    x + negate x == fromInteger 0    -- for example

Talking about laws is a red herring: you can't declare the laws/the compiler doesn't enforce them or rely on them in any way. Indeed the Lensaholics seem to take pleasure in building lenses that break the (van Laarhoven) laws.

That said, one of the really painful points of Haskell is that refactoring a hierarchy of type-classes means breaking all the code that implements them. This was also one of the main reasons why making Applicative a superclass of Monad took so long. It'd be much nicer to design type-classes in such a way that an implementation doesn't have to really care about the exact hierarchy.

Yes that's what I was saying. Unfortunately for Haskell's Num class, I think it's just too hard.
So a new language has an opportunity to avoid that. If OTOH Helium wants to slavishly follow Haskell, I'm wondering what is the point of Helium. With Applicative, IIRC, refactoring had to wait until we got Constraint kinds and type families that could produce them. Would Helium want to put all that into a language aimed at beginners?

For example, in Haskell we could have

    class (Return m, Bind m) => Monad m where

without any methods specified. But instances of `Monad` should be only such types for which `return` and `>>=` satisfy the monad laws.

First: what does "satisfy the xxx laws" mean? The Haskell report and GHC's Prelude documentation state a bunch of laws; and it's a good discipline to write down laws if you're creating a class; but it's only documentation. Arguably IO, the most commonly used Monad, breaks the Monad laws in rather serious ways because it imposes sequence of execution; and it would be unfit for purpose if it were pure/lazy function application.

Then: what do you think a language could do to detect if some instance satisfies the laws? (Even supposing you could declare them.)

And this would distinguish them from types that have both `Return` and `Bind` instances, but don't satisfy the laws.

You could have distinct classes/distinct operators. Oh, but then `do` notation would break.

Unfortunately I'm not sure if there is a good solution for achieving both these directions.

I don't think there's any solution for achieving "satisfy the xxx laws".

AntC

On Thu, 4 Oct 2018 at 3:56, Anthony Clayden wrote:

> We are adding classes and instances to Helium.
> We wondered about the aspect that it is allowed to have a class instance
> of which not all fields have a piece of code/value associated with them, ...

I have a suggestion for that. But first let me understand where you're going with Helium. Are you aiming to slavishly reproduce Haskell's classes/instances, or is this a chance for a rethink?
Will you want to include associated types and associated datatypes in the classes? Note those are just syntactic sugar for top-level type families and data families. It does aid readability to put them within the class.

I would certainly rethink the current grouping of methods into classes. Number purists have long wanted to split class Num into Additive vs Multiplicative. (Additive would be a superclass of Multiplicative.) For the Naturals perhaps we want Presburger arithmetic, then Additive just contains (+), with `negate` certainly in a different class, perhaps (-) subtract also in a dedicated class. Also there's people wanting Monads with just `bind` not `return`. But restructuring the Prelude classes/methods is just too hard with all that legacy code. Even though you should be able to do:

    class (Additive a, Subtractive a, Negative a, Multiplicative a, Divisive a) => Num a

Note there's a lot of classes with a single method, and that seems to be an increasing trend. Historically it wasn't so easy in Haskell to do that superclass constraints business; if it had been, perhaps there would be more classes with a single method. Then there's some disadvantages to classes holding multiple methods:

* the need to provide an overloading for every method, even though it may not make sense (or suffer a run-time error, as you say)
* the inability to 'fine tune' methods for a specific datatype [**]
* an internal compiler/object code cost of passing a group of methods in a dictionary as a tuple (as opposed to directly selecting a single method)

[**] Nats vs Integrals vs Fractionals for `Num`; and (this will be controversial, but ...) Some people want to/some languages do use (+) for concatenating Strings/lists. But the other methods in `Num` don't make any sense.

If all your classes have a single method, the class name would seem to be superfluous, and the class/instance decl syntax seems too verbose. So here's a suggestion.
I'll need to illustrate with some definite syntax, but there's nothing necessary about it. (I'll borrow the Explicit Type Application `@`.) To give an instance overloading for method `show` or (==):

    show @Int = primShowInt      -- in effect pattern matching on the type
    (==) @Int = primEqInt        -- so see showList below

That is: I'm giving an overloading for those methods on type `Int`. How do I declare those methods are overloadable? In their signature:

    show @a :: a -> String       -- compare  show :: Show a => a -> String
    (==) @a :: a -> a -> Bool

Non-overloadable functions don't have `@a` to the left of `::`. How do I show that a class has a superclass constraint? That is: a method has a supermethod constraint, we'll still use `=>`:

    show @a :: showsPrec @a => a -> String    -- supermethod constraint
    show @[a] :: show a => [a] -> String      -- instance decl, because not bare a, with constraint =>
    show @[a] xss = showList xss
    (*) @a :: (+) @a => a -> a -> a

Is this idea completely off the wall? Take a look at Wadler's original 1988 memo introducing what became type classes.
http://homepages.inf.ed.ac.uk/wadler/papers/class-letter/class-letter.txt

It reviews several possible designs, but not all those possibilities made it into his paper (with Stephen Blott) later in 1988/January 1989. In particular look at Section 1's 'Simple overloading'. It's what I'm suggesting above (modulo a bit of syntax). At the end of Section 1, Wadler rejects this design because of "potential blow-ups". But he should have pushed the idea a bit further. Perhaps he was scared to allow function/method names into type signatures? (I've already sneaked that in above with constraints.) These days Haskell is getting more relaxed about namespaces: the type `@`pplication exactly allows type names appearing in terms. So to counter his example, the programmer writes:

    square x = x * x             -- no explicit signature given
    square :: (*) @a => a -> a   -- signature inferred, because (*) is overloaded
    rms = sqrt .
square                 -- no explicit signature
    rms :: sqrt @a => a -> a     -- signature inferred

Note the inferred signature for `rms` doesn't need `(*) @a` even though it's inferred from `square`. Because (*) is a supermethod of `sqrt`. `sqrt` might also have other supermethods, that amount to `Floating`.

> ... a run-time error results.
>
> Does anyone know of a rationale for this choice, since it seems rather unhaskell-like.

If you allow default method implementations (in the class, as Cale points out), then I guess you have to allow instance decls that don't mention all the methods. I think there should at least be a warning if there's no default method. Also beware the default method might have a more specific signature, which means it can't be applied for some particular instance.

Altogether, I'd say, the culprit is the strong bias in early Haskell to bunch methods together into classes. These days with Haskell's richer/more fine-tuned typeclass features: what do typeclasses do that can't be done more precisely at method level -- indeed that would _better_ be done at method level?

AntC

_______________________________________________
Haskell-prime mailing list
Haskell-prime at haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime
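The behaviour Jurriaan asked about at the start of the thread, and the warning discussed here, can be reproduced in a few lines; this sketch uses invented class and type names:

```haskell
-- An instance may omit a method that has no default implementation:
-- GHC only warns (-Wmissing-methods), and calling the missing method
-- raises a run-time error ("No instance nor default method").
class Describe a where
  describe :: a -> String
  summary  :: a -> String      -- no default method given

data T = T

instance Describe T where
  describe _ = "a T"
  -- 'summary' omitted: this still compiles, with a warning

ok :: String
ok = describe T

-- bad :: String
-- bad = summary T   -- type-checks, but crashes at run time
```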