From alberto at toscat.net  Tue Jun 6 11:38:38 2017
From: alberto at toscat.net (Alberto Valverde)
Date: Tue, 6 Jun 2017 13:38:38 +0200
Subject: 8.2.1-rc2 upgrade report
Message-ID:

Hi,

I've finally managed to upgrade all the dependencies of the proprietary
app I mentioned some days ago on this list, and there are good and bad
differences compared with 8.0.2 that I'd like to share.

The bad
-----------

* An optimized cold build (-O2) is about 3 times slower (~53s vs. ~2m55s)
  and consumes more memory (~2Gb vs. ~7Gb) at its peak.

The good
-------------

* An un-optimized cold build (-O0) takes about the same time (~21s, phew! :)
  It's maybe even slightly faster with 8.2 (too few and badly taken
  measurements to really know, though).

* The optimized executable is slightly faster and allocates less memory.
  For this app that makes up for the performance regression of the
  optimized build (which is almost always done by CI), IMHO.

I did only a couple of runs and only wrote down [1] the results of the last
run (which were similar to the previous ones), so take these observations
with a grain of salt (except maybe the optimized build slowdown, which
doesn't leave much room for variance to be skewing the results). I also
measured the peak memory usage by observing "top".

In case it gives a clue: the app is a multi-threaded 2D spread simulator
which deals with many mmapped Storable mutable vectors and has been
optimized over countless hours (by which I mean that it has (too) many
INLINE pragmas, mostly on polymorphic functions to aid in their
specialization). I think some of this information can be deduced from the
results I'm linking in the footer. I believe the INLINEs play a big part in
the slowdown, since the slowest modules to compile are the "Main" ones
which put everything together, along with the typical lens-th-heavy
"Types" ones.

I'd like to help by producing a reproducible and isolated benchmark, or a
better analysis, or ... so that someone more knowledgeable than me about
GHC internals can someday hopefully attack the regression. Any pointers on
what would help and where I can learn to do it?

Thanks!

[1] https://gist.github.com/albertov/46fbb13d940f67a569f9a25c1cb8154c

From simonpj at microsoft.com  Tue Jun 6 11:58:23 2017
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Tue, 6 Jun 2017 11:58:23 +0000
Subject: 8.2.1-rc2 upgrade report
In-Reply-To:
References:
Message-ID:

Thanks for the report.

Going from 67G to 56G allocation is a very worthwhile improvement in
runtime! Hurrah.

However, trebling compile time is very bad. It is (I think) far from
typical: generally 8.2 is faster at compiling than 8.0, so you must be
hitting something weird. Anything you can do to make a reproducible case
would be helpful. -dshow-passes shows the size of each intermediate form,
which at least sometimes shows where the big changes are.

Simon
From m.farkasdyck at gmail.com  Tue Jun 6 20:34:57 2017
From: m.farkasdyck at gmail.com (M Farkas-Dyck)
Date: Tue, 6 Jun 2017 12:34:57 -0800
Subject: Profiling plugins
Message-ID:

How is this done? I am working on ConCat
[https://github.com/conal/concat] and we need a profile of the plugin
itself. I tried "stack test --profile", but that does a profile of the
test program, not the plugin. Can I do this and not rebuild GHC?

From alberto at toscat.net  Thu Jun 8 12:57:12 2017
From: alberto at toscat.net (Alberto Valverde)
Date: Thu, 8 Jun 2017 14:57:12 +0200
Subject: 8.2.1-rc2 upgrade report
In-Reply-To:
References:
Message-ID:

Hi Simon,

Thanks for the pointer. I re-did both builds with -dshow-passes and wrote a
small script to plot the lines that summarize the elapsed time and
allocated memory per phase and module. I've uploaded the raw logs, a plot
of the results, and the script I wrote to generate it to
https://gist.githubusercontent.com/albertov/145ac5c01bfbadc5c9ff55e9c5c2e50e .
The plotted results live here:
https://gist.githubusercontent.com/albertov/145ac5c01bfbadc5c9ff55e9c5c2e50e/raw/8996644707fc5c18c1d42ad43ee31b1817509384/bench.png

Apparently, the biggest slowdown with respect to 8.0.2 occurs in the
SpecConstr and Simplifier passes of the Propag module (where the "main"
function is) and the Sigym4.Propag.Engine module (where the main algorithm
lives).

Are there any other tests that would be helpful for me to run? I'm not sure
where to start to create a reproducible case, but I'll see if I can come up
with something soon...

Alberto
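To make the report's description a little more concrete, here is a purely
hypothetical sketch (none of these names come from the real application) of
the INLINE-on-polymorphic-functions-over-Storable-vectors pattern mentioned
in the original message, the kind of code the thread suspects is behind the
extra Simplifier and SpecConstr work at -O2:

    import qualified Data.Vector.Storable.Mutable as VM
    import           Foreign.Storable             (Storable)

    -- A tiny polymorphic helper over a mutable Storable vector. Marking it
    -- INLINE copies its body into every call site, so the modules that put
    -- everything together get fully specialized code, at the cost of
    -- handing the optimizer a lot more Core to churn through.
    {-# INLINE stepCell #-}
    stepCell :: (Storable a, Num a) => VM.IOVector a -> Int -> a -> IO ()
    stepCell grid i dt = do
      x <- VM.unsafeRead grid i
      VM.unsafeWrite grid i (x + dt)

Whether this particular shape is what triggers the 8.2 regression is
exactly what a reduced test case would have to establish.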
From ezyang at mit.edu  Mon Jun 12 03:35:20 2017
From: ezyang at mit.edu (Edward Z. Yang)
Date: Sun, 11 Jun 2017 23:35:20 -0400
Subject: Profiling plugins
In-Reply-To:
References:
Message-ID: <1497238428-sup-6076@sabre>

Hello M,

Unfortunately, if you want detailed profiling, you will have to rebuild GHC
with profiling. Note that you can get basic heap profile information
without rebuilding GHC.

Edward
From winterkoninkje at gmail.com  Sun Jun 18 19:02:12 2017
From: winterkoninkje at gmail.com (wren romano)
Date: Sun, 18 Jun 2017 12:02:12 -0700
Subject: Untouchable type variables
In-Reply-To: <1493918453.8081.30.camel@acme.softbase.org>
References: <1493918453.8081.30.camel@acme.softbase.org>
Message-ID:

On Thu, May 4, 2017 at 10:20 AM, Wolfgang Jeltsch wrote:
> Today I encountered for the first time the notion of an “untouchable”
> type variable. I have no clue what this is supposed to mean.

Fwiw, "untouchable" variables come from existential quantification (since
the variable must be held abstract so that it doesn't escape). More often
we see these errors when using GADTs and TypeFamilies, since both of those
often have existentials hiding under the hood in how they deal with
indices.

> A minimal example that exposes my problem is the following:
>
>> {-# LANGUAGE Rank2Types, TypeFamilies #-}
>>
>> import GHC.Exts (Constraint)
>>
>> type family F a b :: Constraint
>>
>> data T b c = T
>>
>> f :: (forall b . F a b => T b c) -> a
>> f _ = undefined

FWIW, the error comes specifically from the fact that @F@ is a family. If
you use a plain old type class, or if you use a type alias (via
-XConstraintKinds), then it typechecks just fine. So it's something about
how the arguments to @F@ are indices rather than parameters.

I have a few guesses about why the families don't work here, but I'm not
finding any of them particularly convincing. Really, imo, @c@ should be
held abstract within the type of the argument, since it's universally
quantified from outside. Whatever @F a b@ evaluates to can't possibly have
an impact on @c@[1]. I'd file a bug report. If it's just an implementation
defect, then the GHC devs will want to know. And if there's actually a
type-theoretic reason I missed, it'd be good to have that documented
somewhere.

[1] For three reasons combined: (1) @F a b@ can't see @c@, so the only
effect evaluating @F a b@ could possibly have on @c@ is to communicate via
some side channel, of which I only see two: (2) the @(a,c)@ from outside
are quantified parametrically, thus nothing from the outside scope could
cause information to flow from @a@ to @c@; (3) the @T@ is parametric in
@(b,c)@ (since it is not a GADT), so it can't cause information to flow
from @b@ to @c@.

--
Live well,
~wren
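For reference, here is one concrete rendering of the "type alias via
-XConstraintKinds" variant mentioned above, which is reported to typecheck.
The particular synonym body, (Show a, Show b), is just an arbitrary
stand-in chosen for this sketch; the rest is the program from the quoted
example:

    {-# LANGUAGE Rank2Types, ConstraintKinds #-}

    -- Same shape as the original, but with F as a constraint synonym
    -- instead of an open type family.
    type F a b = (Show a, Show b)

    data T b c = T

    f :: (forall b . F a b => T b c) -> a
    f _ = undefined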
From g9ks157k at acme.softbase.org  Sun Jun 18 19:30:12 2017
From: g9ks157k at acme.softbase.org (Wolfgang Jeltsch)
Date: Sun, 18 Jun 2017 22:30:12 +0300
Subject: Untouchable type variables
In-Reply-To:
References: <1493918453.8081.30.camel@acme.softbase.org>
Message-ID: <1497814212.15180.3.camel@acme.softbase.org>

On Sunday, 18 Jun 2017, at 12:02 -0700, wren romano wrote:
> > > {-# LANGUAGE Rank2Types, TypeFamilies #-}
> > >
> > > import GHC.Exts (Constraint)
> > >
> > > type family F a b :: Constraint
> > >
> > > data T b c = T
> > >
> > > f :: (forall b . F a b => T b c) -> a
> > > f _ = undefined
>
> FWIW, the error comes specifically from the fact that @F@ is a family.
> If you use a plain old type class, or if you use a type alias (via
> -XConstraintKinds), then it typechecks just fine. So it's something
> about how the arguments to @F@ are indices rather than parameters.
>
> I have a few guesses about why the families don't work here, but I'm
> not finding any of them particularly convincing. Really, imo, @c@
> should be held abstract within the type of the argument, since it's
> universally quantified from outside. Whatever @F a b@ evaluates to
> can't possibly have an impact on @c@. I'd file a bug report. If it's
> just an implementation defect, then the GHC devs will want to know.
> And if there's actually a type-theoretic reason I missed, it'd be
> good to have that documented somewhere.

I already filed a bug report:

    https://ghc.haskell.org/trac/ghc/ticket/13655

In a comment, Simon says that this behavior is according to the rules. I am
just not sure whether the rules have to be as they are.

All the best,
Wolfgang

From dan.doel at gmail.com  Mon Jun 19 04:45:56 2017
From: dan.doel at gmail.com (Dan Doel)
Date: Mon, 19 Jun 2017 00:45:56 -0400
Subject: Untouchable type variables
In-Reply-To:
References: <1493918453.8081.30.camel@acme.softbase.org>
Message-ID:

This doesn't sound like the right explanation to me. Untouchable variables
don't have anything (necessarily) to do with existential quantification.
What they have to do with is GHC's (equality) constraint solving.

I don't completely understand the algorithm. However, from what I've read
and seen of the way it works, I can tell you why you might see the error
reported here...

When type checking moves under a 'fancy' context, all (not sure if it's
actually all) variables made outside that context are rendered untouchable,
and are not able to be unified with local variables inside the context. So
the problem that is occurring is related to `c` being bound outside the
'fancy' context `F a b`, but used inside (and maybe not appearing in the
fancy context is a factor). And `F a b` is fancy because GHC just has to
assume the worst about type families (that don't reduce, anyway). Equality
constraints are the fundamental 'fancy' context, I think.

The more precise explanation is, of course, in the paper describing the
current type-checking algorithm. I don't recall the motivation, but they do
have one. :) Maybe it's overly aggressive, but I really can't say myself.

-- Dan
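For readers who have not met the error elsewhere: the "given equality
constraints make outer variables untouchable" behaviour described above is
easiest to see with a small GADT. The following sketch is not from this
thread; the names are made up, and the exact error wording varies between
GHC versions:

    {-# LANGUAGE GADTs #-}

    data G a where
      G1 :: G Bool
      G2 :: G a

    -- Without a type signature GHC rejects the function below: matching
    -- on G1 brings the given equality (a ~ Bool) into scope, and the
    -- types of 'y' and of the case result, invented outside that match,
    -- become untouchable inside it, so they cannot be unified with Bool
    -- there.
    --
    -- test x y = case x of
    --   G1 -> y
    --   G2 -> False

    -- Supplying the signature makes it compile:
    test :: G a -> Bool -> Bool
    test x y = case x of
      G1 -> y
      G2 -> False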
From clintonmead at gmail.com  Mon Jun 19 05:12:39 2017
From: clintonmead at gmail.com (Clinton Mead)
Date: Mon, 19 Jun 2017 15:12:39 +1000
Subject: Untouchable type variables
In-Reply-To:
References: <1493918453.8081.30.camel@acme.softbase.org>
Message-ID:

I'm no expert on type deduction, but it seems to me that GHC rejecting this
is perfectly reasonable. Indeed, GHC happily compiles it if you add an
instance which can reduce, like so:

> type instance F a b = ()

But without that instance, that function is never callable. There could be
a whole raft of instances like:

> type instance F a Int = ...
> type instance F a Double = ...

etc. As F is open, it must always assume that there could be more instances
added, any of which could make the function never callable. So it has to
assume it can never be called. Hence it throws an error, which we can
quieten with AllowAmbiguousTypes, but even then "f" is not callable.
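Spelled out, the catch-all-instance version of the original example that is
claimed above to compile looks something like the following sketch
(ConstraintKinds is assumed here so that () can stand for the empty
constraint on the right-hand side; the other lines are from the original
example):

    {-# LANGUAGE Rank2Types, TypeFamilies, ConstraintKinds #-}

    import GHC.Exts (Constraint)

    type family F a b :: Constraint

    -- The catch-all instance: F a b now reduces for every a and b.
    type instance F a b = ()

    data T b c = T

    f :: (forall b . F a b => T b c) -> a
    f _ = undefined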
From anthony_clayden at clear.net.nz  Sat Jun 24 00:44:30 2017
From: anthony_clayden at clear.net.nz (Anthony Clayden)
Date: Sat, 24 Jun 2017 12:44:30 +1200
Subject: Proposal: Instance apartness guards
Message-ID: <594db5ee.312.4d.22425@clear.net.nz>

After years of pondering this idea (in various forms), and several rounds
of discussion on several forums, I've written it up.

"This proposal tackles the thorny topic of Overlapping instances, for both
type classes and Type Families/Associated types, by annotating instance
heads with type-level apartness Guards. Type-level disequality predicates
appear in Sulzmann & Stuckey 2002; in the type-level ‘case selection’ in
HList 2004; and in various guises in Haskell cafe discussions in following
years. This proposal builds on the apartness testing implemented as part of
the Closed Type Families work."

All feedback welcome.

https://github.com/AntC2/ghc-proposals/blob/instance-apartness-guards/proposals/0000-instance-apartness-guards.rst

AntC

From ch.martin at gmail.com  Mon Jun 26 17:17:11 2017
From: ch.martin at gmail.com (Chris Martin)
Date: Mon, 26 Jun 2017 17:17:11 +0000
Subject: Contradictions about DeriveDataTypeable in the manual?
Message-ID:

Exhibit A:

> With -XDeriveDataTypeable, you can derive instances of the class Data,
> defined in Data.Data. See "Deriving Typeable instances" for deriving
> Typeable.

Exhibit B:

> -XDeriveDataTypeable
>     Enable automatic deriving of instances for the Typeable typeclass

Exhibit C:

> Derived instances of Typeable are ignored, and may be reported as an
> error in a later version of the compiler.

----------------

A and B seem contradictory: is this extension for deriving Data, or for
deriving Typeable?

B and C seem... not technically contradictory, but why is there an
extension that enables the deriving of instances that will just be ignored?

Is this extension meant to be deprecated?

From experimentation, it seems like all types automatically get Typeable
instances whether you declare it or not. Is that accurate?

From david.feuer at gmail.com  Mon Jun 26 18:05:31 2017
From: david.feuer at gmail.com (David Feuer)
Date: Mon, 26 Jun 2017 14:05:31 -0400
Subject: Contradictions about DeriveDataTypeable in the manual?
In-Reply-To:
References:
Message-ID:

In the old days, DeriveDataTypeable enabled deriving both Data and
Typeable. As of a fairly recent GHC version (7.10? 8.0?), Typeable
instances are indeed derived automatically for all types that can get such
instances, so DeriveDataTypeable is only used for deriving Data instances.
I can't say whether it will ever be an error to write an explicit
`deriving Typeable` clause, but I don't see much point in making it one.

Unrelatedly, 8.2 has a complete overhaul of Typeable that you should take a
look at if you're interested in the class. The new Type.Reflection API is
much more powerful than the old Data.Typeable one.
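A small sketch of the behaviour described above (the Colour type is made up
for illustration): Data still needs the extension and an explicit deriving
clause, while Typeable works with neither.

    {-# LANGUAGE DeriveDataTypeable #-}

    import Data.Data     (Data, toConstr)
    import Data.Typeable (typeOf)

    -- No "deriving Typeable" anywhere; GHC provides that instance itself.
    data Colour = Red | Green | Blue
      deriving (Show, Data)

    main :: IO ()
    main = do
      print (typeOf Red)    -- uses the automatically provided Typeable instance
      print (toConstr Red)  -- uses the Data instance derived above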
From ch.martin at gmail.com  Mon Jun 26 19:11:28 2017
From: ch.martin at gmail.com (Chris Martin)
Date: Mon, 26 Jun 2017 19:11:28 +0000
Subject: Contradictions about DeriveDataTypeable in the manual?
In-Reply-To:
References:
Message-ID:

Thanks. I've opened a PR on GitHub to update the docs:

https://github.com/ghc/ghc/pull/48