From lexi.lambda at gmail.com Thu Aug 1 06:54:41 2019 From: lexi.lambda at gmail.com (Alexis King) Date: Thu, 1 Aug 2019 01:54:41 -0500 Subject: Properly writing typechecker plugins Message-ID: <55E0D054-1AF7-4611-A6DD-9B84A43FBC51@gmail.com> Hi all, I have recently decided to try writing a GHC typechecker plugin so I can get my hands on some extra operations on type-level strings. My plugin works, but only sort of—I know some things about it are plain wrong, and I have a sneaking suspicion that plenty of other things are not handled properly. First, some context about what I do and don’t already know: I have a high-level understanding of the basic concepts behind GHC’s solver. I understand what evidence is and what purpose it serves, I mostly understand the different flavors of constraints, and I think I have a decent grasp on some of the operational details of how individual passes work. I’ve spent a little time reading through comments in the GHC source code, along with pieces of the source code itself, but I’m sure my understanding is pretty patchy. With that out of the way, here are my questions: First, I’m trying to understand: why are wanted constraints passed to typechecker plugins unflattened? This is my single biggest point of confusion. It certainly seems like the opposite of what I want. Consider that I have a type family type family ToUpper (s :: Symbol) :: Symbol where {} that I wish to solve in my plugin. At first, I just naïvely looked through the bag of wanted constraints and looked for constraints of the shape t ~ ToUpper s but this isn’t good enough, since my plugin regularly receives constraints that look more like t ~ SomeOtherTypeFamily (ToUpper s) so I have to recursively grovel through every type equality constraint looking for an application of a family I care about. Furthermore, once I’ve found one, I’m not sure how to actually let GHC know that I’ve solved it—do I really have to just generate a new given constraint and let GHC’s solver connect the dots? I have seen the note on the typechecker plugins wiki page about possibly baking support for type families into the plugin interface, which would indeed be nicer than the status quo, but it seems odd to me that they aren’t just passed to plugins flattened, which seems like it would spare a lot of effort. Isn’t the flattened representation really what typechecker plugins would like to see, anyway? But let’s put families aside for a moment. I’m not just solving type families in my plugin, I’m also solving classes. These classes have no methods, but they do have functional dependencies. For example, I have a class class Append (a :: Symbol) (b :: Symbol) (c :: Symbol) | a b -> c, a c -> b, b c -> a which is like GHC.TypeLits.AppendSymbol, but the fundeps make GHC a bit happier when running it “backwards” (since GHC doesn’t really know about AppendSymbol’s magical injectivity, so it sometimes complains). In any case, I was hoping that GHC’s solver would handle the improvement afforded by the fundeps for me once I provided evidence for Append constraints, but that doesn’t seem to be the case. Currently, I am therefore manually generating derived constraints based on the functional dependency information, plumbing FunDepOrigin2 through and all. Is there some way to cooperate better with GHC’s solver so I don’t have to duplicate all that logic in my plugin? I guess one thing I didn’t try is returning given constraints from my solver instead of just solving them and providing evidence. 
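As a concrete point of reference for the example just below, here is the class in a self-contained, compilable form. This is only a sketch of the kind of declaration I mean, not the exact code from my plugin, and the dropPrefix helper is invented purely for illustration:

    {-# LANGUAGE DataKinds, FunctionalDependencies, KindSignatures, MultiParamTypeClasses #-}
    module AppendDemo where

    import Data.Proxy (Proxy (..))
    import GHC.TypeLits (Symbol)

    -- No instances are ever written in source code: the plugin itself
    -- discharges Append constraints.
    class Append (a :: Symbol) (b :: Symbol) (c :: Symbol)
        | a b -> c, a c -> b, b c -> a

    -- A toy use that runs the class "backwards" via the `a c -> b` fundep.
    dropPrefix :: Append a b c => Proxy a -> Proxy c -> Proxy b
    dropPrefix _ _ = Proxy

The fundep annotations are what should let GHC improve c from a and b, or b from a and c, once evidence for the constraint is available.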
That is, if my plugin received a [W] d1 :: Append "foo" "bar" c constraint, then instead of solving the constraint directly, I could leave it alone and instead return a new constraint [G] d2 :: Append "foo" "bar" "baz" and presumably GHC’s solver would use that constraint to improve and solve d1. But similar to my confusion about type families above, I’m uncertain if that’s the intended method or not, since it seems like it’s sort of circumventing the plugin API. Finally, on a related note, building evidence for these solver-generated typeclass instances is a bit of a pain. They have no methods, but they do sometimes have superclasses. Currently, I’ve been generating CoreExprs as carefully as I’m able to after reading through the desugaring code: I call dataConWrapId on the result of classDataCon, then use mkTyConApp and mkCoreApps on that directly. It seems to work okay now, but it didn’t always: -dcore-lint thankfully caught my mistakes, but I’ve been wondering if there’s a safer way to build the dictionary that I’ve been missing. That’s it for now—I’ve just been muddling through until things work. Once I get something that feels closer to right, maybe I’ll put the code somewhere and ask for more low-level comments if anyone would like to take the time to offer them, but for now, I’m still working on the high-level ideas. The wiki pages I’ve found have been very helpful; my appreciation to all who have contributed to them! Many thanks, Alexis -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Aug 1 08:01:58 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 1 Aug 2019 08:01:58 +0000 Subject: Properly writing typechecker plugins In-Reply-To: <55E0D054-1AF7-4611-A6DD-9B84A43FBC51@gmail.com> References: <55E0D054-1AF7-4611-A6DD-9B84A43FBC51@gmail.com> Message-ID: Alexis Thanks for writing this up so carefully. I hope that others will join in. And please then put the distilled thought onto the wiki page(s) so they are not lost. Some quick thoughts from me: * Flattening. I’m pretty sure we pass constraints unflattened because that’s what someone wanted at the time. It could easily be changed, but it might complicate the API. E.g. you might reasonably want to know the mapping from type variable to function application. There is no fundameental obstacle. * Letting the plugin add given constraints. This looks a bit like: let the plugin prove lemmas and hand them back to GHC (along with their proof) to exploit. Yes, that seems reasonable too. Again, something new in the API. I don’t understand enough of your type-class instance question to comment meaningfully, but perhaps others will. Nothing about the plugin interface is cast in stone. There are quite a few “customers” but few enough that they’ll probably be happy to adapt to changes. Go for it, in consultation with them! Simon From: ghc-devs On Behalf Of Alexis King Sent: 01 August 2019 07:55 To: ghc-devs at haskell.org Subject: Properly writing typechecker plugins Hi all, I have recently decided to try writing a GHC typechecker plugin so I can get my hands on some extra operations on type-level strings. My plugin works, but only sort of—I know some things about it are plain wrong, and I have a sneaking suspicion that plenty of other things are not handled properly. First, some context about what I do and don’t already know: I have a high-level understanding of the basic concepts behind GHC’s solver. 
I understand what evidence is and what purpose it serves, I mostly understand the different flavors of constraints, and I think I have a decent grasp on some of the operational details of how individual passes work. I’ve spent a little time reading through comments in the GHC source code, along with pieces of the source code itself, but I’m sure my understanding is pretty patchy. With that out of the way, here are my questions: 1. First, I’m trying to understand: why are wanted constraints passed to typechecker plugins unflattened? This is my single biggest point of confusion. It certainly seems like the opposite of what I want. Consider that I have a type family type family ToUpper (s :: Symbol) :: Symbol where {} that I wish to solve in my plugin. At first, I just naïvely looked through the bag of wanted constraints and looked for constraints of the shape t ~ ToUpper s but this isn’t good enough, since my plugin regularly receives constraints that look more like t ~ SomeOtherTypeFamily (ToUpper s) so I have to recursively grovel through every type equality constraint looking for an application of a family I care about. Furthermore, once I’ve found one, I’m not sure how to actually let GHC know that I’ve solved it—do I really have to just generate a new given constraint and let GHC’s solver connect the dots? I have seen the note on the typechecker plugins wiki page about possibly baking support for type families into the plugin interface, which would indeed be nicer than the status quo, but it seems odd to me that they aren’t just passed to plugins flattened, which seems like it would spare a lot of effort. Isn’t the flattened representation really what typechecker plugins would like to see, anyway? 2. But let’s put families aside for a moment. I’m not just solving type families in my plugin, I’m also solving classes. These classes have no methods, but they do have functional dependencies. For example, I have a class class Append (a :: Symbol) (b :: Symbol) (c :: Symbol) | a b -> c, a c -> b, b c -> a which is like GHC.TypeLits.AppendSymbol, but the fundeps make GHC a bit happier when running it “backwards” (since GHC doesn’t really know about AppendSymbol’s magical injectivity, so it sometimes complains). In any case, I was hoping that GHC’s solver would handle the improvement afforded by the fundeps for me once I provided evidence for Append constraints, but that doesn’t seem to be the case. Currently, I am therefore manually generating derived constraints based on the functional dependency information, plumbing FunDepOrigin2 through and all. Is there some way to cooperate better with GHC’s solver so I don’t have to duplicate all that logic in my plugin? I guess one thing I didn’t try is returning given constraints from my solver instead of just solving them and providing evidence. That is, if my plugin received a [W] d1 :: Append "foo" "bar" c constraint, then instead of solving the constraint directly, I could leave it alone and instead return a new constraint [G] d2 :: Append "foo" "bar" "baz" and presumably GHC’s solver would use that constraint to improve and solve d1. But similar to my confusion about type families above, I’m uncertain if that’s the intended method or not, since it seems like it’s sort of circumventing the plugin API. 3. Finally, on a related note, building evidence for these solver-generated typeclass instances is a bit of a pain. They have no methods, but they do sometimes have superclasses. 
Currently, I’ve been generating CoreExprs as carefully as I’m able to after reading through the desugaring code: I call dataConWrapId on the result of classDataCon, then use mkTyConApp and mkCoreApps on that directly. It seems to work okay now, but it didn’t always: -dcore-lint thankfully caught my mistakes, but I’ve been wondering if there’s a safer way to build the dictionary that I’ve been missing. That’s it for now—I’ve just been muddling through until things work. Once I get something that feels closer to right, maybe I’ll put the code somewhere and ask for more low-level comments if anyone would like to take the time to offer them, but for now, I’m still working on the high-level ideas. The wiki pages I’ve found have been very helpful; my appreciation to all who have contributed to them! Many thanks, Alexis -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Thu Aug 1 08:26:59 2019 From: ben at well-typed.com (Ben Gamari) Date: Thu, 01 Aug 2019 04:26:59 -0400 Subject: GitLab upgraded Message-ID: <87ftmll29e.fsf@smart-cactus.org> Hi everyone, Yesterday I upgraded GitLab to 12.1 and added a few patches specific to the GHC instance. There are a few effects: 1. issue and merge request numbers are now the first thing displayed in the page title of merge request and issue pages. 2. issue and merge request numbers are also shown at the beginning of email subject lines. 3. due to an apparent regression, GitLab's CI configuration validator started rejecting our .gitlab-ci.yml. I have fixed this in 5e04841c4641e2249066614065053166657e3eb4. If you see mysterious "yaml invalid" errors in your CI results you should try rebasing on to current `master`. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From lexi.lambda at gmail.com Fri Aug 2 04:30:52 2019 From: lexi.lambda at gmail.com (Alexis King) Date: Thu, 1 Aug 2019 23:30:52 -0500 Subject: Properly writing typechecker plugins In-Reply-To: References: <55E0D054-1AF7-4611-A6DD-9B84A43FBC51@gmail.com> Message-ID: <1E3BE9D1-3D00-4BEB-810D-6A60D1F1424C@gmail.com> My thanks to you, Simon, for your prompt response. After reading your message, I realized I perhaps read too far into this sentence on the wiki page: > Defining type families in plugins is more work than it needs to be, because the current interface forces the plugin to search the unsolved constraints for the type family in question (which might be anywhere within the types), then emit a new given constraint to reduce the type family. Emphasis mine. Although that sentence seems to imply that adding given constraints from a typechecker plugin is both possible and sanctioned, your message, my experimentation, and the GHC source code all seem to agree that is not true, after all. After realizing that was a dead end, I puzzled for some time on how one is intended to solve nested type families after all, only to realize that a proper solution is simpler than I had realized. To make things more concrete, I was confused about how my plugin might solve a constraint like [W] {co_0} :: F (ToLower "Foo") ~# Length "bar" if it knows about ToLower and Length but nothing about F. The solution is actually extremely straightforward to implement, but the idea was not at all obvious to me at first. 
Namely, it’s possible to just recursively walk the whole type and solve any known families bottom-up, producing a new constraint: [W] {co_1} :: F "foo" ~# 3 At the same time, to construct evidence for the first constraint, the recursive function can also build a coercion as it goes, producing co_2 = (F (Univ :: "foo" ~ ToLower "Foo")) ~# (Univ :: 3 ~ Length "bar") which can be used to cast the evidence for the new constraint to evidence for the old one: co_0 := co_2 ; {co_1} That all seems to be working well, and it’s much nicer than whatever it was I was trying before. There’s one small wrinkle I’ve bumped into, however, which was preventing the solver from terminating. Specifically, typechecker plugins do not seem to be able to solve derived constraints. To elaborate, my plugin was getting called with the constraint [D] (Length "f" ~ 1), which it was happily turning into [D] (1 ~ 1), but the derived constraint was never removed from the inert set, and it would produce new [D] (1 ~ 1) constraints forever. I went looking in the GHC source, only to discover that you, Simon, are apparently deeply suspicious of typechecker plugins that solve derived constraints! I don’t know if there’s a reason behind that—maybe I’m going about things the wrong way, and I shouldn’t need to solve those derived constraints—but in the meantime, I added some kludgey mutable state to keep track of the derived constraints my plugin has already attempted to solve so it won’t try to solve them again. Anyway, that aside, if anyone finds what I’ve written in this email helpful, I can certainly flesh it out a bit more and stick it in a wiki page somewhere. Though, for what it’s worth, I think it would be even more helpful if some of the relevant information made its way into the GHC user’s guide, since I found that more discoverable than the wiki page (which took a comparatively large amount of digging to locate). Thanks again, Alexis > On Aug 1, 2019, at 03:01, Simon Peyton Jones wrote: > > Alexis > > Thanks for writing this up so carefully. I hope that others will join in. And please then put the distilled thought onto the wiki page(s) so they are not lost. > > Some quick thoughts from me: > > Flattening. I’m pretty sure we pass constraints unflattened because that’s what someone wanted at the time. It could easily be changed, but it might complicate the API. E.g. you might reasonably want to know the mapping from type variable to function application. There is no fundameental obstacle. > > Letting the plugin add given constraints. This looks a bit like: let the plugin prove lemmas and hand them back to GHC (along with their proof) to exploit. Yes, that seems reasonable too. Again, something new in the API. > > I don’t understand enough of your type-class instance question to comment meaningfully, but perhaps others will. > > Nothing about the plugin interface is cast in stone. There are quite a few “customers” but few enough that they’ll probably be happy to adapt to changes. Go for it, in consultation with them! > > Simon > > > > From: ghc-devs On Behalf Of Alexis King > Sent: 01 August 2019 07:55 > To: ghc-devs at haskell.org > Subject: Properly writing typechecker plugins > > Hi all, > > I have recently decided to try writing a GHC typechecker plugin so I can get my hands on some extra operations on type-level strings. My plugin works, but only sort of—I know some things about it are plain wrong, and I have a sneaking suspicion that plenty of other things are not handled properly. 
> > First, some context about what I do and don’t already know: I have a high-level understanding of the basic concepts behind GHC’s solver. I understand what evidence is and what purpose it serves, I mostly understand the different flavors of constraints, and I think I have a decent grasp on some of the operational details of how individual passes work. I’ve spent a little time reading through comments in the GHC source code, along with pieces of the source code itself, but I’m sure my understanding is pretty patchy. > > With that out of the way, here are my questions: > > First, I’m trying to understand: why are wanted constraints passed to typechecker plugins unflattened? This is my single biggest point of confusion. It certainly seems like the opposite of what I want. Consider that I have a type family > > type family ToUpper (s :: Symbol) :: Symbol where {} > > that I wish to solve in my plugin. At first, I just naïvely looked through the bag of wanted constraints and looked for constraints of the shape > > t ~ ToUpper s > > but this isn’t good enough, since my plugin regularly receives constraints that look more like > > t ~ SomeOtherTypeFamily (ToUpper s) > > so I have to recursively grovel through every type equality constraint looking for an application of a family I care about. Furthermore, once I’ve found one, I’m not sure how to actually let GHC know that I’ve solved it—do I really have to just generate a new given constraint and let GHC’s solver connect the dots? > > I have seen the note on the typechecker plugins wiki page about possibly baking support for type families into the plugin interface, which would indeed be nicer than the status quo, but it seems odd to me that they aren’t just passed to plugins flattened, which seems like it would spare a lot of effort. Isn’t the flattened representation really what typechecker plugins would like to see, anyway? > But let’s put families aside for a moment. I’m not just solving type families in my plugin, I’m also solving classes. These classes have no methods, but they do have functional dependencies. For example, I have a class > > class Append (a :: Symbol) (b :: Symbol) (c :: Symbol) | a b -> c, a c -> b, b c -> a > > which is like GHC.TypeLits.AppendSymbol, but the fundeps make GHC a bit happier when running it “backwards” (since GHC doesn’t really know about AppendSymbol’s magical injectivity, so it sometimes complains). > > In any case, I was hoping that GHC’s solver would handle the improvement afforded by the fundeps for me once I provided evidence for Append constraints, but that doesn’t seem to be the case. Currently, I am therefore manually generating derived constraints based on the functional dependency information, plumbing FunDepOrigin2 through and all. Is there some way to cooperate better with GHC’s solver so I don’t have to duplicate all that logic in my plugin? > > I guess one thing I didn’t try is returning given constraints from my solver instead of just solving them and providing evidence. That is, if my plugin received a > > [W] d1 :: Append "foo" "bar" c > > constraint, then instead of solving the constraint directly, I could leave it alone and instead return a new constraint > > [G] d2 :: Append "foo" "bar" "baz" > > and presumably GHC’s solver would use that constraint to improve and solve d1. But similar to my confusion about type families above, I’m uncertain if that’s the intended method or not, since it seems like it’s sort of circumventing the plugin API. 
> Finally, on a related note, building evidence for these solver-generated typeclass instances is a bit of a pain. They have no methods, but they do sometimes have superclasses. Currently, I’ve been generating CoreExprs as carefully as I’m able to after reading through the desugaring code: I call dataConWrapId on the result of classDataCon, then use mkTyConApp and mkCoreApps on that directly. It seems to work okay now, but it didn’t always: -dcore-lint thankfully caught my mistakes, but I’ve been wondering if there’s a safer way to build the dictionary that I’ve been missing. > > That’s it for now—I’ve just been muddling through until things work. Once I get something that feels closer to right, maybe I’ll put the code somewhere and ask for more low-level comments if anyone would like to take the time to offer them, but for now, I’m still working on the high-level ideas. The wiki pages I’ve found have been very helpful; my appreciation to all who have contributed to them! > > Many thanks, > Alexis -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Aug 2 07:57:55 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 2 Aug 2019 07:57:55 +0000 Subject: Properly writing typechecker plugins In-Reply-To: <1E3BE9D1-3D00-4BEB-810D-6A60D1F1424C@gmail.com> References: <55E0D054-1AF7-4611-A6DD-9B84A43FBC51@gmail.com> <1E3BE9D1-3D00-4BEB-810D-6A60D1F1424C@gmail.com> Message-ID: . I went looking in the GHC source, only to discover that you, Simon, are apparently deeply suspicious of typechecker plugins that solve derived constraints! What a good thing I left _some_ breadcrumbs to follow! I think my suspicions were about two separate matters * We should not need to delete solved _givens_ from the inert set. We can augment givens with extra facts, but deleting them seems wrong. * There should be no Derived constraints in the inert set anyway. They should all be in the WantedConstraints passed to runTcPluginsWanted. They were extracted from the inert set, along with the Deriveds, by getUnsolvedInerts in solve_simple_wanteds I can’t account for what’s happening to your Deriveds, but if you do some more detective work, I’m happy to play consultant. Though, for what it’s worth, I think it would be even more helpful if some of the relevant information made its way into the GHC user’s guide, since I found that more discoverable than the wiki page (which took a comparatively large amount of digging to locate). By definition you are better placed than any of us to know where to put this info -- after all, you know where you looked! It would be a great service to everyone if you wrote a new chapter for the user guide, drawing on material that already exists -- and then signposted that new chapter from the existing wiki page. Adding signposts to the wiki page too would be helpful, since it was hard to find. Thanks! Simon From: Alexis King Sent: 02 August 2019 05:31 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: Properly writing typechecker plugins My thanks to you, Simon, for your prompt response. After reading your message, I realized I perhaps read too far into this sentence on the wiki page: Defining type families in plugins is more work than it needs to be, because the current interface forces the plugin to search the unsolved constraints for the type family in question (which might be anywhere within the types), then emit a new given constraint to reduce the type family. Emphasis mine.
Although that sentence seems to imply that adding given constraints from a typechecker plugin is both possible and sanctioned, your message, my experimentation, and the GHC source code all seem to agree that is not true, after all. After realizing that was a dead end, I puzzled for some time on how one is intended to solve nested type families after all, only to realize that a proper solution is simpler than I had realized. To make things more concrete, I was confused about how my plugin might solve a constraint like [W] {co_0} :: F (ToLower "Foo") ~# Length "bar" if it knows about ToLower and Length but nothing about F. The solution is actually extremely straightforward to implement, but the idea was not at all obvious to me at first. Namely, it’s possible to just recursively walk the whole type and solve any known families bottom-up, producing a new constraint: [W] {co_1} :: F "foo" ~# 3 At the same time, to construct evidence for the first constraint, the recursive function can also build a coercion as it goes, producing co_2 = (F (Univ :: "foo" ~ ToLower "Foo")) ~# (Univ :: 3 ~ Length "bar") which can be used to cast the evidence for the new constraint to evidence for the old one: co_0 := co_2 ; {co_1} That all seems to be working well, and it’s much nicer than whatever it was I was trying before. There’s one small wrinkle I’ve bumped into, however, which was preventing the solver from terminating. Specifically, typechecker plugins do not seem to be able to solve derived constraints. To elaborate, my plugin was getting called with the constraint [D] (Length "f" ~ 1), which it was happily turning into [D] (1 ~ 1), but the derived constraint was never removed from the inert set, and it would produce new [D] (1 ~ 1) constraints forever. I went looking in the GHC source, only to discover that you, Simon, are apparently deeply suspicious of typechecker plugins that solve derived constraints! I don’t know if there’s a reason behind that—maybe I’m going about things the wrong way, and I shouldn’t need to solve those derived constraints—but in the meantime, I added some kludgey mutable state to keep track of the derived constraints my plugin has already attempted to solve so it won’t try to solve them again. Anyway, that aside, if anyone finds what I’ve written in this email helpful, I can certainly flesh it out a bit more and stick it in a wiki page somewhere. Though, for what it’s worth, I think it would be even more helpful if some of the relevant information made its way into the GHC user’s guide, since I found that more discoverable than the wiki page (which took a comparatively large amount of digging to locate). Thanks again, Alexis On Aug 1, 2019, at 03:01, Simon Peyton Jones > wrote: Alexis Thanks for writing this up so carefully. I hope that others will join in. And please then put the distilled thought onto the wiki page(s) so they are not lost. Some quick thoughts from me: * Flattening. I’m pretty sure we pass constraints unflattened because that’s what someone wanted at the time. It could easily be changed, but it might complicate the API. E.g. you might reasonably want to know the mapping from type variable to function application. There is no fundameental obstacle. * Letting the plugin add given constraints. This looks a bit like: let the plugin prove lemmas and hand them back to GHC (along with their proof) to exploit. Yes, that seems reasonable too. Again, something new in the API. 
I don’t understand enough of your type-class instance question to comment meaningfully, but perhaps others will. Nothing about the plugin interface is cast in stone. There are quite a few “customers” but few enough that they’ll probably be happy to adapt to changes. Go for it, in consultation with them! Simon From: ghc-devs > On Behalf Of Alexis King Sent: 01 August 2019 07:55 To: ghc-devs at haskell.org Subject: Properly writing typechecker plugins Hi all, I have recently decided to try writing a GHC typechecker plugin so I can get my hands on some extra operations on type-level strings. My plugin works, but only sort of—I know some things about it are plain wrong, and I have a sneaking suspicion that plenty of other things are not handled properly. First, some context about what I do and don’t already know: I have a high-level understanding of the basic concepts behind GHC’s solver. I understand what evidence is and what purpose it serves, I mostly understand the different flavors of constraints, and I think I have a decent grasp on some of the operational details of how individual passes work. I’ve spent a little time reading through comments in the GHC source code, along with pieces of the source code itself, but I’m sure my understanding is pretty patchy. With that out of the way, here are my questions: 1. First, I’m trying to understand: why are wanted constraints passed to typechecker plugins unflattened? This is my single biggest point of confusion. It certainly seems like the opposite of what I want. Consider that I have a type family type family ToUpper (s :: Symbol) :: Symbol where {} that I wish to solve in my plugin. At first, I just naïvely looked through the bag of wanted constraints and looked for constraints of the shape t ~ ToUpper s but this isn’t good enough, since my plugin regularly receives constraints that look more like t ~ SomeOtherTypeFamily (ToUpper s) so I have to recursively grovel through every type equality constraint looking for an application of a family I care about. Furthermore, once I’ve found one, I’m not sure how to actually let GHC know that I’ve solved it—do I really have to just generate a new given constraint and let GHC’s solver connect the dots? I have seen the note on the typechecker plugins wiki page about possibly baking support for type families into the plugin interface, which would indeed be nicer than the status quo, but it seems odd to me that they aren’t just passed to plugins flattened, which seems like it would spare a lot of effort. Isn’t the flattened representation really what typechecker plugins would like to see, anyway? 2. But let’s put families aside for a moment. I’m not just solving type families in my plugin, I’m also solving classes. These classes have no methods, but they do have functional dependencies. For example, I have a class class Append (a :: Symbol) (b :: Symbol) (c :: Symbol) | a b -> c, a c -> b, b c -> a which is like GHC.TypeLits.AppendSymbol, but the fundeps make GHC a bit happier when running it “backwards” (since GHC doesn’t really know about AppendSymbol’s magical injectivity, so it sometimes complains). In any case, I was hoping that GHC’s solver would handle the improvement afforded by the fundeps for me once I provided evidence for Append constraints, but that doesn’t seem to be the case. Currently, I am therefore manually generating derived constraints based on the functional dependency information, plumbing FunDepOrigin2 through and all. 
Is there some way to cooperate better with GHC’s solver so I don’t have to duplicate all that logic in my plugin? I guess one thing I didn’t try is returning given constraints from my solver instead of just solving them and providing evidence. That is, if my plugin received a [W] d1 :: Append "foo" "bar" c constraint, then instead of solving the constraint directly, I could leave it alone and instead return a new constraint [G] d2 :: Append "foo" "bar" "baz" and presumably GHC’s solver would use that constraint to improve and solve d1. But similar to my confusion about type families above, I’m uncertain if that’s the intended method or not, since it seems like it’s sort of circumventing the plugin API. 3. Finally, on a related note, building evidence for these solver-generated typeclass instances is a bit of a pain. They have no methods, but they do sometimes have superclasses. Currently, I’ve been generating CoreExprs as carefully as I’m able to after reading through the desugaring code: I call dataConWrapId on the result of classDataCon, then use mkTyConApp and mkCoreApps on that directly. It seems to work okay now, but it didn’t always: -dcore-lint thankfully caught my mistakes, but I’ve been wondering if there’s a safer way to build the dictionary that I’ve been missing. That’s it for now—I’ve just been muddling through until things work. Once I get something that feels closer to right, maybe I’ll put the code somewhere and ask for more low-level comments if anyone would like to take the time to offer them, but for now, I’m still working on the high-level ideas. The wiki pages I’ve found have been very helpful; my appreciation to all who have contributed to them! Many thanks, Alexis -------------- next part -------------- An HTML attachment was scrubbed... URL: From sam.halliday at gmail.com Fri Aug 2 19:46:41 2019 From: sam.halliday at gmail.com (Sam Halliday) Date: Fri, 02 Aug 2019 20:46:41 +0100 Subject: api to access .hi files In-Reply-To: <8736irnpsw.fsf@gmail.com> References: <8736irnpsw.fsf@gmail.com> Message-ID: <87sgqj4afy.fsf@gmail.com> To answer my own question with a solution and another question: Sam Halliday writes: > I'm mostly interested in gathering information about symbols and their > type signatures. As a first exercise: given a module+import section > for a haskell source file, I want to find out which symbols (and their > types) are available. Like :browse in ghci, but programmatically. This is answered by Stephen Diehl's blog post on the ghc api! How lucky I am: http://www.stephendiehl.com/posts/ghc_01.html He points to getNamesInScope Unfortunately I'm getting zero Names back when loading a file that imports several modules from ghc. Is there something I'm missing in the following? module Main where import Control.Monad import Control.Monad.IO.Class import GHC import GHC.Paths (libdir) main = runGhc (Just libdir) $ do dflags <- getSessionDynFlags void $ setSessionDynFlags $ dflags { hscTarget = HscInterpreted , ghcLink = LinkInMemory } addTarget $ Target (TargetFile "exe/Main.hs" Nothing) False Nothing res <- load LoadAllTargets liftIO $ putStrLn $ showPpr dflags res names <- getNamesInScope liftIO $ putStrLn $ "seen " <> (show $ length names) <> " Names" -- Best regards, Sam -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 194 bytes Desc: not available URL: From allbery.b at gmail.com Fri Aug 2 19:52:24 2019 From: allbery.b at gmail.com (Brandon Allbery) Date: Fri, 2 Aug 2019 15:52:24 -0400 Subject: api to access .hi files In-Reply-To: <87sgqj4afy.fsf@gmail.com> References: <8736irnpsw.fsf@gmail.com> <87sgqj4afy.fsf@gmail.com> Message-ID: At a guess, because the ghc package defaults to being hidden (it's creating a new ghc instance at runtime, so the visibility of the ghc package when compiling your code is not relevant) you need to do the ghc-api equivalent of "-package ghc". Or for testing just "ghc-pkg expose ghc". On Fri, Aug 2, 2019 at 3:47 PM Sam Halliday wrote: > To answer my own question with a solution and another question: > > Sam Halliday writes: > > I'm mostly interested in gathering information about symbols and their > > type signatures. As a first exercise: given a module+import section > > for a haskell source file, I want to find out which symbols (and their > > types) are available. Like :browse in ghci, but programmatically. > > This is answered by Stephen Diehl's blog post on the ghc api! How lucky > I am: http://www.stephendiehl.com/posts/ghc_01.html > > He points to getNamesInScope > > Unfortunately I'm getting zero Names back when loading a file that > imports several modules from ghc. Is there something I'm missing in the > following? > > module Main where > > import Control.Monad > import Control.Monad.IO.Class > import GHC > import GHC.Paths (libdir) > > main = runGhc (Just libdir) $ do > dflags <- getSessionDynFlags > void $ setSessionDynFlags $ dflags { > hscTarget = HscInterpreted > , ghcLink = LinkInMemory > } > addTarget $ Target (TargetFile "exe/Main.hs" Nothing) False Nothing > res <- load LoadAllTargets > liftIO $ putStrLn $ showPpr dflags res > names <- getNamesInScope > liftIO $ putStrLn $ "seen " <> (show $ length names) <> " Names" > > > -- > Best regards, > Sam > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -- brandon s allbery kf8nh allbery.b at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From sam.halliday at gmail.com Fri Aug 2 20:06:42 2019 From: sam.halliday at gmail.com (Sam Halliday) Date: Fri, 02 Aug 2019 21:06:42 +0100 Subject: api to access .hi files In-Reply-To: References: <8736irnpsw.fsf@gmail.com> <87sgqj4afy.fsf@gmail.com> Message-ID: <87mugr9vsd.fsf@gmail.com> Brandon Allbery writes: > At a guess, because the ghc package defaults to being hidden (it's creating > a new ghc instance at runtime, so the visibility of the ghc package when > compiling your code is not relevant) you need to do the ghc-api equivalent > of "-package ghc". Or for testing just "ghc-pkg expose ghc". Hmm, would that also explain why the Prelude and Control.Monad modules are not shown either? Is there a way to expose all modules programmatically? > > On Fri, Aug 2, 2019 at 3:47 PM Sam Halliday wrote: > >> To answer my own question with a solution and another question: >> >> Sam Halliday writes: >> > I'm mostly interested in gathering information about symbols and their >> > type signatures. As a first exercise: given a module+import section >> > for a haskell source file, I want to find out which symbols (and their >> > types) are available. Like :browse in ghci, but programmatically. >> >> This is answered by Stephen Diehl's blog post on the ghc api! 
How lucky >> I am: http://www.stephendiehl.com/posts/ghc_01.html >> >> He points to getNamesInScope >> >> Unfortunately I'm getting zero Names back when loading a file that >> imports several modules from ghc. Is there something I'm missing in the >> following? >> >> module Main where >> >> import Control.Monad >> import Control.Monad.IO.Class >> import GHC >> import GHC.Paths (libdir) >> >> main = runGhc (Just libdir) $ do >> dflags <- getSessionDynFlags >> void $ setSessionDynFlags $ dflags { >> hscTarget = HscInterpreted >> , ghcLink = LinkInMemory >> } >> addTarget $ Target (TargetFile "exe/Main.hs" Nothing) False Nothing >> res <- load LoadAllTargets >> liftIO $ putStrLn $ showPpr dflags res >> names <- getNamesInScope >> liftIO $ putStrLn $ "seen " <> (show $ length names) <> " Names" >> >> >> -- >> Best regards, >> Sam >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > > > -- > brandon s allbery kf8nh > allbery.b at gmail.com -- Best regards, Sam -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 194 bytes Desc: not available URL: From allbery.b at gmail.com Fri Aug 2 20:12:37 2019 From: allbery.b at gmail.com (Brandon Allbery) Date: Fri, 2 Aug 2019 16:12:37 -0400 Subject: api to access .hi files In-Reply-To: <87mugr9vsd.fsf@gmail.com> References: <8736irnpsw.fsf@gmail.com> <87sgqj4afy.fsf@gmail.com> <87mugr9vsd.fsf@gmail.com> Message-ID: No, those are in base. But I don't think you would be seeing imported names as such there, come to think of it, only names declared locally. On Fri, Aug 2, 2019 at 4:06 PM Sam Halliday wrote: > Brandon Allbery writes: > > > At a guess, because the ghc package defaults to being hidden (it's > creating > > a new ghc instance at runtime, so the visibility of the ghc package when > > compiling your code is not relevant) you need to do the ghc-api > equivalent > > of "-package ghc". Or for testing just "ghc-pkg expose ghc". > > Hmm, would that also explain why the Prelude and Control.Monad modules > are not shown either? > > Is there a way to expose all modules programmatically? > > > > > > On Fri, Aug 2, 2019 at 3:47 PM Sam Halliday > wrote: > > > >> To answer my own question with a solution and another question: > >> > >> Sam Halliday writes: > >> > I'm mostly interested in gathering information about symbols and their > >> > type signatures. As a first exercise: given a module+import section > >> > for a haskell source file, I want to find out which symbols (and their > >> > types) are available. Like :browse in ghci, but programmatically. > >> > >> This is answered by Stephen Diehl's blog post on the ghc api! How lucky > >> I am: http://www.stephendiehl.com/posts/ghc_01.html > >> > >> He points to getNamesInScope > >> > >> Unfortunately I'm getting zero Names back when loading a file that > >> imports several modules from ghc. Is there something I'm missing in the > >> following? 
> >> > >> module Main where > >> > >> import Control.Monad > >> import Control.Monad.IO.Class > >> import GHC > >> import GHC.Paths (libdir) > >> > >> main = runGhc (Just libdir) $ do > >> dflags <- getSessionDynFlags > >> void $ setSessionDynFlags $ dflags { > >> hscTarget = HscInterpreted > >> , ghcLink = LinkInMemory > >> } > >> addTarget $ Target (TargetFile "exe/Main.hs" Nothing) False Nothing > >> res <- load LoadAllTargets > >> liftIO $ putStrLn $ showPpr dflags res > >> names <- getNamesInScope > >> liftIO $ putStrLn $ "seen " <> (show $ length names) <> " Names" > >> > >> > >> -- > >> Best regards, > >> Sam > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > >> > > > > > > -- > > brandon s allbery kf8nh > > allbery.b at gmail.com > > -- > Best regards, > Sam > -- brandon s allbery kf8nh allbery.b at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From sam.halliday at gmail.com Fri Aug 2 21:30:58 2019 From: sam.halliday at gmail.com (Sam Halliday) Date: Fri, 02 Aug 2019 22:30:58 +0100 Subject: api to access .hi files In-Reply-To: References: <8736irnpsw.fsf@gmail.com> <87sgqj4afy.fsf@gmail.com> <87mugr9vsd.fsf@gmail.com> Message-ID: <878ssbqmp9.fsf@gmail.com> Brandon Allbery writes: > No, those are in base. But I don't think you would be seeing imported names > as such there, come to think of it, only names declared locally. Hmm, then perhaps I misunderstand what it's doing. If I do what I thought might be the equivalent ghci command λ> :l exe/Main.hs [1 of 1] Compiling Main ( exe/Main.hs, interpreted ) Ok, one module loaded. λ> :browse main :: IO () we see one symbol. So this is already different to what my application is doing. But the information I want is when we do something like λ> :browse! *Main ... everything in scope including Prelude and GHC ... An option I have considered would be to manually parse the import sections and then perform the Module lookup via the pkg database, but that approach has many flaws because it means reimplementing a lot of the early compilation stages manually and I'm sure dealing with explicit import lists (and hiding, not to mention dealing with lang extensions such as TypeOperators) is probably quite tricky to get right. > > On Fri, Aug 2, 2019 at 4:06 PM Sam Halliday wrote: > >> Brandon Allbery writes: >> >> > At a guess, because the ghc package defaults to being hidden (it's >> creating >> > a new ghc instance at runtime, so the visibility of the ghc package when >> > compiling your code is not relevant) you need to do the ghc-api >> equivalent >> > of "-package ghc". Or for testing just "ghc-pkg expose ghc". >> >> Hmm, would that also explain why the Prelude and Control.Monad modules >> are not shown either? >> >> Is there a way to expose all modules programmatically? >> >> >> > >> > On Fri, Aug 2, 2019 at 3:47 PM Sam Halliday >> wrote: >> > >> >> To answer my own question with a solution and another question: >> >> >> >> Sam Halliday writes: >> >> > I'm mostly interested in gathering information about symbols and their >> >> > type signatures. As a first exercise: given a module+import section >> >> > for a haskell source file, I want to find out which symbols (and their >> >> > types) are available. Like :browse in ghci, but programmatically. >> >> >> >> This is answered by Stephen Diehl's blog post on the ghc api! 
How lucky >> >> I am: http://www.stephendiehl.com/posts/ghc_01.html >> >> >> >> He points to getNamesInScope >> >> >> >> Unfortunately I'm getting zero Names back when loading a file that >> >> imports several modules from ghc. Is there something I'm missing in the >> >> following? >> >> >> >> module Main where >> >> >> >> import Control.Monad >> >> import Control.Monad.IO.Class >> >> import GHC >> >> import GHC.Paths (libdir) >> >> >> >> main = runGhc (Just libdir) $ do >> >> dflags <- getSessionDynFlags >> >> void $ setSessionDynFlags $ dflags { >> >> hscTarget = HscInterpreted >> >> , ghcLink = LinkInMemory >> >> } >> >> addTarget $ Target (TargetFile "exe/Main.hs" Nothing) False Nothing >> >> res <- load LoadAllTargets >> >> liftIO $ putStrLn $ showPpr dflags res >> >> names <- getNamesInScope >> >> liftIO $ putStrLn $ "seen " <> (show $ length names) <> " Names" >> >> >> >> >> >> -- >> >> Best regards, >> >> Sam >> >> _______________________________________________ >> >> ghc-devs mailing list >> >> ghc-devs at haskell.org >> >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> >> > >> > >> > -- >> > brandon s allbery kf8nh >> > allbery.b at gmail.com >> >> -- >> Best regards, >> Sam >> > > > -- > brandon s allbery kf8nh > allbery.b at gmail.com -- Best regards, Sam -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 194 bytes Desc: not available URL: From a.pelenitsyn at gmail.com Fri Aug 2 22:21:50 2019 From: a.pelenitsyn at gmail.com (Artem Pelenitsyn) Date: Fri, 2 Aug 2019 18:21:50 -0400 Subject: api to access .hi files In-Reply-To: <87sgqj4afy.fsf@gmail.com> References: <8736irnpsw.fsf@gmail.com> <87sgqj4afy.fsf@gmail.com> Message-ID: Hey Sam, Starting from the implementation of :browse and going through the call graph in: https://gitlab.haskell.org/ghc/ghc/blob/master/ghc/GHCi/UI.hs gave the following, which works for me: module Main where import Control.Monad import Control.Monad.IO.Class import BasicTypes import DynFlags import GHC import GHC.Paths (libdir) import Maybes import Panic main = runGhc (Just libdir) $ do dflags <- getSessionDynFlags void $ setSessionDynFlags $ dflags { hscTarget = HscInterpreted , ghcLink = LinkInMemory } t <- guessTarget "Main.hs" Nothing setTargets [t] _ <- load LoadAllTargets graph <- getModuleGraph mss <- filterM (isLoaded . ms_mod_name) (mgModSummaries graph) let m = ms_mod ms ms = head mss liftIO . putStrLn $ (show . length $ mss) ++ " modules loaded" mi <- getModuleInfo m let mod_info = fromJust mi dflags <- getDynFlags let names = GHC.modInfoTopLevelScope mod_info `orElse` [] liftIO $ putStrLn $ "seen " <> (show $ length names) <> " Names" -- Best, Artem On Fri, 2 Aug 2019 at 15:47, Sam Halliday wrote: > To answer my own question with a solution and another question: > > Sam Halliday writes: > > I'm mostly interested in gathering information about symbols and their > > type signatures. As a first exercise: given a module+import section > > for a haskell source file, I want to find out which symbols (and their > > types) are available. Like :browse in ghci, but programmatically. > > This is answered by Stephen Diehl's blog post on the ghc api! How lucky > I am: http://www.stephendiehl.com/posts/ghc_01.html > > He points to getNamesInScope > > Unfortunately I'm getting zero Names back when loading a file that > imports several modules from ghc. Is there something I'm missing in the > following? 
> > module Main where > > import Control.Monad > import Control.Monad.IO.Class > import GHC > import GHC.Paths (libdir) > > main = runGhc (Just libdir) $ do > dflags <- getSessionDynFlags > void $ setSessionDynFlags $ dflags { > hscTarget = HscInterpreted > , ghcLink = LinkInMemory > } > addTarget $ Target (TargetFile "exe/Main.hs" Nothing) False Nothing > res <- load LoadAllTargets > liftIO $ putStrLn $ showPpr dflags res > names <- getNamesInScope > liftIO $ putStrLn $ "seen " <> (show $ length names) <> " Names" > > > -- > Best regards, > Sam > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joan.karadimov at gmail.com Sun Aug 4 12:36:27 2019 From: joan.karadimov at gmail.com (Joan Karadimov) Date: Sun, 4 Aug 2019 15:36:27 +0300 Subject: Invalid link in the wiki page "Building GHC on Windows" In-Reply-To: References: Message-ID: Thanks! > Also, wiki is now back to public access for editing (it was closed for technical reasons last several days). Oh, I didn't realise it was temporary. > So you can fix if anything pops up in the future. Will do! On Wed, Jul 31, 2019 at 8:05 PM Artem Pelenitsyn wrote: > Hey Joan, > > Thanks for spotting this! Should be fixed now. > Also, wiki is now back to public access for editing (it was closed for > technical reasons last several days). So you can fix if anything pops up in > the future. > > -- > Best wishes, Artem > > On Mon, 29 Jul 2019 at 08:36, Joan Karadimov > wrote: > >> Inside this wiki page: >> https://gitlab.haskell.org/ghc/ghc/wikis/building/preparation/windows >> >> ... there is a link to the latest cabal release. The link is: >> >> https://www.haskell.org/cabal/release/cabal-install-2.4.1.0/cabal-install-2.4.1.0-${arch}-unknown-mingw32.zip >> >> >> That link is not valid. It should be something like: >> >> https://downloads.haskell.org/cabal/cabal-install-2.4.1.0/cabal-install-2.4.1.0-${arch}-unknown-mingw32.zip >> >> The latter link is taken from https://www.haskell.org/cabal/download.html >> . >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Sun Aug 4 16:01:23 2019 From: ben at well-typed.com (Ben Gamari) Date: Sun, 04 Aug 2019 12:01:23 -0400 Subject: a better workflow? In-Reply-To: References: <5FC105B0-3B38-4E13-B8DF-B37708945751@richarde.dev> <87v9vsmebc.fsf@smart-cactus.org> <13C500D2-7C82-4193-B42E-A7D557193A91@richarde.dev> <20190724024829.GA32104@darkboxed.org> Message-ID: <874l2wly29.fsf@smart-cactus.org> Richard Eisenberg writes: >> On Jul 23, 2019, at 10:48 PM, Daniel Gröber wrote: >> >> I don't think you ever mentioned -- are you already using `git >> worktree` to get multiple source checkouts or are you working off a >> single build tree? I find using it essential to reducing context >> switching overhead. > > This is a good point. No, I'm not currently. Some post I read > (actually, I think the manpage) said that `git worktree` and > submodules don't mix, so I got scared off. Regardless, I don't think > worktree will solve my problem exactly. It eliminates the annoyance of > shuttling commits from one checkout to another, but that's not really > a pain point for me. 
(Yes, it's a small annoyance, but I hit it only > rarely, and it's quick to sort out.) Perhaps I'm missing something > though about worktree that will allow more, e.g., sharing of build > products. Am I? > Sadly no. Recently we (specifically David Eichmann) invested quite some effort in trying to enable Shake's support for caching in Hadrian which would have allowed sharing of build artifacts between trees. Unfortunately the challenges here were significantly greater than we expected. David summarized the effort in his recent blog post [1]. Cheers, - Ben [1] https://www.haskell.org/ghc/blog/20190731-hadrian-cloud-builds.html -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From sandy at sandymaguire.me Sun Aug 4 17:06:14 2019 From: sandy at sandymaguire.me (Sandy Maguire) Date: Sun, 4 Aug 2019 13:06:14 -0400 Subject: Solving stuck type families with a TC plugin Message-ID: Hi all, I'm attempting to use a plugin to solve a generic type family CmpType (a :: k) (b :: k) :: Ordering by doing some sort of arbitrary hashing on `a` and `b` and ensuring they're the same. In the past, I've been successful at getting GHC to unify things by emitting new wanted CNonCanonical cts. This sort of works: mkWanted :: TyCon -> CompareType -> TcPluginM Ct mkWanted cmpType cmp = do (ev, _) <- unsafeTcPluginTcM . runTcSDeriveds $ newWantedEq (cmpTypeLoc cmp) Nominal (cmpTypeType cmp) (doCompare cmp) pure $ CNonCanonical ev Which is to say that this will compile: foo :: Proxy 'EQ foo = Proxy @(CmpType 2 2) So far so good! However, this acts strangely. For example, if I ask for bar with the incorrect type: bar :: Proxy 'GT bar = Proxy @(CmpType 2 2) I get the error: • Couldn't match type ‘CmpType 2 2’ with ‘'GT’ Expected type: Proxy 'GT Actual type: Proxy (CmpType 2 2) when I would expect • Couldn't match type ‘'EQ’ with ‘'GT’ This is more than just an issue with the error messages. A type family that is stuck on the result of CmpType, even after I've solved CmpType via the above! type family IsEQ (a :: Ordering) :: Bool where IsEQ 'EQ = 'True IsEQ _ = 'False zop :: Proxy 'True zop = Proxy @(IsEQ (CmpType 2 2)) • Couldn't match type ‘IsEQ (CmpType 2 2)’ with ‘'True’ Expected type: Proxy 'True Actual type: Proxy (IsEQ (CmpType 2 2)) Any suggestions for what I might be doing wrong, and how to convince GHC to make `zop` work properly? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Mon Aug 5 14:35:57 2019 From: rae at richarde.dev (Richard Eisenberg) Date: Mon, 5 Aug 2019 10:35:57 -0400 Subject: Solving stuck type families with a TC plugin In-Reply-To: References: Message-ID: Hi Sandy, I think the problem is that you're generating *Wanted* constraints. A Wanted is something that has not yet been proven, but which you would like to prove. If you have a metavariable a0, then created a Wanted `a0 ~ Bool` will work: you want to prove that, so GHC just unifies a0 := Bool. But anything more complicated than a unification variable will run into trouble, as GHC won't know how to prove it. Instead, create Givens. With these, you are providing the evidence to GHC that something holds -- exactly what you want here. Also, there shouldn't be a need to use unsafeTcPluginTcM or runTcSDeriveds here: just use newGiven (or newWanted) from the TcPluginM module, and return these constraints (perhaps wrapped in mkNonCanonical) from your plugin function. 
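In rough, untested sketch form, the shape I have in mind is something like the following. The helpers that recognise CmpType and conjure up the evidence are placeholders you would supply yourself, the module names are the 8.x-era ones, and the exact evidence type newGiven expects has changed between releases (it was an EvTerm and is an expression in newer GHCs), so treat the details as assumptions:

    module CmpTypePlugin (plugin) where

    -- Module names as of the 8.x API; adjust for your GHC.
    import Data.Maybe (catMaybes)
    import Plugins    (Plugin (..), defaultPlugin)
    import TcPluginM  (TcPluginM, newGiven)
    import TcRnTypes  (Ct, TcPlugin (..), TcPluginResult (..), ctLoc, ctPred, mkNonCanonical)
    import Type       (Type, mkPrimEqPred)

    plugin :: Plugin
    plugin = defaultPlugin { tcPlugin = const (Just cmpTypePlugin) }

    cmpTypePlugin :: TcPlugin
    cmpTypePlugin = TcPlugin
      { tcPluginInit  = pure ()
      , tcPluginSolve = const solve
      , tcPluginStop  = const (pure ())
      }

    -- Solve nothing directly; instead return a new Given for every wanted
    -- involving CmpType that we can reduce, and let GHC's solver finish.
    solve :: [Ct] -> [Ct] -> [Ct] -> TcPluginM TcPluginResult
    solve _givens _deriveds wanteds = do
      news <- catMaybes <$> mapM reduceOne wanteds
      pure (TcPluginOk [] news)
      where
        reduceOne :: Ct -> TcPluginM (Maybe Ct)
        reduceOne ct =
          case recogniseCmpType (ctPred ct) of
            Nothing         -> pure Nothing
            Just (fam, res) -> do
              -- fiatEvidence stands in for a universal coercion, e.g. evByFiat
              -- from ghc-tcplugins-extra; its type must match what newGiven
              -- expects on your GHC (this changed around 8.6).
              given <- newGiven (ctLoc ct) (mkPrimEqPred fam res) (fiatEvidence fam res)
              pure (Just (mkNonCanonical given))

    -- Placeholders: find an application of CmpType inside a constraint and
    -- compute the Ordering it reduces to, and build evidence for that equality.
    recogniseCmpType :: Type -> Maybe (Type, Type)
    recogniseCmpType = undefined
    fiatEvidence = undefined

The idea is that the main solver then uses the returned Givens to rewrite the stuck occurrences, so the plugin never has to manufacture evidence for the original wanteds itself.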
I hope this helps! Richard > On Aug 4, 2019, at 1:06 PM, Sandy Maguire wrote: > > Hi all, > > I'm attempting to use a plugin to solve a generic > > type family CmpType (a :: k) (b :: k) :: Ordering > > by doing some sort of arbitrary hashing on `a` and `b` and ensuring they're the same. > > In the past, I've been successful at getting GHC to unify things by emitting new wanted CNonCanonical cts. This sort of works: > > > mkWanted :: TyCon -> CompareType -> TcPluginM Ct > mkWanted cmpType cmp = do > (ev, _) <- unsafeTcPluginTcM > . runTcSDeriveds > $ newWantedEq > (cmpTypeLoc cmp) > Nominal > (cmpTypeType cmp) > (doCompare cmp) > pure $ CNonCanonical ev > > > Which is to say that this will compile: > > > foo :: Proxy 'EQ > foo = Proxy @(CmpType 2 2) > > > So far so good! However, this acts strangely. For example, if I ask for bar with the incorrect type: > > > bar :: Proxy 'GT > bar = Proxy @(CmpType 2 2) > > > I get the error: > > • Couldn't match type ‘CmpType 2 2’ with ‘'GT’ > Expected type: Proxy 'GT > Actual type: Proxy (CmpType 2 2) > > when I would expect > > • Couldn't match type ‘'EQ’ with ‘'GT’ > > > This is more than just an issue with the error messages. A type family that is stuck on the result of CmpType, even after I've solved CmpType via the above! > > > type family IsEQ (a :: Ordering) :: Bool where > IsEQ 'EQ = 'True > IsEQ _ = 'False > > zop :: Proxy 'True > zop = Proxy @(IsEQ (CmpType 2 2)) > > > • Couldn't match type ‘IsEQ (CmpType 2 2)’ with ‘'True’ > Expected type: Proxy 'True > Actual type: Proxy (IsEQ (CmpType 2 2)) > > > Any suggestions for what I might be doing wrong, and how to convince GHC to make `zop` work properly? Thanks! > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From sandy at sandymaguire.me Mon Aug 5 15:17:43 2019 From: sandy at sandymaguire.me (Sandy Maguire) Date: Mon, 5 Aug 2019 11:17:43 -0400 Subject: Solving stuck type families with a TC plugin In-Reply-To: References: Message-ID: Thanks Richard! Matt pointed me in the same direction, and generating givens seems to work. Planning on releasing this solving tyfams stuff as a small library soon. Cheers! On Mon, Aug 5, 2019 at 10:36 AM Richard Eisenberg wrote: > Hi Sandy, > > I think the problem is that you're generating *Wanted* constraints. A > Wanted is something that has not yet been proven, but which you would like > to prove. If you have a metavariable a0, then created a Wanted `a0 ~ Bool` > will work: you want to prove that, so GHC just unifies a0 := Bool. But > anything more complicated than a unification variable will run into > trouble, as GHC won't know how to prove it. > > Instead, create Givens. With these, you are providing the evidence to GHC > that something holds -- exactly what you want here. Also, there shouldn't > be a need to use unsafeTcPluginTcM or runTcSDeriveds here: just use > newGiven (or newWanted) from the TcPluginM module, and return these > constraints (perhaps wrapped in mkNonCanonical) from your plugin function. > > I hope this helps! > Richard > > > On Aug 4, 2019, at 1:06 PM, Sandy Maguire wrote: > > > > Hi all, > > > > I'm attempting to use a plugin to solve a generic > > > > type family CmpType (a :: k) (b :: k) :: Ordering > > > > by doing some sort of arbitrary hashing on `a` and `b` and ensuring > they're the same. > > > > In the past, I've been successful at getting GHC to unify things by > emitting new wanted CNonCanonical cts. 
This sort of works: > > > > > > mkWanted :: TyCon -> CompareType -> TcPluginM Ct > > mkWanted cmpType cmp = do > > (ev, _) <- unsafeTcPluginTcM > > . runTcSDeriveds > > $ newWantedEq > > (cmpTypeLoc cmp) > > Nominal > > (cmpTypeType cmp) > > (doCompare cmp) > > pure $ CNonCanonical ev > > > > > > Which is to say that this will compile: > > > > > > foo :: Proxy 'EQ > > foo = Proxy @(CmpType 2 2) > > > > > > So far so good! However, this acts strangely. For example, if I ask for > bar with the incorrect type: > > > > > > bar :: Proxy 'GT > > bar = Proxy @(CmpType 2 2) > > > > > > I get the error: > > > > • Couldn't match type ‘CmpType 2 2’ with ‘'GT’ > > Expected type: Proxy 'GT > > Actual type: Proxy (CmpType 2 2) > > > > when I would expect > > > > • Couldn't match type ‘'EQ’ with ‘'GT’ > > > > > > This is more than just an issue with the error messages. A type family > that is stuck on the result of CmpType, even after I've solved CmpType via > the above! > > > > > > type family IsEQ (a :: Ordering) :: Bool where > > IsEQ 'EQ = 'True > > IsEQ _ = 'False > > > > zop :: Proxy 'True > > zop = Proxy @(IsEQ (CmpType 2 2)) > > > > > > • Couldn't match type ‘IsEQ (CmpType 2 2)’ with ‘'True’ > > Expected type: Proxy 'True > > Actual type: Proxy (IsEQ (CmpType 2 2)) > > > > > > Any suggestions for what I might be doing wrong, and how to convince GHC > to make `zop` work properly? Thanks! > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sam.halliday at gmail.com Mon Aug 5 17:21:53 2019 From: sam.halliday at gmail.com (Sam Halliday) Date: Mon, 05 Aug 2019 18:21:53 +0100 Subject: ModuleInfo.minf_rdr_env not exposed Message-ID: <87tvav34um.fsf@gmail.com> Hi all, Is there a reason why minf_rdr_env (a field in ModuleInfo) is not exposed? It's possible to reconstruct it in 8.4.4 (the only version I'm looking at) with a TypecheckedModule via let (tc_gbl_env, _) = GHC.tm_internals_ tmod minf_rdr_env = tcg_rdr_env tc_gbl_env It's a useful thing to have for editor tooling (e.g. to get the correct qualified imported symbols that may be autocompleted). -- Best regards, Sam -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 194 bytes Desc: not available URL: From ben at smart-cactus.org Tue Aug 6 16:40:59 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 06 Aug 2019 12:40:59 -0400 Subject: Conveniently searching GitLab Message-ID: <87k1bqjlgx.fsf@smart-cactus.org> Hello everyone, I've long found GitLab's search interface to be rather clunky. Several of the issues I've noted have been reported upstream but progress in resolving them seems to be slow. Consequently, I have implemented a workaround [1] which addresses most of my typical use-cases. This is essentially a thin wrapper around GitLab's various search facilities which intelligently routes requests according to a simple (and hopefully intuitive) query syntax. 
For instance, to * navigate to GHC issue #123: search for `#123` * search for GHC issues related to unboxed tuples: search for `# unboxed tuples` * navigate to the `ghc/head.hackage` project: search for `ghc/head.hackage>` * navigate to merge request !3 of the `ghc/head.hackage` project: search for `ghc/head.hackage!3` * navigate to commit e130fb57f7991575d848612abafe9ad10129131c of the `ghc/ghc` project: search for `ghc/ghc at e130fb57f7991575d848612abafe9ad10129131c` * search for merge requests of `haskell/ghcup` pertaining to Darwin: search for `haskell/ghcup! Darwin` Note that, as seen in the first two examples, the project name defaults to `ghc/ghc` if omitted. This interface is best used via your browser's search keywords feature [2]. To add a search keyword in Firefox you can right click on the search query input field and select "Add search keyword". To my surprise I didn't find a similarly convenient shortcut in Chromium. If you can think of any way in which the service can be improved, feel free to open a pull request against the upstream repository [3]. Cheers, - Ben [1] https://search.gitlab.haskell.org/ [2] http://kb.mozillazine.org/Using_keyword_searches [3] https://gitlab.haskell.org/bgamari/gitlab-search-service -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Tue Aug 6 16:50:16 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 06 Aug 2019 12:50:16 -0400 Subject: Conveniently searching GitLab In-Reply-To: <87k1bqjlgx.fsf@smart-cactus.org> References: <87k1bqjlgx.fsf@smart-cactus.org> Message-ID: <87h86ujl16.fsf@smart-cactus.org> Ben Gamari writes: > Hello everyone, > ... > * search for merge requests of `haskell/ghcup` pertaining to Darwin: > search for `haskell/ghcup! Darwin` > Unfortunately this example doesn't quite work as I haven't implemented the necessary logic to lookup project IDs from project names. I likely won't get to it in the near future. Perhaps this would be a nice project for someone with a few free minutes. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From allbery.b at gmail.com Tue Aug 6 16:50:31 2019 From: allbery.b at gmail.com (Brandon Allbery) Date: Tue, 6 Aug 2019 12:50:31 -0400 Subject: Conveniently searching GitLab In-Reply-To: <87k1bqjlgx.fsf@smart-cactus.org> References: <87k1bqjlgx.fsf@smart-cactus.org> Message-ID: You can define them in the settings; what's odd about them is that it's part of "Manage search engines". Which is at the bottom of the right click menu in the location bar (with "edit" instead of "manage", so it should be just as easy to do. On Tue, Aug 6, 2019 at 12:41 PM Ben Gamari wrote: > Hello everyone, > > I've long found GitLab's search interface to be rather clunky. Several > of the issues I've noted have been reported upstream but progress in > resolving them seems to be slow. > > Consequently, I have implemented a workaround [1] which addresses most of > my typical use-cases. This is essentially a thin wrapper around GitLab's > various search facilities which intelligently routes requests according > to a simple (and hopefully intuitive) query syntax. 
For instance, to > > * navigate to GHC issue #123: > search for `#123` > > * search for GHC issues related to unboxed tuples: > search for `# unboxed tuples` > > * navigate to the `ghc/head.hackage` project: > search for `ghc/head.hackage>` > > * navigate to merge request !3 of the `ghc/head.hackage` project: > search for `ghc/head.hackage!3` > > * navigate to commit e130fb57f7991575d848612abafe9ad10129131c of the > `ghc/ghc` project: > search for `ghc/ghc at e130fb57f7991575d848612abafe9ad10129131c` > > * search for merge requests of `haskell/ghcup` pertaining to Darwin: > search for `haskell/ghcup! Darwin` > > Note that, as seen in the first two examples, the project name defaults > to `ghc/ghc` if omitted. > > This interface is best used via your browser's search keywords feature > [2]. To add a search keyword in Firefox you can right click on the > search query input field and select "Add search keyword". To my > surprise I didn't find a similarly convenient shortcut in Chromium. > > If you can think of any way in which the service can be improved, feel > free to open a pull request against the upstream repository [3]. > > Cheers, > > - Ben > > > [1] https://search.gitlab.haskell.org/ > [2] http://kb.mozillazine.org/Using_keyword_searches > [3] https://gitlab.haskell.org/bgamari/gitlab-search-service > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -- brandon s allbery kf8nh allbery.b at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From sam.halliday at gmail.com Tue Aug 6 19:52:37 2019 From: sam.halliday at gmail.com (Sam Halliday) Date: Tue, 06 Aug 2019 20:52:37 +0100 Subject: ModuleInfo.minf_rdr_env not exposed In-Reply-To: <87tvav34um.fsf@gmail.com> References: <87tvav34um.fsf@gmail.com> Message-ID: <8736iejcl6.fsf@gmail.com> Hello, I would like to submit a patch to ghc 8.8 adding a function that exposes a field that is useful for tooling authors and can already be reached, albeit in a very awkward way. The process documented at https://gitlab.haskell.org/ghc/ghc/wikis/working-conventions/adding-features seems very heavyweight for the patch that I was planning to propose, which is just adding the following to GHC.hs (and exporting it): modInfoRdrEnv :: ModuleInfo -> Maybe GlobalRdrEnv modInfoRdrEnv = minf_rdr_env Without this accessor, we must reparse and typecheck the file to get the GlobalRdrElts, which is very inefficient and uses fields named "internal" so I'm guessing that's not a good thing to build a tool on top of. My colleague already wrote a tool using a workaround, see https://gitlab.com/tseenshe/hsinspect/blob/503cd48faba5b308be29ed44a12f7d6b22105f2b/exe/Main.hs#L53-61 How should I go about submitting this? Do I need to write a test or is this trivial enough not to bother? Sam Halliday writes: > Hi all, > > Is there a reason why minf_rdr_env (a field in ModuleInfo) is not > exposed? > > It's possible to reconstruct it in 8.4.4 (the only version I'm looking > at) with a TypecheckedModule via > > let (tc_gbl_env, _) = GHC.tm_internals_ tmod > minf_rdr_env = tcg_rdr_env tc_gbl_env > > > It's a useful thing to have for editor tooling (e.g. to get the correct > qualified imported symbols that may be autocompleted). > > -- > Best regards, > Sam -- Best regards, Sam -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 194 bytes Desc: not available URL: From a.pelenitsyn at gmail.com Tue Aug 6 20:20:22 2019 From: a.pelenitsyn at gmail.com (Artem Pelenitsyn) Date: Tue, 6 Aug 2019 16:20:22 -0400 Subject: ModuleInfo.minf_rdr_env not exposed In-Reply-To: <8736iejcl6.fsf@gmail.com> References: <87tvav34um.fsf@gmail.com> <8736iejcl6.fsf@gmail.com> Message-ID: Hey Sam, I think the thing you propose hardly qualifies as a new feature (in the sense of page you referenced), so not much of a hassle should be involved. As long as it is a 3-lines change, I'd say go ahead and create an Issue and an MR on Gitlab: you might be better off with getting feedback there. --Best, Artem On Tue, 6 Aug 2019 at 15:52, Sam Halliday wrote: > Hello, > > I would like to submit a patch to ghc 8.8 adding a function that exposes a > field that is useful for tooling authors and can already be reached, albeit > in a very awkward way. > > The process documented at > > https://gitlab.haskell.org/ghc/ghc/wikis/working-conventions/adding-features > seems very heavyweight for the patch that I was planning to propose, > which is just adding the following to GHC.hs (and exporting it): > > modInfoRdrEnv :: ModuleInfo -> Maybe GlobalRdrEnv > modInfoRdrEnv = minf_rdr_env > > Without this accessor, we must reparse and typecheck the file to get the > GlobalRdrElts, which is very inefficient and uses fields named "internal" > so I'm guessing that's not a good thing to build a tool on top of. > > My colleague already wrote a tool using a workaround, see > https://gitlab.com/tseenshe/hsinspect/blob/503cd48faba5b308be29ed44a12f7d6b22105f2b/exe/Main.hs#L53-61 > > How should I go about submitting this? Do I need to write a test or is > this trivial enough not to bother? > > Sam Halliday writes: > > > Hi all, > > > > Is there a reason why minf_rdr_env (a field in ModuleInfo) is not > > exposed? > > > > It's possible to reconstruct it in 8.4.4 (the only version I'm looking > > at) with a TypecheckedModule via > > > > let (tc_gbl_env, _) = GHC.tm_internals_ tmod > > minf_rdr_env = tcg_rdr_env tc_gbl_env > > > > > > It's a useful thing to have for editor tooling (e.g. to get the correct > > qualified imported symbols that may be autocompleted). > > > > -- > > Best regards, > > Sam > > -- > Best regards, > Sam > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sam.halliday at gmail.com Wed Aug 7 19:30:11 2019 From: sam.halliday at gmail.com (Sam Halliday) Date: Wed, 07 Aug 2019 20:30:11 +0100 Subject: ModuleInfo.minf_rdr_env not exposed In-Reply-To: References: <87tvav34um.fsf@gmail.com> <8736iejcl6.fsf@gmail.com> Message-ID: <87a7cklqnw.fsf@gmail.com> Thanks Artem, I created a merge request at https://gitlab.haskell.org/ghc/ghc/merge_requests/1541 I wanted to target the 8.8 branch but I couldn't find anything except master and that seems to be 8.9 already. If the reviewers are happy with this, could they please cherry pick it to wherever it needs to go? I'd also love to see it in 8.6 if there are any more releases as it would mean no more workarounds in the downstream tool :-D Artem Pelenitsyn writes: > Hey Sam, > > I think the thing you propose hardly qualifies as a new feature (in the > sense of page you referenced), so not much of a hassle should be involved. 
> As long as it is a 3-lines change, I'd say go ahead and create an Issue and > an MR on Gitlab: you might be better off with getting feedback there. > > --Best, Artem > > On Tue, 6 Aug 2019 at 15:52, Sam Halliday wrote: > >> Hello, >> >> I would like to submit a patch to ghc 8.8 adding a function that exposes a >> field that is useful for tooling authors and can already be reached, albeit >> in a very awkward way. >> >> The process documented at >> >> https://gitlab.haskell.org/ghc/ghc/wikis/working-conventions/adding-features >> seems very heavyweight for the patch that I was planning to propose, >> which is just adding the following to GHC.hs (and exporting it): >> >> modInfoRdrEnv :: ModuleInfo -> Maybe GlobalRdrEnv >> modInfoRdrEnv = minf_rdr_env >> >> Without this accessor, we must reparse and typecheck the file to get the >> GlobalRdrElts, which is very inefficient and uses fields named "internal" >> so I'm guessing that's not a good thing to build a tool on top of. >> >> My colleague already wrote a tool using a workaround, see >> https://gitlab.com/tseenshe/hsinspect/blob/503cd48faba5b308be29ed44a12f7d6b22105f2b/exe/Main.hs#L53-61 >> >> How should I go about submitting this? Do I need to write a test or is >> this trivial enough not to bother? >> >> Sam Halliday writes: >> >> > Hi all, >> > >> > Is there a reason why minf_rdr_env (a field in ModuleInfo) is not >> > exposed? >> > >> > It's possible to reconstruct it in 8.4.4 (the only version I'm looking >> > at) with a TypecheckedModule via >> > >> > let (tc_gbl_env, _) = GHC.tm_internals_ tmod >> > minf_rdr_env = tcg_rdr_env tc_gbl_env >> > >> > >> > It's a useful thing to have for editor tooling (e.g. to get the correct >> > qualified imported symbols that may be autocompleted). >> > >> > -- >> > Best regards, >> > Sam >> >> -- >> Best regards, >> Sam >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> -- Best regards, Sam -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 194 bytes Desc: not available URL: From sam.halliday at gmail.com Wed Aug 7 19:39:18 2019 From: sam.halliday at gmail.com (Sam Halliday) Date: Wed, 07 Aug 2019 20:39:18 +0100 Subject: LoadAllTargets not loading dependencies Message-ID: <877e7olq8p.fsf@gmail.com> Hello all, I've been following along with Stephen Diehl's blog series on the ghc api and I am using `load LoadAllTargets` on a .hs file (let's call it Wibble.hs) and then typechecking. All seems well until I add a second module (let's call it Wobble.hs) in the same package and import it from Wibble. I can see that the load has correctly calculated the dependency graph and includes Wobble in the dependencies of Wibble. But if I try to typecheck Wibble, then I get an error: Could not find module ‘Wobble’ Use -v to see a list of the files searched for. | 4 | import Wobble | ^^^^^^^^^^^^^^^^^^^^^^^ I have separately compiled the package, and can see its packagedb is listed in the .ghc.env file. What else do I need to do so that Wibble can see Wobble? -- Best regards, Sam -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 194 bytes Desc: not available URL: From a.pelenitsyn at gmail.com Wed Aug 7 19:47:37 2019 From: a.pelenitsyn at gmail.com (Artem Pelenitsyn) Date: Wed, 7 Aug 2019 15:47:37 -0400 Subject: LoadAllTargets not loading dependencies In-Reply-To: <877e7olq8p.fsf@gmail.com> References: <877e7olq8p.fsf@gmail.com> Message-ID: Hello Sam, It'd be easier to help if you posted the complete example somewhere (e.g. Github). -- Best, Artem On Wed, 7 Aug 2019 at 15:39, Sam Halliday wrote: > Hello all, > > I've been following along with Stephen Diehl's blog series on the ghc > api and I am using `load LoadAllTargets` on a .hs file (let's call it > Wibble.hs) and then typechecking. > > All seems well until I add a second module (let's call it Wobble.hs) in > the same package and import it from Wibble. > > I can see that the load has correctly calculated the dependency graph > and includes Wobble in the dependencies of Wibble. But if I try to > typecheck Wibble, then I get an error: > > Could not find module ‘Wobble’ > Use -v to see a list of the files searched for. > | > 4 | import Wobble > | ^^^^^^^^^^^^^^^^^^^^^^^ > > I have separately compiled the package, and can see its packagedb is > listed in the .ghc.env file. What else do I need to do so that Wibble > can see Wobble? > > -- > Best regards, > Sam > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sam.halliday at gmail.com Wed Aug 7 20:31:04 2019 From: sam.halliday at gmail.com (Sam Halliday) Date: Wed, 07 Aug 2019 21:31:04 +0100 Subject: LoadAllTargets not loading dependencies In-Reply-To: References: <877e7olq8p.fsf@gmail.com> Message-ID: <874l2slnuf.fsf@gmail.com> Hi Artem, You're quite right, because when I tried to minimise this I discovered that it is actually a cabal env file bug! It seems that cabal v2-exec produces different env files depending on if -w is used or not, and the one I was using is the broken one. Artem Pelenitsyn writes: > Hello Sam, > > It'd be easier to help if you posted the complete example somewhere (e.g. > Github). > > -- > Best, Artem > > On Wed, 7 Aug 2019 at 15:39, Sam Halliday wrote: > >> Hello all, >> >> I've been following along with Stephen Diehl's blog series on the ghc >> api and I am using `load LoadAllTargets` on a .hs file (let's call it >> Wibble.hs) and then typechecking. >> >> All seems well until I add a second module (let's call it Wobble.hs) in >> the same package and import it from Wibble. >> >> I can see that the load has correctly calculated the dependency graph >> and includes Wobble in the dependencies of Wibble. But if I try to >> typecheck Wibble, then I get an error: >> >> Could not find module ‘Wobble’ >> Use -v to see a list of the files searched for. >> | >> 4 | import Wobble >> | ^^^^^^^^^^^^^^^^^^^^^^^ >> >> I have separately compiled the package, and can see its packagedb is >> listed in the .ghc.env file. What else do I need to do so that Wibble >> can see Wobble? >> >> -- >> Best regards, >> Sam >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> -- Best regards, Sam -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 194 bytes Desc: not available URL: From omeragacan at gmail.com Sat Aug 10 08:48:56 2019 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Sat, 10 Aug 2019 11:48:56 +0300 Subject: Gitlab's disk full again Message-ID: Hi, Just yesterday Gitlab was giving 500 because the disk was full. Ben deleted some files, but in less than 24h it's full again. This started happening regularly, I wonder if we could do something about this. The reason this time seems to be that Gitlab started generating 22G-large backups daily since the 7th. I'm not sure how important those backups are so I'm not deleting them. There's also a large docker-registry directory (101G). I think it might be good to set up some kind of downtime monitoring or maybe something on the Gitlab server to send an email when the disk is nearly full. It could send an email to people who has access to the server. It'd also be good to come up with an action plan when this happens. I have access to the server, but I have no idea which files are important. Documenting Gitlab setup (and the server details) in more details might be helpful. Does anyone have any other ideas to keep the server running? Ömer From a.pelenitsyn at gmail.com Sat Aug 10 22:22:44 2019 From: a.pelenitsyn at gmail.com (Artem Pelenitsyn) Date: Sat, 10 Aug 2019 18:22:44 -0400 Subject: Gitlab's disk full again In-Reply-To: References: Message-ID: Hello, Is there a reason to keep more than one backup of GitLab ever? -- Best, Artem On Sat, Aug 10, 2019, 4:49 AM Ömer Sinan Ağacan wrote: > Hi, > > Just yesterday Gitlab was giving 500 because the disk was full. Ben > deleted some > files, but in less than 24h it's full again. This started happening > regularly, I > wonder if we could do something about this. > > The reason this time seems to be that Gitlab started generating 22G-large > backups daily since the 7th. I'm not sure how important those backups are > so I'm > not deleting them. > > There's also a large docker-registry directory (101G). > > I think it might be good to set up some kind of downtime monitoring or > maybe > something on the Gitlab server to send an email when the disk is nearly > full. It > could send an email to people who has access to the server. > > It'd also be good to come up with an action plan when this happens. I have > access to the server, but I have no idea which files are important. > Documenting > Gitlab setup (and the server details) in more details might be helpful. > > Does anyone have any other ideas to keep the server running? > > Ömer > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From b at chreekat.net Sun Aug 11 08:50:24 2019 From: b at chreekat.net (Bryan Richter) Date: Sun, 11 Aug 2019 11:50:24 +0300 Subject: Gitlab's disk full again In-Reply-To: References: Message-ID: Hi Artem, I would say it's usual operations practice to keep N>1 backups of a system as assurance against corrupted backups. But maybe they could be stored on another server/service? Other suggestions: Gitlab stores both artifacts and caches for the CI pipelines. By default, archives are stored on the same machine as the GitLab service, creating the risk of resource contention. But there is an option to store them in an object storage service e.g. S3. 
The same goes for caches, but I think they are stored on the CI runner machine by default (is it separate from the GitLab machine?). Plus, caches are shared across many jobs while artifacts are unique to a job, so there are many less caches than artifacts. Still, it might be valuable to audit the use of both artifacts and caches. On Sun, 11 Aug 2019, 1.23 Artem Pelenitsyn, wrote: > Hello, > > Is there a reason to keep more than one backup of GitLab ever? > > -- > Best, Artem > > On Sat, Aug 10, 2019, 4:49 AM Ömer Sinan Ağacan > wrote: > >> Hi, >> >> Just yesterday Gitlab was giving 500 because the disk was full. Ben >> deleted some >> files, but in less than 24h it's full again. This started happening >> regularly, I >> wonder if we could do something about this. >> >> The reason this time seems to be that Gitlab started generating 22G-large >> backups daily since the 7th. I'm not sure how important those backups are >> so I'm >> not deleting them. >> >> There's also a large docker-registry directory (101G). >> >> I think it might be good to set up some kind of downtime monitoring or >> maybe >> something on the Gitlab server to send an email when the disk is nearly >> full. It >> could send an email to people who has access to the server. >> >> It'd also be good to come up with an action plan when this happens. I have >> access to the server, but I have no idea which files are important. >> Documenting >> Gitlab setup (and the server details) in more details might be helpful. >> >> Does anyone have any other ideas to keep the server running? >> >> Ömer >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgraf1337 at gmail.com Sun Aug 11 11:42:49 2019 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Sun, 11 Aug 2019 12:42:49 +0100 Subject: PseudoOps in primops.txt.pp Message-ID: Hey fellow devs, While implementing new PseudoOps, a couple of questions popped up: 1. What are PseudoOps? When do we want to declare one? There doesn't seem to be any documentation around them. I only figured out that I probably want a PseudoOp by comparing to PrimOps I thought would be lowered at a similar stage (i.e. somewhere in Core or STG). 2. Why aren't GHC.Magic.{lazy,noinline,oneShot} PseudoOps? 3. Since we have to set all the IdInfo for `seq` and `noinline` manually, why is this incomplete? I.e., I'd expect a useful strictness signature and arity for both of these. Thanks! Sebastian -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgraf1337 at gmail.com Sun Aug 11 13:14:05 2019 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Sun, 11 Aug 2019 14:14:05 +0100 Subject: PseudoOps in primops.txt.pp In-Reply-To: References: Message-ID: This turned out to be rather lengthy and ambivalent, but my current TLDR; of this is that GHC.Magic Ids could all be PseudoOps, because we don't use their definitions anyway. --- Regarding 2., the answer has been right before my eyes in the form of Note [ghcPrimIds (aka pseudoops)] and Note [magicIds]. The most important difference I guess is that we can give meaningful, yet forgetful definitions for functions in GHC.Magic, whereas we can't for proper pseudoops. 
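For readers following along, those forgetful definitions are, up to pragmas, Notes and levity polymorphism, just identity functions; all of the interesting behaviour is attached inside the compiler via magicIds. Paraphrasing ghc-prim's GHC.Magic:

-- Paraphrase of GHC.Magic: the source definitions say nothing about the
-- operational magic, which lives entirely in the compiler.
lazy :: a -> a
lazy x = x

noinline :: a -> a
noinline x = x

oneShot :: (a -> b) -> a -> b
oneShot f = f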
Note [ghcPrimIds (aka pseudoops)] also answers 3.: IIUC if PseudoOps aren't free abstractions already (proxy#), we try to inline them immediately in Core. For example `noinline`, which is never inlined, could never be a PseudoOp. As a side note: `noinline` seems to lack proper handling in the demand analyser. For example, `noinline id x` should be detected as strict in `x` by unleashing `id`s strictness signature. Not sure if we currently do that. The "What are PseudoOps" part of 1. is thus mostly resolved: PseudoOps are functions with special semantics that can be lowered or erased in Core or STG, so we will never have to think about generating code for them. We still need to treat them specially, because we have no way to encode their semantics at the source level. Examples (these are all current PseudoOps): - `seq` works on functions, but Haskell's `case` doesn't - `proxy#` is a symbolic inhabitant of `Proxy#`, which will be erased in code generation. I guess with -XUnliftedNewtypes we can finally define `Proxy#` in source Haskell as `newtype Proxy# a = Proxy# (# #)` - `unsafeCoerce#` can only be erased when going to STG, where we don't type check as part of linting. - `coerce` gets translated to casts as part of desugaring. - `nullAddr#` get inlined immediately to corresponding literal in Core. This is so that source syntax doesn't have to introduce a new literal. Similarly, the definitions of GHC.Magic all seem to vanish after CorePrep. In fact, I begin to think that GHC.Magic is just a subset of PseudoOps that have semantics expressible in source Haskell (thus have a meaningful definition). Which somewhat contradicts my observation above that `noinline` couldn't be a PseudoOp: Clearly it could, because it is lowered to id by the time we go to STG. This lowering (even in higher-order situations, which is why we actually don't need the definition) seems to be the whole point about having the compiler be aware of these special identifiers. So, for a concrete question: What are the reasons that we don't make i.e. `lazy` a PseudoOp? Am So., 11. Aug. 2019 um 12:42 Uhr schrieb Sebastian Graf < sgraf1337 at gmail.com>: > Hey fellow devs, > > While implementing new PseudoOps, a couple of questions popped up: > > 1. What are PseudoOps? When do we want to declare one? There doesn't > seem to be any documentation around them. I only figured out that I > probably want a PseudoOp by comparing to PrimOps I thought would be lowered > at a similar stage (i.e. somewhere in Core or STG). > 2. Why aren't GHC.Magic.{lazy,noinline,oneShot} PseudoOps? > 3. Since we have to set all the IdInfo for `seq` and `noinline` > manually, why is this incomplete? I.e., I'd expect a useful strictness > signature and arity for both of these. > > Thanks! > Sebastian > -------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Sun Aug 11 16:22:45 2019 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Sun, 11 Aug 2019 19:22:45 +0300 Subject: Gitlab's disk full again In-Reply-To: References: Message-ID: The disk issue is fixed now, but one day later Gitlab is broken again. This time I'm getting random 502s. 
I tried to check the logs. I don't know what the problem is, but I saw one error line which may be relevant: Aug 11 12:21:09 gitlab.haskell.org unicorn[3605]: /nix/store/h6ppx34ccb3binw7awbphaicv5q938z5-ruby2.5.5-prometheus-client-mmap-0.9.8/lib/ruby/gems/2.5.0/gems/prometheus-client-mmap-0.9.8/lib/prometheus/client/mmaped_dict.rb:42: [BUG] Bus Error at 0x00007fe3cd6f6000 Ömer Ömer Sinan Ağacan , 10 Ağu 2019 Cmt, 11:48 tarihinde şunu yazdı: > > Hi, > > Just yesterday Gitlab was giving 500 because the disk was full. Ben deleted some > files, but in less than 24h it's full again. This started happening regularly, I > wonder if we could do something about this. > > The reason this time seems to be that Gitlab started generating 22G-large > backups daily since the 7th. I'm not sure how important those backups are so I'm > not deleting them. > > There's also a large docker-registry directory (101G). > > I think it might be good to set up some kind of downtime monitoring or maybe > something on the Gitlab server to send an email when the disk is nearly full. It > could send an email to people who has access to the server. > > It'd also be good to come up with an action plan when this happens. I have > access to the server, but I have no idea which files are important. Documenting > Gitlab setup (and the server details) in more details might be helpful. > > Does anyone have any other ideas to keep the server running? > > Ömer From lonetiger at gmail.com Tue Aug 13 03:12:59 2019 From: lonetiger at gmail.com (Phyx) Date: Tue, 13 Aug 2019 04:12:59 +0100 Subject: margebot. Message-ID: Hello, margebot seems down, or did I miss a memo on how to commit now? Cheers, Tamar -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Tue Aug 13 07:29:54 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 13 Aug 2019 09:29:54 +0200 Subject: How to work out why a data constructor is allocated using gdb? Message-ID: Hi, I am trying to work out questions such as * Why are there thousands of Module data constructors allocated when building something with GHC? * What is allocating so many strings when building GHC? In order to do this I can use gdb to find some of the Module/String closures, but then I'm a bit stuck about what to do. gdb displays a list of all the Module closures, for example, and then you can usually try to find the retainer for a Module by using findPtr. If the retainer is a THUNK closure, it would be easy, as THUNK closures have DWARF information which maps straight to a particular line. However, if the retainer is just some other data constructor (for example, the Module is stored in a Map), it's data constructors all the way up and none of them have DWARF info. I need to fall back to domain-specific knowledge to work out where such a sequence of constructors might appear in my program. * Is there anything better I can do to map a constructor allocation to a more precise source location? The string closures were causing me some particular issues, as `findPtr` was not showing any retainers, so it's hard to work out why they are not GCd. * In what situations can an object be retained but show no retainer when using findPtr?
Cheers, Matt From chak at justtesting.org Tue Aug 13 15:42:53 2019 From: chak at justtesting.org (Manuel M T Chakravarty) Date: Tue, 13 Aug 2019 17:42:53 +0200 Subject: ANN: IOHK is looking for Functional Compiler Engineers Message-ID: <07AE86AC-8E7E-4910-9CFF-AA22AF31BAE3@justtesting.org> The IOHK Plutus Team, which works on a Haskell-based platform for contracts on the Cardano blockchain, is looking for new team members: https://iohk.io/careers/#op-341518-functional-compiler-engineer See also Phil Wadler’s blog post: http://wadler.blogspot.com/2019/08/iohk-is-hiring.html A core component of the Plutus Platform is a GHC plugin that translates GHC Core to the on-chain Plutus Core language. A central part of the underlying transformation scheme is described in https://iohk.io/research/library/#unraveling-recursion-compiling-an-ir-with-recursion-to-system-f Manuel From lexi.lambda at gmail.com Sat Aug 17 02:12:53 2019 From: lexi.lambda at gmail.com (Alexis King) Date: Fri, 16 Aug 2019 21:12:53 -0500 Subject: Properly writing typechecker plugins In-Reply-To: References: <55E0D054-1AF7-4611-A6DD-9B84A43FBC51@gmail.com> <1E3BE9D1-3D00-4BEB-810D-6A60D1F1424C@gmail.com> Message-ID: Apologies for the long delay before replying to this, I ended up becoming very busy for a couple weeks. > On Aug 2, 2019, at 02:57, Simon Peyton Jones wrote: > > We should not need to delete solved _givens_ from the inert set. We can augment givens with extra facts, but deleting them seems wrong. I agree with this, so no complaints from me about that. > There should be no Derived constraints in the inert set anyway. They should all be in the WantedConstraints passed to runTcPluginsWanted. They were extracted from the inert set, along with the Deriveds, by getUnsolvedInerts in solve_simple_wanteds Upon looking at the code more carefully, you’re quite right—I was dead wrong, and typechecker plugins can solve derived constraints just fine. I spent several hours debugging the gory details this evening, and after finding the problem, I realized the bug had already been reported as issue #16735 . A little anticlimactic, but yet another reason to figure out how to build GHC HEAD on my machine (and maybe then I can think about submitting some of the documentation changes, too). Thanks again, Alexis -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandy at sandymaguire.me Sun Aug 18 00:27:41 2019 From: sandy at sandymaguire.me (Sandy Maguire) Date: Sat, 17 Aug 2019 17:27:41 -0700 Subject: Getting a hole's relevant local binds? Message-ID: Hi all, I'm trying to get my hands on the relevant local binds (as reported by ghc in the presence of a type hole) for editor tooling support. Tracing the code suggests that these things come from the `TcLclEnv`, but afaict, all remnants of `TcLclEnv` are thrown away by the time we get a `TypecheckedModule`. Am I mistaken in this? If not, how receptive would y'all be to a patch that puts the `TcLclEnv`, or something similar inside `XUnboundVar GhcTc`. This way editors would have an easy means of getting their hand on whatever is in scope at the site of a hole, without resorting to parsing error messages. Cheers, Sandy -- I'm currently traveling the world, sleeping on people's couches and doing full-time collaboration on Haskell projects. If this seems interesting to you, please consider signing up as a host! https://isovector.github.io/erdos/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From trupill at gmail.com Mon Aug 19 06:11:22 2019 From: trupill at gmail.com (Alejandro Serrano Mena) Date: Mon, 19 Aug 2019 08:11:22 +0200 Subject: Missing library in Mac OS X instructions? Message-ID: Hi, I tried to get GHC working from the repo in Mac OS X following the instructions in https://gitlab.haskell.org/ghc/ghc/wikis/building/preparation/mac-osx. At first I got the "error __GNU_MP_VERSION not defined" problem, for which the guide recommends to prepend "CC=clang" to "./configure" , which I did. But even then, I also had to install "gmp" using "brew install gmp". 1. Is it right that I need to install gmp using brew or did I miss some previous step? 2. If (1) is affirmative, how can I help update the docs for building in Mac OS X? Regards, Alejandro -------------- next part -------------- An HTML attachment was scrubbed... URL: From shayne.fletcher at daml.com Mon Aug 19 09:08:58 2019 From: shayne.fletcher at daml.com (Shayne Fletcher) Date: Mon, 19 Aug 2019 05:08:58 -0400 Subject: Missing library in Mac OS X instructions? In-Reply-To: References: Message-ID: Hi Alejandro, On Mon, Aug 19, 2019 at 2:11 AM Alejandro Serrano Mena wrote: > Hi, > I tried to get GHC working from the repo in Mac OS X following the > instructions in > https://gitlab.haskell.org/ghc/ghc/wikis/building/preparation/mac-osx. At > first I got the "error __GNU_MP_VERSION not defined" problem, for which > the guide recommends to prepend "CC=clang" to "./configure" , which I did. > But even then, I also had to install "gmp" using "brew install gmp". > 1. Is it right that I need to install gmp using brew or did I miss some > previous step? > 2. If (1) is affirmative, how can I help update the docs for building in > Mac OS X? > > I've encountered this problem. In my case it turned out that I had a rogue `cc` executable in my path. See https://gitlab.haskell.org/ghc/ghc/issues/16904 for details. -- *Shayne Fletcher* Language Engineer */* +1 917 699 7663 *Digital Asset* , creators of *DAML * -- This message, and any attachments, is for the intended recipient(s) only, may contain information that is privileged, confidential and/or proprietary and subject to important terms and conditions available at  http://www.digitalasset.com/emaildisclaimer.html . If you are not the intended recipient, please delete this message. -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Aug 19 20:16:46 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 19 Aug 2019 20:16:46 +0000 Subject: Haskell implementors workshop Message-ID: Colleagues At the Haskell Implementors Workshop on Friday there is a 23-min slot for an update on GHC status. It's followed by panel discussion. In our annual status reports I have typically summarised what is going on with a few slides. But as GHC grows, and more people contribute, it gets harder to do a good job of making such a summary. So this year I plan to: 1. Summarise some highlights that I know about 2. Invite anyone in the room who would like to briefly describe what they are working on - or have proudly completed in the last year - to grab the microphone and tell everyone about it. 3. Ben will summarise progress on CI, Gitlab, and Hadrian 4. Joachim will say what's happened on the GHC-proposal front To make (2) work * Come to the front so that each handover takes only a few seconds. * Be enthusiastic but brief. Thirty seconds, not two minutes. We only have 25 mins for the whole thing. 
* If you aren't at ICFP, but would have liked to describe your project to the audience, do send me a couple of sentences that I can use on your behalf. * It's a GHC status session, so anything in the GHC ecosystem is fair game, including different back ends, IDE work, plugins, etc. Tell me in advance if you like, but it's also fine just to be ready on the day. Thanks! Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From lexi.lambda at gmail.com Tue Aug 20 08:16:30 2019 From: lexi.lambda at gmail.com (Alexis King) Date: Tue, 20 Aug 2019 03:16:30 -0500 Subject: Typechecker plugins and BuiltInSynFamily Message-ID: Hello all, As I’ve been dabbling with typechecker plugins, I’ve found myself primarily using them to define new “magic” type families, and I don’t think I’m alone—Sandy Maguire recently released the magic-tyfams package for precisely that purpose. However, I can’t help but notice that GHC already has good internal support for such type families via BuiltInSynFamily and CoAxiomRule, which are mostly used to implement operations on Nats. As a plugin author, I would love to be able to use that functionality directly instead of being forced to reimplement it myself, for two big reasons: AxiomRuleCo provides significantly more safety from -dcore-lint than UnivCo, but UnivCo is currently the only way to provide evidence for plugin-solved families. The sfInteractTop and sfInteractInert fields of BuiltInSynFamily make it easy to support improvement for custom type families, which I believe would take a non-trivial amount of tricky code to get right using the current typechecker plugin API. Given the above, I started wondering if it is possible to define a BuiltInSynFamily from inside a plugin or, failing that, to modify GHC to expose that functionality to typechecker plugin authors. I am not familiar with GHC’s internals, but in my brief reading of the source code, the following two things seem like the trickiest obstacles: BuiltInSynFamily TyCons need to be injected into the initial name cache, since otherwise those names will get resolved to their ordinary, non-built-in counterparts (e.g. the ordinary open type families defined in GHC.TypeLits). Since CoAxiomRule values actually have functions inside them, they can’t be serialized into interface files. Therefore, it looks like GHC currently maintains a hardcoded list of all the known CoAxiomRules, and tcIfaceCoAxiomRule just searches for a value in that list using a well-known (i.e. not in any way namespaced!) string. I am not knowledgable enough about GHC to say how hard overcoming either of those issues would be. Point 1 seems possible to achieve by arranging for plugins to export the built-in names they want to define and propagating those to the initial name cache, but I don’t know enough about how plugins are loaded to know if that would create any possible circular dependencies (i.e. does the name cache need to already exist in order to load a plugin in the first place?). Point 2 seems harder. My gut instinct is that it could probably be overcome by somehow storing a reference to the plugin that defined the CoAxiomRule in the interface file (by, for example, storing its package-qualified module name), but I’m not immediately certain when that reference should be resolved to the actual CoAxiomRule value. It also presumably needs to complain if the necessary plugin is not actually loaded when the CoAxiomRule needs to be resolved! 
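For reference, the two definitions under discussion look roughly like the following. This is reproduced from memory of compiler/types/CoAxiom.hs, so take the exact field types as approximate rather than authoritative:

type TypeEqn = Pair Type

-- A built-in family knows how to reduce itself (sfMatchFam) and how to
-- improve constraints that mention it (the two interaction functions).
data BuiltInSynFamily = BuiltInSynFamily
  { sfMatchFam      :: [Type] -> Maybe (CoAxiomRule, [Type], Type)
  , sfInteractTop   :: [Type] -> Type -> [TypeEqn]
  , sfInteractInert :: [Type] -> Type -> [Type] -> Type -> [TypeEqn]
  }

-- The proof rule referenced by AxiomRuleCo.  The embedded function in
-- coaxrProves is exactly what stops it being serialised into interface
-- files.
data CoAxiomRule = CoAxiomRule
  { coaxrName      :: FastString
  , coaxrAsmpRoles :: [Role]
  , coaxrRole      :: Role
  , coaxrProves    :: [TypeEqn] -> Maybe TypeEqn
  }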
I’m willing to try my hand at experimenting with an implementation of this if someone can give me a couple pointers on where to start on the above two issues (assuming people don’t think it’s a bad idea to do it at all). Any advice would be appreciated! Thanks, Alexis -------------- next part -------------- An HTML attachment was scrubbed... URL: From avneesh.chadha at gmail.com Tue Aug 20 08:42:06 2019 From: avneesh.chadha at gmail.com (Avneesh Chadha) Date: Tue, 20 Aug 2019 14:12:06 +0530 Subject: Contributing to GHC : Newbie Message-ID: Hi, I am trying to contribute to GHC and am a complete newbie at this. The new contribution guide pointed me to the issues page(I was able to build the GHC from the code), and I was able to find this feature request. However it does not have any information apart from the feature information. Could someone please add more information to the issue? Warm Regards, Avneesh Chadha Phone:+917838478116 GitHub |LinkedIn |Blog PS: I am sorry for spamming everyone in-case this is not the correct forum , I would appreciate if someone could point to the right one. -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Tue Aug 20 09:04:08 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 20 Aug 2019 11:04:08 +0200 Subject: Contributing to GHC : Newbie In-Reply-To: References: Message-ID: Hi Avneesh, I have a partially completed patch for this feature which you are welcome to finish. There is some discussion on the ticket about why the patch is broken and some possible ways to fix it. https://gitlab.haskell.org/ghc/ghc/merge_requests/1340 Cheers, Matt On Tue, Aug 20, 2019 at 10:42 AM Avneesh Chadha wrote: > > Hi, > > I am trying to contribute to GHC and am a complete newbie at this. > > The new contribution guide pointed me to the issues page(I was able to build the GHC from the code), and I was able to find this feature request. However it does not have any information apart from the feature information. Could someone please add more information to the issue? > > Warm Regards, > Avneesh Chadha > Phone:+917838478116 > GitHub|LinkedIn|Blog > > > PS: I am sorry for spamming everyone in-case this is not the correct forum , I would appreciate if someone could point to the right one. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From strake888 at gmail.com Tue Aug 20 21:59:49 2019 From: strake888 at gmail.com (Matthew Farkas-Dyck) Date: Tue, 20 Aug 2019 13:59:49 -0800 Subject: `StablePtr` in `ST` Message-ID: I have been doing some work where i want `StablePtr`, but also to not be confined to `IO`. I saw the following comment in "compiler/prelude/PrimOp.hs": Question: Why @RealWorld@ - won't any instance of @_ST@ do the job? [ADR] It has been there for 20 years. What is the answer? If it is safe i'll send the patch generalizing these operations. From david.feuer at gmail.com Wed Aug 21 00:39:10 2019 From: david.feuer at gmail.com (David Feuer) Date: Wed, 21 Aug 2019 07:39:10 +0700 Subject: `StablePtr` in `ST` In-Reply-To: References: Message-ID: So something like newtype StablePtr a = StablePtr (StablePtrST RealWorld a)? I suppose that could work with some discipline. You have to assume that foreign code doesn't pick its address out of a hat and so something silly, but I guess you pretty much have to assume that anyway. 
On Wed, Aug 21, 2019, 5:00 AM Matthew Farkas-Dyck wrote: > I have been doing some work where i want `StablePtr`, but also to not > be confined to `IO`. I saw the following comment in > "compiler/prelude/PrimOp.hs": > > Question: Why @RealWorld@ - won't any instance of @_ST@ do the job? [ADR] > > It has been there for 20 years. What is the answer? If it is safe i'll > send the patch generalizing these operations. > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Wed Aug 21 01:08:52 2019 From: david.feuer at gmail.com (David Feuer) Date: Wed, 21 Aug 2019 08:08:52 +0700 Subject: `StablePtr` in `ST` In-Reply-To: References: Message-ID: You also need to avoid inspecting the StablePtr itself, which is just a number, to maintain purity. The whole thing is a bit weird. Why do you want this anyway? On Wed, Aug 21, 2019, 7:39 AM David Feuer wrote: > So something like > > newtype StablePtr a = StablePtr (StablePtrST RealWorld a)? > > I suppose that could work with some discipline. You have to assume that > foreign code doesn't pick its address out of a hat and so something silly, > but I guess you pretty much have to assume that anyway. > > On Wed, Aug 21, 2019, 5:00 AM Matthew Farkas-Dyck > wrote: > >> I have been doing some work where i want `StablePtr`, but also to not >> be confined to `IO`. I saw the following comment in >> "compiler/prelude/PrimOp.hs": >> >> Question: Why @RealWorld@ - won't any instance of @_ST@ do the job? [ADR] >> >> It has been there for 20 years. What is the answer? If it is safe i'll >> send the patch generalizing these operations. >> _______________________________________________ >> Glasgow-haskell-users mailing list >> Glasgow-haskell-users at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From trupill at gmail.com Wed Aug 21 09:19:09 2019 From: trupill at gmail.com (Alejandro Serrano Mena) Date: Wed, 21 Aug 2019 11:19:09 +0200 Subject: Missing library in Mac OS X instructions? In-Reply-To: References: Message-ID: Thanks for the reply! In my case, it was a clean installation of Mac OS X, with only Xcode command line tools and after executing the corresponding 'brew' command from the wiki. El lun., 19 ago. 2019 11:09, Shayne Fletcher escribió: > Hi Alejandro, > > On Mon, Aug 19, 2019 at 2:11 AM Alejandro Serrano Mena > wrote: > >> Hi, >> I tried to get GHC working from the repo in Mac OS X following the >> instructions in >> https://gitlab.haskell.org/ghc/ghc/wikis/building/preparation/mac-osx. >> At first I got the "error __GNU_MP_VERSION not defined" problem, for >> which the guide recommends to prepend "CC=clang" to "./configure" , which >> I did. But even then, I also had to install "gmp" using "brew install gmp". >> 1. Is it right that I need to install gmp using brew or did I miss some >> previous step? >> 2. If (1) is affirmative, how can I help update the docs for building in >> Mac OS X? >> >> > I've encountered this problem. In my case it turned out that I had a rogue > `cc` executable in my path. See > https://gitlab.haskell.org/ghc/ghc/issues/16904 for details. 
> > -- > *Shayne Fletcher* > Language Engineer */* +1 917 699 7663 > *Digital Asset* , creators of *DAML > * > > This message, and any attachments, is for the intended recipient(s) only, > may contain information that is privileged, confidential and/or proprietary > and subject to important terms and conditions available at > http://www.digitalasset.com/emaildisclaimer.html. If you are not the > intended recipient, please delete this message. -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Wed Aug 21 16:52:17 2019 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 21 Aug 2019 18:52:17 +0200 Subject: Missing library in Mac OS X instructions? In-Reply-To: References: Message-ID: heres what you need to do cp mk/build.mk.sample mk/build.mk then uncomment the line about GMP On Wed, Aug 21, 2019 at 11:19 AM Alejandro Serrano Mena wrote: > Thanks for the reply! > In my case, it was a clean installation of Mac OS X, with only Xcode > command line tools and after executing the corresponding 'brew' command > from the wiki. > > El lun., 19 ago. 2019 11:09, Shayne Fletcher > escribió: > >> Hi Alejandro, >> >> On Mon, Aug 19, 2019 at 2:11 AM Alejandro Serrano Mena >> wrote: >> >>> Hi, >>> I tried to get GHC working from the repo in Mac OS X following the >>> instructions in >>> https://gitlab.haskell.org/ghc/ghc/wikis/building/preparation/mac-osx. >>> At first I got the "error __GNU_MP_VERSION not defined" problem, for >>> which the guide recommends to prepend "CC=clang" to "./configure" , which >>> I did. But even then, I also had to install "gmp" using "brew install gmp". >>> 1. Is it right that I need to install gmp using brew or did I miss some >>> previous step? >>> 2. If (1) is affirmative, how can I help update the docs for building in >>> Mac OS X? >>> >>> >> I've encountered this problem. In my case it turned out that I had a >> rogue `cc` executable in my path. See >> https://gitlab.haskell.org/ghc/ghc/issues/16904 for details. >> >> -- >> *Shayne Fletcher* >> Language Engineer */* +1 917 699 7663 >> *Digital Asset* , creators of *DAML >> * >> >> This message, and any attachments, is for the intended recipient(s) only, >> may contain information that is privileged, confidential and/or proprietary >> and subject to important terms and conditions available at >> http://www.digitalasset.com/emaildisclaimer.html. If you are not the >> intended recipient, please delete this message. > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.pelenitsyn at gmail.com Thu Aug 22 12:09:43 2019 From: a.pelenitsyn at gmail.com (Artem Pelenitsyn) Date: Thu, 22 Aug 2019 08:09:43 -0400 Subject: Stream Simon's report on HIW? Message-ID: Hello devs, Could someone at ICFP maybe stream Simon's report on the progress GHC made last year? I'm pretty sure many would like to check it out. Conference management streamed the main event but not the workshops. And workshops videos are such a pain: HIW 2018 is still not up on the ICFP youtube channel, although HIW 2017 and 2016 are: https://www.youtube.com/channel/UCwRL68qZFfub1Ep1EScfmBw/playlists (And Haskell 2018 playlist there has only 8 videos…) -- Kind regards, Artem -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sandy at sandymaguire.me Thu Aug 22 15:23:45 2019 From: sandy at sandymaguire.me (Sandy Maguire) Date: Thu, 22 Aug 2019 09:23:45 -0600 Subject: Getting a hole's relevant local binds? In-Reply-To: References: Message-ID: Following up on this, I've hacked in the changes locally, by setting `XVar GhcTc = [Name, Type]`, and filling it only for `HsVar`s that used to be `HsUnboundVar`s. The result is remarkable, as it allows for interactive proof search. I've got a proof of concept here: https://asciinema.org/a/FZjEIFzDoHBv741QDHfsU5cn8 I think the possibilities here warrant making the same change in HEAD. I'd be happy to send an MR if it seems likely to be merged. Sandy On Sat, Aug 17, 2019 at 6:27 PM Sandy Maguire wrote: > Hi all, > > I'm trying to get my hands on the relevant local binds (as reported by ghc > in the presence of a type hole) for editor tooling support. Tracing the > code suggests that these things come from the `TcLclEnv`, but afaict, all > remnants of `TcLclEnv` are thrown away by the time we get a > `TypecheckedModule`. > > Am I mistaken in this? If not, how receptive would y'all be to a patch > that puts the `TcLclEnv`, or something similar inside `XUnboundVar GhcTc`. > This way editors would have an easy means of getting their hand on whatever > is in scope at the site of a hole, without resorting to parsing error > messages. > > Cheers, > Sandy > > -- > I'm currently traveling the world, sleeping on people's couches and doing > full-time collaboration on Haskell projects. If this seems interesting to > you, please consider signing up as a host! > https://isovector.github.io/erdos/ > -- I'm currently traveling the world, sleeping on people's couches and doing full-time collaboration on Haskell projects. If this seems interesting to you, please consider signing up as a host! https://isovector.github.io/erdos/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Thu Aug 22 16:39:57 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Thu, 22 Aug 2019 18:39:57 +0200 Subject: Getting a hole's relevant local binds? In-Reply-To: References: Message-ID: Are you aware of hole fit plugins Sandy? Do they provide a nice API for you to use? This sounds like a cool and simple change anyway. What happens if you add this additional information using a source plugin or is that too late? Matt On Thu, Aug 22, 2019 at 5:24 PM Sandy Maguire wrote: > > Following up on this, I've hacked in the changes locally, by setting `XVar GhcTc = [Name, Type]`, and filling it only for `HsVar`s that used to be `HsUnboundVar`s. The result is remarkable, as it allows for interactive proof search. I've got a proof of concept here: https://asciinema.org/a/FZjEIFzDoHBv741QDHfsU5cn8 > > I think the possibilities here warrant making the same change in HEAD. I'd be happy to send an MR if it seems likely to be merged. > > Sandy > > > > On Sat, Aug 17, 2019 at 6:27 PM Sandy Maguire wrote: >> >> Hi all, >> >> I'm trying to get my hands on the relevant local binds (as reported by ghc in the presence of a type hole) for editor tooling support. Tracing the code suggests that these things come from the `TcLclEnv`, but afaict, all remnants of `TcLclEnv` are thrown away by the time we get a `TypecheckedModule`. >> >> Am I mistaken in this? If not, how receptive would y'all be to a patch that puts the `TcLclEnv`, or something similar inside `XUnboundVar GhcTc`. 
This way editors would have an easy means of getting their hand on whatever is in scope at the site of a hole, without resorting to parsing error messages. >> >> Cheers, >> Sandy >> >> -- >> I'm currently traveling the world, sleeping on people's couches and doing full-time collaboration on Haskell projects. If this seems interesting to you, please consider signing up as a host! https://isovector.github.io/erdos/ > > > > -- > I'm currently traveling the world, sleeping on people's couches and doing full-time collaboration on Haskell projects. If this seems interesting to you, please consider signing up as a host! https://isovector.github.io/erdos/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From matthewtpickering at gmail.com Thu Aug 22 16:41:55 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Thu, 22 Aug 2019 18:41:55 +0200 Subject: Stream Simon's report on HIW? In-Reply-To: References: Message-ID: Simon's talk will be at least streamed locally, so perhaps it's not that difficult to also stream it globally. Matt On Thu, Aug 22, 2019 at 2:10 PM Artem Pelenitsyn wrote: > > Hello devs, > > Could someone at ICFP maybe stream Simon's report on the progress GHC made last year? I'm pretty sure many would like to check it out. > > Conference management streamed the main event but not the workshops. And workshops videos are such a pain: HIW 2018 is still not up on the ICFP youtube channel, although HIW 2017 and 2016 are: > https://www.youtube.com/channel/UCwRL68qZFfub1Ep1EScfmBw/playlists > (And Haskell 2018 playlist there has only 8 videos…) > > -- > Kind regards, > Artem > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From strake888 at gmail.com Fri Aug 23 00:19:42 2019 From: strake888 at gmail.com (Matthew Farkas-Dyck) Date: Thu, 22 Aug 2019 16:19:42 -0800 Subject: `StablePtr` in `ST` In-Reply-To: References: Message-ID: My colleague responded but said it was rejected from ghc-devs. Message follows: On 21/08/2019, Jon Purdy wrote: > Our use case is unsafeCoercing a mutable reference to use as a key in an > IntMap. Our reasoning is that coercing an IORef/STRef is unsuitable because > the underlying MutVar# may move, invalidating the key (i.e., you cannot > safely coerce back if a GC has happened between insertion and reading). (If > that’s incorrect, do enlighten us!) This is a “very nice to have” for our > purposes—with the understanding that it’s wicked unsafe. ;) > > On Tue, Aug 20, 2019, 18:09 David Feuer wrote: > >> You also need to avoid inspecting the StablePtr itself, which is just a >> number, to maintain purity. The whole thing is a bit weird. Why do you >> want >> this anyway? >> >> On Wed, Aug 21, 2019, 7:39 AM David Feuer wrote: >> >>> So something like >>> >>> newtype StablePtr a = StablePtr (StablePtrST RealWorld a)? >>> >>> I suppose that could work with some discipline. You have to assume that >>> foreign code doesn't pick its address out of a hat and so something >>> silly, >>> but I guess you pretty much have to assume that anyway. >>> >>> On Wed, Aug 21, 2019, 5:00 AM Matthew Farkas-Dyck >>> wrote: >>> >>>> I have been doing some work where i want `StablePtr`, but also to not >>>> be confined to `IO`. 
I saw the following comment in >>>> "compiler/prelude/PrimOp.hs": >>>> >>>> Question: Why @RealWorld@ - won't any instance of @_ST@ do the job? >>>> [ADR] >>>> >>>> It has been there for 20 years. What is the answer? If it is safe i'll >>>> send the patch generalizing these operations. >>>> _______________________________________________ >>>> Glasgow-haskell-users mailing list >>>> Glasgow-haskell-users at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users >>>> >>> _______________________________________________ >> Glasgow-haskell-users mailing list >> Glasgow-haskell-users at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users >> > From jan at vanbruegge.de Fri Aug 23 16:02:59 2019 From: jan at vanbruegge.de (=?UTF-8?Q?Jan_van_Br=c3=bcgge?=) Date: Fri, 23 Aug 2019 18:02:59 +0200 Subject: Linker error when adding a new source file Message-ID: <60b332f0-7d19-d904-5e17-758e975f211d@vanbruegge.de> Hi, in order to clean up my code, I've moved a bunch of stuff to a new source file, `TcRowTys.hs` that works similar to `TcTypeNats.hs`. But when trying to compile a clean build of GHC, I get a linker error: ``` | Run Ghc LinkHs Stage0: _build/stage0/ghc/build/c/hschooks.o (and 1 more) => _build/stage0/bin/ghc _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(PrelInfo.o)(.text+0x2814): error: undefined reference to 'ghc_TcRowTys_rowTyCons_closure' _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(PrelInfo.o)(.data+0x578): error: undefined reference to 'ghc_TcRowTys_rowTyCons_closure' _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(TcHsType.o)(.data+0xdd8): error: undefined reference to 'ghc_TcRowTys_rnilTyCon_closure' collect2: Fehler: ld gab 1 als Ende-Status zurück `gcc' failed in phase `Linker'. (Exit code: 1) ``` I had a look at the Wiki including the FAQ, but did not fine anything about that topic. Does someone know what I have to do for this to work? Cheers, Jan From sandy at sandymaguire.me Fri Aug 23 16:04:27 2019 From: sandy at sandymaguire.me (Sandy Maguire) Date: Fri, 23 Aug 2019 10:04:27 -0600 Subject: Linker error when adding a new source file In-Reply-To: <60b332f0-7d19-d904-5e17-758e975f211d@vanbruegge.de> References: <60b332f0-7d19-d904-5e17-758e975f211d@vanbruegge.de> Message-ID: Sometimes I see this if I forget to add a file to the `exposed-modules` field of the cabal file. You might be running into that? On Fri, Aug 23, 2019 at 10:03 AM Jan van Brügge wrote: > Hi, > > in order to clean up my code, I've moved a bunch of stuff to a new > source file, `TcRowTys.hs` that works similar to `TcTypeNats.hs`. 
But > when trying to compile a clean build of GHC, I get a linker error: > > ``` > > | Run Ghc LinkHs Stage0: _build/stage0/ghc/build/c/hschooks.o (and 1 > more) => _build/stage0/bin/ghc > > _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(PrelInfo.o)(.text+0x2814): > error: undefined reference to 'ghc_TcRowTys_rowTyCons_closure' > > _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(PrelInfo.o)(.data+0x578): > error: undefined reference to 'ghc_TcRowTys_rowTyCons_closure' > > _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(TcHsType.o)(.data+0xdd8): > error: undefined reference to 'ghc_TcRowTys_rnilTyCon_closure' > collect2: Fehler: ld gab 1 als Ende-Status zurück > `gcc' failed in phase `Linker'. (Exit code: 1) > > ``` > > I had a look at the Wiki including the FAQ, but did not fine anything > about that topic. Does someone know what I have to do for this to work? > > Cheers, > Jan > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -- I'm currently traveling the world, sleeping on people's couches and doing full-time collaboration on Haskell projects. If this seems interesting to you, please consider signing up as a host! https://isovector.github.io/erdos/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Fri Aug 23 16:07:35 2019 From: allbery.b at gmail.com (Brandon Allbery) Date: Fri, 23 Aug 2019 12:07:35 -0400 Subject: Linker error when adding a new source file In-Reply-To: <60b332f0-7d19-d904-5e17-758e975f211d@vanbruegge.de> References: <60b332f0-7d19-d904-5e17-758e975f211d@vanbruegge.de> Message-ID: >From the looks of it, you're building with a bootstrap compiler (stage 0). Does the build compiler need to have this in its runtime libraries for the built compiler to work? This will require you to work it in in multiple versions, the first providing it without using it and the next using the provided one. On Fri, Aug 23, 2019 at 12:03 PM Jan van Brügge wrote: > Hi, > > in order to clean up my code, I've moved a bunch of stuff to a new > source file, `TcRowTys.hs` that works similar to `TcTypeNats.hs`. But > when trying to compile a clean build of GHC, I get a linker error: > > ``` > > | Run Ghc LinkHs Stage0: _build/stage0/ghc/build/c/hschooks.o (and 1 > more) => _build/stage0/bin/ghc > > _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(PrelInfo.o)(.text+0x2814): > error: undefined reference to 'ghc_TcRowTys_rowTyCons_closure' > > _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(PrelInfo.o)(.data+0x578): > error: undefined reference to 'ghc_TcRowTys_rowTyCons_closure' > > _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(TcHsType.o)(.data+0xdd8): > error: undefined reference to 'ghc_TcRowTys_rnilTyCon_closure' > collect2: Fehler: ld gab 1 als Ende-Status zurück > `gcc' failed in phase `Linker'. (Exit code: 1) > > ``` > > I had a look at the Wiki including the FAQ, but did not fine anything > about that topic. Does someone know what I have to do for this to work? 
> > Cheers, > Jan > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -- brandon s allbery kf8nh allbery.b at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgraf1337 at gmail.com Fri Aug 23 16:14:25 2019 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Fri, 23 Aug 2019 17:14:25 +0100 Subject: Linker error when adding a new source file In-Reply-To: References: <60b332f0-7d19-d904-5e17-758e975f211d@vanbruegge.de> Message-ID: I recently experienced this when rebasing. Have you tried a clean build? `rm -rf _build` was enough for me, IIRC. Am Fr., 23. Aug. 2019 um 17:08 Uhr schrieb Brandon Allbery < allbery.b at gmail.com>: > From the looks of it, you're building with a bootstrap compiler (stage 0). > Does the build compiler need to have this in its runtime libraries for the > built compiler to work? This will require you to work it in in multiple > versions, the first providing it without using it and the next using the > provided one. > > On Fri, Aug 23, 2019 at 12:03 PM Jan van Brügge wrote: > >> Hi, >> >> in order to clean up my code, I've moved a bunch of stuff to a new >> source file, `TcRowTys.hs` that works similar to `TcTypeNats.hs`. But >> when trying to compile a clean build of GHC, I get a linker error: >> >> ``` >> >> | Run Ghc LinkHs Stage0: _build/stage0/ghc/build/c/hschooks.o (and 1 >> more) => _build/stage0/bin/ghc >> >> _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(PrelInfo.o)(.text+0x2814): >> error: undefined reference to 'ghc_TcRowTys_rowTyCons_closure' >> >> _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(PrelInfo.o)(.data+0x578): >> error: undefined reference to 'ghc_TcRowTys_rowTyCons_closure' >> >> _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(TcHsType.o)(.data+0xdd8): >> error: undefined reference to 'ghc_TcRowTys_rnilTyCon_closure' >> collect2: Fehler: ld gab 1 als Ende-Status zurück >> `gcc' failed in phase `Linker'. (Exit code: 1) >> >> ``` >> >> I had a look at the Wiki including the FAQ, but did not fine anything >> about that topic. Does someone know what I have to do for this to work? >> >> Cheers, >> Jan >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > > > -- > brandon s allbery kf8nh > allbery.b at gmail.com > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgraf1337 at gmail.com Fri Aug 23 16:14:56 2019 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Fri, 23 Aug 2019 17:14:56 +0100 Subject: Linker error when adding a new source file In-Reply-To: References: <60b332f0-7d19-d904-5e17-758e975f211d@vanbruegge.de> Message-ID: Ah, you already tried a clean build. Nevermind... Am Fr., 23. Aug. 2019 um 17:14 Uhr schrieb Sebastian Graf < sgraf1337 at gmail.com>: > I recently experienced this when rebasing. Have you tried a clean build? > `rm -rf _build` was enough for me, IIRC. > > Am Fr., 23. Aug. 2019 um 17:08 Uhr schrieb Brandon Allbery < > allbery.b at gmail.com>: > >> From the looks of it, you're building with a bootstrap compiler (stage >> 0). 
Does the build compiler need to have this in its runtime libraries for >> the built compiler to work? This will require you to work it in in multiple >> versions, the first providing it without using it and the next using the >> provided one. >> >> On Fri, Aug 23, 2019 at 12:03 PM Jan van Brügge >> wrote: >> >>> Hi, >>> >>> in order to clean up my code, I've moved a bunch of stuff to a new >>> source file, `TcRowTys.hs` that works similar to `TcTypeNats.hs`. But >>> when trying to compile a clean build of GHC, I get a linker error: >>> >>> ``` >>> >>> | Run Ghc LinkHs Stage0: _build/stage0/ghc/build/c/hschooks.o (and 1 >>> more) => _build/stage0/bin/ghc >>> >>> _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(PrelInfo.o)(.text+0x2814): >>> error: undefined reference to 'ghc_TcRowTys_rowTyCons_closure' >>> >>> _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(PrelInfo.o)(.data+0x578): >>> error: undefined reference to 'ghc_TcRowTys_rowTyCons_closure' >>> >>> _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(TcHsType.o)(.data+0xdd8): >>> error: undefined reference to 'ghc_TcRowTys_rnilTyCon_closure' >>> collect2: Fehler: ld gab 1 als Ende-Status zurück >>> `gcc' failed in phase `Linker'. (Exit code: 1) >>> >>> ``` >>> >>> I had a look at the Wiki including the FAQ, but did not fine anything >>> about that topic. Does someone know what I have to do for this to work? >>> >>> Cheers, >>> Jan >>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> >> >> >> -- >> brandon s allbery kf8nh >> allbery.b at gmail.com >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan at vanbruegge.de Fri Aug 23 16:20:22 2019 From: jan at vanbruegge.de (=?UTF-8?Q?Jan_van_Br=c3=bcgge?=) Date: Fri, 23 Aug 2019 18:20:22 +0200 Subject: Linker error when adding a new source file In-Reply-To: References: <60b332f0-7d19-d904-5e17-758e975f211d@vanbruegge.de> Message-ID: <5ade3836-8e41-c012-0bba-ffa8fededf99@vanbruegge.de> Thanks Sandy, it seems like that did the job Am 23.08.19 um 18:04 schrieb Sandy Maguire: > Sometimes I see this if I forget to add a file to the > `exposed-modules` field of the cabal file. You might be running into that? > > On Fri, Aug 23, 2019 at 10:03 AM Jan van Brügge > wrote: > > Hi, > > in order to clean up my code, I've moved a bunch of stuff to a new > source file, `TcRowTys.hs` that works similar to `TcTypeNats.hs`. 
But > when trying to compile a clean build of GHC, I get a linker error: > > ``` > > | Run Ghc LinkHs Stage0: _build/stage0/ghc/build/c/hschooks.o (and 1 > more) => _build/stage0/bin/ghc > _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(PrelInfo.o)(.text+0x2814): > error: undefined reference to 'ghc_TcRowTys_rowTyCons_closure' > _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(PrelInfo.o)(.data+0x578): > error: undefined reference to 'ghc_TcRowTys_rowTyCons_closure' > _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(TcHsType.o)(.data+0xdd8): > error: undefined reference to 'ghc_TcRowTys_rnilTyCon_closure' > collect2: Fehler: ld gab 1 als Ende-Status zurück > `gcc' failed in phase `Linker'. (Exit code: 1) > > ``` > > I had a look at the Wiki including the FAQ, but did not fine anything > about that topic. Does someone know what I have to do for this to > work? > > Cheers, > Jan > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > -- > I'm currently traveling the world, sleeping on people's couches and > doing full-time collaboration on Haskell projects. If this seems > interesting to you, please consider signing up as a > host! https://isovector.github.io/erdos/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From alp at well-typed.com Fri Aug 23 16:27:48 2019 From: alp at well-typed.com (Alp Mestanogullari) Date: Fri, 23 Aug 2019 18:27:48 +0200 Subject: Linker error when adding a new source file In-Reply-To: <5ade3836-8e41-c012-0bba-ffa8fededf99@vanbruegge.de> References: <60b332f0-7d19-d904-5e17-758e975f211d@vanbruegge.de> <5ade3836-8e41-c012-0bba-ffa8fededf99@vanbruegge.de> Message-ID: <488fe5a5-73c6-cb0c-23df-aca58934cb2d@well-typed.com> Right, Hadrian knows which modules to compile and link by looking at the .cabal files -- so when you omit a module from the .cabal file Hadrian will fail to build the corresponding library or executable correctly. On 23/08/2019 18:20, Jan van Brügge wrote: > > Thanks Sandy, it seems like that did the job > > Am 23.08.19 um 18:04 schrieb Sandy Maguire: >> Sometimes I see this if I forget to add a file to the >> `exposed-modules` field of the cabal file. You might be running into >> that? >> >> On Fri, Aug 23, 2019 at 10:03 AM Jan van Brügge > > wrote: >> >> Hi, >> >> in order to clean up my code, I've moved a bunch of stuff to a new >> source file, `TcRowTys.hs` that works similar to `TcTypeNats.hs`. But >> when trying to compile a clean build of GHC, I get a linker error: >> >> ``` >> >> | Run Ghc LinkHs Stage0: _build/stage0/ghc/build/c/hschooks.o (and 1 >> more) => _build/stage0/bin/ghc >> _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(PrelInfo.o)(.text+0x2814): >> error: undefined reference to 'ghc_TcRowTys_rowTyCons_closure' >> _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(PrelInfo.o)(.data+0x578): >> error: undefined reference to 'ghc_TcRowTys_rowTyCons_closure' >> _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(TcHsType.o)(.data+0xdd8): >> error: undefined reference to 'ghc_TcRowTys_rnilTyCon_closure' >> collect2: Fehler: ld gab 1 als Ende-Status zurück >> `gcc' failed in phase `Linker'. 
(Exit code: 1) >> >> ``` >> >> I had a look at the Wiki including the FAQ, but did not fine anything >> about that topic. Does someone know what I have to do for this to >> work? >> >> Cheers, >> Jan >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> >> >> -- >> I'm currently traveling the world, sleeping on people's couches and >> doing full-time collaboration on Haskell projects. If this seems >> interesting to you, please consider signing up as a host! >> https://isovector.github.io/erdos/ > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- Alp Mestanogullari, Haskell Consultant Well-Typed LLP, https://www.well-typed.com/ Registered in England and Wales, OC335890 118 Wymering Mansions, Wymering Road, London, W9 2NF, England -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Aug 23 18:39:58 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 23 Aug 2019 18:39:58 +0000 Subject: Typechecker plugins and BuiltInSynFamily In-Reply-To: References: Message-ID: Alexis What you suggest sounds like an excellent idea. For some plugins, being able to extend the range of “built in” families might be all the plugin needs to do! (For others it might need to do some solving too.) I’ve thought a bit about how to achieve this: * The most direct thing would be for the solver, when initialised (via tcPluginInit), to return a [(Name, BuiltinSynFamily)] pairs as well as the solver and plugin-stop action. See Note 1 below. * Then the typechecker can keep (in tcg_plugins) a little (NameEnv BuiltinSynFamily), gotten by accumulating all the lists from all the plugins. * Now, in three places we need to look up in that env: * TcInteract.improveLocalFunEqs – see the use of sfInteractInert * TcInteract.improve_top_fun_eqs – see the use of sfInteractTop * FamInstEnv.reduceTyFamApp_maybe – see the use of isBuildintSynFamTyCon_maybe The easiest thing might be to extend the type FamInstEnvs to be a triple, with the (NameEnv BuiltinSymFamily) as one of the components. That would make it possible for a plugin to “register” any number of type families as having a BuiltinSynFamily, which allows all the type-family matching and reduction to be done by code written by the plugin author. (Maybe “built in” is a misnomer... perhaps “magic” or something would be less misleading.) I don’t think this would be hard, and for the things it worked for it’d be much much better, I think. If anyone wants to have a go, I’m happy to advise. NB: if we do this for type families, we should probably do it for * Type classes (so the a plugin could implement new class-instance behaviour that needs code; c.f. the current ClsInst.matchGlobalInst which dispatches off to functions that handle the current built-in cases: Typeable, Coercible, HasField, etc. * CoreRules (so that a plugin could could add new BuitinRules) Simon Note 1. TcPlugin is defined oddly: data TcPlugin = forall s. TcPlugin { tcPluginInit :: TcPluginM s -- ^ Initialize plugin, when entering type-checker. , tcPluginSolve :: s -> TcPluginSolver -- ^ Solve some constraints. -- TODO: WRITE MORE DETAILS ON HOW THIS WORKS. , tcPluginStop :: s -> TcPluginM () -- ^ Clean up after the plugin, when exiting the type-checker. } What is that bizarre existential ‘s’? 
All we can do is to apply tcPluginSolve and tcPluginStop to it, which happens immediately, in TcRnDriver.withTcPlugins. So I think it’d make more sense and simpler thus type TcPlugin = TcM (TcPluginSolver, TcM ()) So, run that TcM action to initialise it; the action returns a solver, and a stop action. Neither initialisation nor stop action need to access that EvBindsVar. You could make that tuple into a record with named fields. From: ghc-devs On Behalf Of Alexis King Sent: 20 August 2019 09:17 To: ghc-devs at haskell.org Subject: Typechecker plugins and BuiltInSynFamily Hello all, As I’ve been dabbling with typechecker plugins, I’ve found myself primarily using them to define new “magic” type families, and I don’t think I’m alone—Sandy Maguire recently released the magic-tyfams package for precisely that purpose. However, I can’t help but notice that GHC already has good internal support for such type families via BuiltInSynFamily and CoAxiomRule, which are mostly used to implement operations on Nats. As a plugin author, I would love to be able to use that functionality directly instead of being forced to reimplement it myself, for two big reasons: 1. AxiomRuleCo provides significantly more safety from -dcore-lint than UnivCo, but UnivCo is currently the only way to provide evidence for plugin-solved families. 2. The sfInteractTop and sfInteractInert fields of BuiltInSynFamily make it easy to support improvement for custom type families, which I believe would take a non-trivial amount of tricky code to get right using the current typechecker plugin API. Given the above, I started wondering if it is possible to define a BuiltInSynFamily from inside a plugin or, failing that, to modify GHC to expose that functionality to typechecker plugin authors. I am not familiar with GHC’s internals, but in my brief reading of the source code, the following two things seem like the trickiest obstacles: 1. BuiltInSynFamily TyCons need to be injected into the initial name cache, since otherwise those names will get resolved to their ordinary, non-built-in counterparts (e.g. the ordinary open type families defined in GHC.TypeLits). 2. Since CoAxiomRule values actually have functions inside them, they can’t be serialized into interface files. Therefore, it looks like GHC currently maintains a hardcoded list of all the known CoAxiomRules, and tcIfaceCoAxiomRule just searches for a value in that list using a well-known (i.e. not in any way namespaced!) string. I am not knowledgable enough about GHC to say how hard overcoming either of those issues would be. Point 1 seems possible to achieve by arranging for plugins to export the built-in names they want to define and propagating those to the initial name cache, but I don’t know enough about how plugins are loaded to know if that would create any possible circular dependencies (i.e. does the name cache need to already exist in order to load a plugin in the first place?). Point 2 seems harder. My gut instinct is that it could probably be overcome by somehow storing a reference to the plugin that defined the CoAxiomRule in the interface file (by, for example, storing its package-qualified module name), but I’m not immediately certain when that reference should be resolved to the actual CoAxiomRule value. It also presumably needs to complain if the necessary plugin is not actually loaded when the CoAxiomRule needs to be resolved! 
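For reference, the hardcoded lookup mentioned above (tcIfaceCoAxiomRule) is essentially a single map keyed by the rule's name. A shape-only sketch, paraphrased from memory of the current TcTypeNats and TcIface, so the exact names and error handling may differ slightly:

```
typeNatCoAxiomRules :: Map.Map FastString CoAxiomRule   -- lives in TcTypeNats:
                                                        -- all built-in rules, keyed by coaxrName

tcIfaceCoAxiomRule :: IfLclName -> IfL CoAxiomRule      -- lives in TcIface:
tcIfaceCoAxiomRule n =                                  -- resolve a rule by name lookup only
  case Map.lookup n typeNatCoAxiomRules of
    Just rule -> return rule
    Nothing   -> pprPanic "tcIfaceCoAxiomRule" (ppr n)
```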
I’m willing to try my hand at experimenting with an implementation of this if someone can give me a couple pointers on where to start on the above two issues (assuming people don’t think it’s a bad idea to do it at all). Any advice would be appreciated! Thanks, Alexis -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Sat Aug 24 20:00:41 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Sat, 24 Aug 2019 21:00:41 +0100 Subject: Tip for profiling GHC Message-ID: Hi all, I thought for a long while that if you wanted to profile GHC itself then you had to build a profiled version of HEAD. However, this isn't true. The easiest way to profile GHC itself is to write a simple GHC API application and compile that with profiling. Normal GHC distributions come with profiling versions of the ghc library so you can use this to profile released versions of the compiler. For example, here is a simple program that Andreas created when I suggested he did this. https://gist.github.com/mpickering/e13f343f2b35b51693d15582180b1c02 You can specify whatever options you want to start the session, you might imagine compiling a package with cabal and copying the command line flags from there in order to set up a session correctly. Cheers, Matt From ml at stefansf.de Sun Aug 25 10:39:03 2019 From: ml at stefansf.de (Stefan Schulze Frielinghaus) Date: Sun, 25 Aug 2019 12:39:03 +0200 Subject: Cabal reports mismatched interface file ways dyn In-Reply-To: <20190711090655.GA1481@dyn-9-152-222-29.boeblingen.de.ibm.com> References: <20190711090655.GA1481@dyn-9-152-222-29.boeblingen.de.ibm.com> Message-ID: <20190825103903.GA8670@localhost.localdomain> I'm resurrecting an old thread because I thought that this might also be helpful for others. The problem I encountered was a result of old build artifacts which didn't get cleaned up. Assume you have build GHC before and now you changed something e.g. the LLVM compiler. Thus the build system won't detect automatically that you have to recompile and therefore you want to go for a full build by doing a clean in the first place. $ make maintainer-clean $ git status --ignored ... Ignored files: libraries/ghc-heap/dist-boot ... The maintainer-clean target does not remove all build artifacts. Thus it is more save to perform a `git clean -dfx`. In my case this solved the problem. Cheers, Stefan From ben at well-typed.com Mon Aug 26 09:25:12 2019 From: ben at well-typed.com (Ben Gamari) Date: Mon, 26 Aug 2019 05:25:12 -0400 Subject: [ANNOUNCE] GHC 8.8.1 is now available Message-ID: <87zhjwl1mx.fsf@smart-cactus.org> Hello everyone, The GHC team is pleased to announce the release candidate for GHC 8.8.1. The source distribution, binary distributions, and documentation are available at https://downloads.haskell.org/ghc/8.8.1 This release is the culmination of over 3000 commits by over one hundred contributors and has several new features and numerous bug fixes relative to GHC 8.6: * Visible kind applications are now supported (GHC Proposal #15) * Profiling now works correctly on 64-bit Windows (although still may be problematic on 32-bit Windows due to platform limitations; see #15934) * A new code layout algorithm for amd64's native code generator significantly improving the runtime performance of some kernels * The introduction of a late lambda-lifting pass which may reduce allocations significantly for some programs. 
* Further work on Trees That Grow, enabling improved code re-use of the Haskell AST in tooling * Users can write `forall` in more contexts (GHC Proposal #7) * The pattern-match checker is now more precise in the presence of strict fields with uninhabited types. * A comprehensive audit of GHC's memory ordering barriers has been performed, resulting in a number of fixes that should significantly improve the reliability of programs on architectures with weakly-ordered memory models (e.g. PowerPC, many ARM and AArch64 implementations). * A long-standing linker limitation rendering GHCi unusable with projects with cyclic symbol dependencies has been fixed (#13786) * Further work on the Hadrian build system * Countless miscellaneous bug-fixes Unfortunately, due to a build issue (#17108) found late in the release process i386 Windows builds are currently unavailable. These will be provided in the coming weeks. As always, if anything looks amiss do let us know. Happy compiling! Cheers, - Ben [1] https://downloads.haskell.org/ghc/8.8.1/docs/html/users_guide/8.8.1-notes.html -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From sam.halliday at gmail.com Tue Aug 27 14:11:09 2019 From: sam.halliday at gmail.com (Sam Halliday) Date: Tue, 27 Aug 2019 15:11:09 +0100 Subject: seeing home module dependencies Message-ID: <87zhjuog02.fsf@gmail.com> Hello all, I am writing a tool that uses the ghc api. It works great when I invoke it on user library code but not for test (and executable) components. Consider a very simple user project with this layout lib/Foo.hs lib/Bar.hs test/FooTest.hs test/TestUtils.hs Let's say that Foo imports from Bar and FooTest imports from TestUtils. To start my tool, which for all intents and purposes is just loading a file into ghc, I obtain the ghc flags from cabal's v2-repl (this is the hie-bios hack). I've been sure to make sure that I grab the flags associated to the relevant component. I feel that I need to justify why I'm doing it this way instead of cabal envfiles: they do not include language extensions and other things of that nature, and don't separate library/test/etc. Let's say the user has compiled the project, with tests enabled. The flags for the library might look something like ... -this-unit-id example-1.0-inplace -hide-all-packages -Wmissing-home-modules -no-user-package-db -package-db -/home/me/.cabal/store/ghc-8.6.5/package.db -package-db -/home/me/example/dist-newstyle/packagedb/ghc-8.6.5 --package-db /home/me/example/dist-newstyle/build/x86_64-linux/ghc-8.6.5/example-1.0/noopt/package.conf.inplace -package-id transformers-0.5.6.2 ... which all makes sense. If I use these flags then loading Foo.hs will fail because it cannot find Bar. These modules are needed for compilation but not listed in your .cabal file's other-modules: Bar Of course I could suppress this by ignoring warnings, but I want to see those modules.... so I have a workaround! I hacked it by changing the -this-unit-id to a -package-id and it all works great because the compiler already has the interface files for Bar in the packagedb. My tool doesn't write any object files, so I think this is safe. However, when I come to running my tool on test directories, there is no -this-unit-id from v2-repl. I know what the package-id is (from Cabal's plan.json) but it doesn't exist in my packagedb, so I can't re-use my hack. 
These modules are needed for compilation but not listed in your .cabal file's other-modules: TestUtils And I'm back where I started. So my question is: which flags do I need to provide to the ghc api so that a module can see the interfaces of other modules in the same component? I'm guessing my -this-unit-id hack is horrible and I shouldn't be doing that. What is the correct thing to do? -- Best regards, Sam -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 194 bytes Desc: not available URL: From klebinger.andreas at gmx.at Tue Aug 27 16:10:30 2019 From: klebinger.andreas at gmx.at (Andreas Klebinger) Date: Tue, 27 Aug 2019 18:10:30 +0200 Subject: GHC: Policy on -O flags? Message-ID: Hello ghc-devs and haskell users. I'm looking for opinions on when an optimization should be enabled by default. -O is currently the base line for an optimized build. -O2 adds around 10-20% compile time for a few % (around 2% if I remember correctly) in performance for most things. The question is now if I implement a new optimization, making code R% faster but slowing down the compiler down by C% at which point should an optimization be: * Enabled by default (-O) * Enabled only at -O2 * Disabled by default Cheap always beneficial things make sense for -O Expensive optimizations which add little make sense for -O2 But where exactly is the line here? How much compile time is runtime worth? If something slows down the compiler by 1%/2%/5% and speeds up code by 0.5%/1%/2% which combinations make sense for -O, -O2? Can there even be a good policy with the -O/-O2 split? Personally I generally want code to either: * Typecheck/Run at all (-O0, -fno-code, repl) * Not blow through all my RAM when adding a few Ints while developing: -O ? * Make a reasonable tradeoff between runtime/compiletime: -O ? * Give me all you got: -O2 (-O99999) The use case for -O0 is rather clear, so is -O2. But what do people consider the use case for -O What trade offs seem acceptable to you as a user of GHC? Is it ok for -O to become slower for faster runtimes? How much slower? Should all new improvements which might slow down compilation be pushed to -O2? Or does an ideal solution add new flags? Tell me what do you think. Cheers, Andreas Klebinger From allbery.b at gmail.com Tue Aug 27 16:12:47 2019 From: allbery.b at gmail.com (Brandon Allbery) Date: Tue, 27 Aug 2019 12:12:47 -0400 Subject: GHC: Policy on -O flags? In-Reply-To: References: Message-ID: I think at first you just give it a -f flag, and let experience determine whether it should be part of -O or -O2. On Tue, Aug 27, 2019 at 12:10 PM Andreas Klebinger wrote: > Hello ghc-devs and haskell users. > > I'm looking for opinions on when an optimization should be enabled by > default. > > -O is currently the base line for an optimized build. > -O2 adds around 10-20% compile time for a few % (around 2% if I remember > correctly) in performance for most things. > > The question is now if I implement a new optimization, making code R% > faster but slowing > down the compiler down by C% at which point should an optimization be: > > * Enabled by default (-O) > * Enabled only at -O2 > * Disabled by default > > Cheap always beneficial things make sense for -O > Expensive optimizations which add little make sense for -O2 > > But where exactly is the line here? > How much compile time is runtime worth? 
> > If something slows down the compiler by 1%/2%/5% > and speeds up code by 0.5%/1%/2% which combinations make sense > for -O, -O2? > > Can there even be a good policy with the -O/-O2 split? > > Personally I generally want code to either: > * Typecheck/Run at all (-O0, -fno-code, repl) > * Not blow through all my RAM when adding a few Ints while developing: -O ? > * Make a reasonable tradeoff between runtime/compiletime: -O ? > * Give me all you got: -O2 (-O99999) > > The use case for -O0 is rather clear, so is -O2. > But what do people consider the use case for -O > > What trade offs seem acceptable to you as a user of GHC? > > Is it ok for -O to become slower for faster runtimes? How much slower? > Should all new improvements which might slow down compilation > be pushed to -O2? > > Or does an ideal solution add new flags? > Tell me what do you think. > > Cheers, > Andreas Klebinger > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -- brandon s allbery kf8nh allbery.b at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Tue Aug 27 16:17:38 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 27 Aug 2019 17:17:38 +0100 Subject: [Haskell-cafe] GHC: Policy on -O flags? In-Reply-To: References: Message-ID: `-O2` is not a very rationally considered flag from what I understand. It only enables `-fspec-constr` and `-fliberate-case`. The later also triggers another simplification pass. `-fspec-constr` is quite limited as it only works in the definition module. I have never trusted it to optimise my code. I have seen `-fliberate-case` achieve something but more often the improvement comes from the extra simplification pass. Therefore I wouldn't try to read the tea leaves too closely. There are probably flags in `-O` which affect compile time more but have less benefit. Matt On Tue, Aug 27, 2019 at 5:13 PM Brandon Allbery wrote: > > I think at first you just give it a -f flag, and let experience determine whether it should be part of -O or -O2. > > On Tue, Aug 27, 2019 at 12:10 PM Andreas Klebinger wrote: >> >> Hello ghc-devs and haskell users. >> >> I'm looking for opinions on when an optimization should be enabled by >> default. >> >> -O is currently the base line for an optimized build. >> -O2 adds around 10-20% compile time for a few % (around 2% if I remember >> correctly) in performance for most things. >> >> The question is now if I implement a new optimization, making code R% >> faster but slowing >> down the compiler down by C% at which point should an optimization be: >> >> * Enabled by default (-O) >> * Enabled only at -O2 >> * Disabled by default >> >> Cheap always beneficial things make sense for -O >> Expensive optimizations which add little make sense for -O2 >> >> But where exactly is the line here? >> How much compile time is runtime worth? >> >> If something slows down the compiler by 1%/2%/5% >> and speeds up code by 0.5%/1%/2% which combinations make sense >> for -O, -O2? >> >> Can there even be a good policy with the -O/-O2 split? >> >> Personally I generally want code to either: >> * Typecheck/Run at all (-O0, -fno-code, repl) >> * Not blow through all my RAM when adding a few Ints while developing: -O ? >> * Make a reasonable tradeoff between runtime/compiletime: -O ? >> * Give me all you got: -O2 (-O99999) >> >> The use case for -O0 is rather clear, so is -O2. 
>> But what do people consider the use case for -O >> >> What trade offs seem acceptable to you as a user of GHC? >> >> Is it ok for -O to become slower for faster runtimes? How much slower? >> Should all new improvements which might slow down compilation >> be pushed to -O2? >> >> Or does an ideal solution add new flags? >> Tell me what do you think. >> >> Cheers, >> Andreas Klebinger >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > -- > brandon s allbery kf8nh > allbery.b at gmail.com > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. From sgraf1337 at gmail.com Tue Aug 27 16:21:45 2019 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Tue, 27 Aug 2019 17:21:45 +0100 Subject: GHC: Policy on -O flags? In-Reply-To: References: Message-ID: Hi, I used to think that the policy for being eligible for -O1 is that C must be non-positive, e.g. that the compile times don't suffer at all. Everything beyond that (well, given that R is positive) should be -O2 only. There's precedent at least for Late Lambda Lifting (which is only run for -O2) here: https://phabricator.haskell.org/D5224#147959. Upon re-reading I see that Simon Marlow identified C=1 as the hard threshold. Maybe there are other cases as well? Personally, I like C=0 for the fact that it means the compiler will only get faster over time. And any reasonably tuned release executable will do -O2 anyway. Cheers, Sebastian Am Di., 27. Aug. 2019 um 17:11 Uhr schrieb Andreas Klebinger < klebinger.andreas at gmx.at>: > Hello ghc-devs and haskell users. > > I'm looking for opinions on when an optimization should be enabled by > default. > > -O is currently the base line for an optimized build. > -O2 adds around 10-20% compile time for a few % (around 2% if I remember > correctly) in performance for most things. > > The question is now if I implement a new optimization, making code R% > faster but slowing > down the compiler down by C% at which point should an optimization be: > > * Enabled by default (-O) > * Enabled only at -O2 > * Disabled by default > > Cheap always beneficial things make sense for -O > Expensive optimizations which add little make sense for -O2 > > But where exactly is the line here? > How much compile time is runtime worth? > > If something slows down the compiler by 1%/2%/5% > and speeds up code by 0.5%/1%/2% which combinations make sense > for -O, -O2? > > Can there even be a good policy with the -O/-O2 split? > > Personally I generally want code to either: > * Typecheck/Run at all (-O0, -fno-code, repl) > * Not blow through all my RAM when adding a few Ints while developing: -O ? > * Make a reasonable tradeoff between runtime/compiletime: -O ? > * Give me all you got: -O2 (-O99999) > > The use case for -O0 is rather clear, so is -O2. > But what do people consider the use case for -O > > What trade offs seem acceptable to you as a user of GHC? > > Is it ok for -O to become slower for faster runtimes? How much slower? > Should all new improvements which might slow down compilation > be pushed to -O2? > > Or does an ideal solution add new flags? > Tell me what do you think. 
> > Cheers, > Andreas Klebinger > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sam.halliday at gmail.com Tue Aug 27 17:35:03 2019 From: sam.halliday at gmail.com (Sam Halliday) Date: Tue, 27 Aug 2019 18:35:03 +0100 Subject: seeing home module dependencies In-Reply-To: <87zhjuog02.fsf@gmail.com> References: <87zhjuog02.fsf@gmail.com> Message-ID: <87zhju1ph4.fsf@gmail.com> Rahul Muttineni has given me some guidance. I wasn't loading all the module names that are provided as arguments to ghc. Now I am. I'm now able to load my file and do the analysis I want. But it is *much* slower than when I treat the home modules as a package. I presume it is slower because lots of modules are now being loaded instead of just the one I care about. I'm even seeing errors such as "Variable not in scope" in some of the modules that I don't care about (loading seems to do a lot of work) and presumably compile errors in downstream modules will impact me as well. 1. Is there a way to tell ghc to prefer interface files instead of source files, if it finds them, for a TargetModule? 2. Is it possible to prune the list of targets such that it is only the file I am interested in and its dependencies? Ideally I'd like to load just the file I care about and have everything else automatically load. PS: another hack that we've been discussing is to create and register an ad-hoc packagedb for test (and executable) components. It doesn't seem to be trivial to do this, unless cabal-install has a hidden command that can create conf files for an arbitrary component. The advantage here is that I can load only the file I care about, and ghc takes care of automatically loading all the dependencies (even if they are in the same package). Of course, those dependencies must have been compiled but that's the UX I'm going for. Sam Halliday writes: > Hello all, > > I am writing a tool that uses the ghc api. It works great when I invoke > it on user library code but not for test (and executable) components. > > Consider a very simple user project with this layout > > lib/Foo.hs > lib/Bar.hs > test/FooTest.hs > test/TestUtils.hs > > Let's say that Foo imports from Bar and FooTest imports from TestUtils. > > To start my tool, which for all intents and purposes is just loading a > file into ghc, I obtain the ghc flags from cabal's v2-repl (this is the > hie-bios hack). I've been sure to make sure that I grab the flags > associated to the relevant component. I feel that I need to justify why > I'm doing it this way instead of cabal envfiles: they do not include > language extensions and other things of that nature, and don't separate > library/test/etc. > > > Let's say the user has compiled the project, with tests enabled. > > The flags for the library might look something like > > ... > -this-unit-id example-1.0-inplace -hide-all-packages > -Wmissing-home-modules -no-user-package-db > -package-db -/home/me/.cabal/store/ghc-8.6.5/package.db > -package-db -/home/me/example/dist-newstyle/packagedb/ghc-8.6.5 > --package-db /home/me/example/dist-newstyle/build/x86_64-linux/ghc-8.6.5/example-1.0/noopt/package.conf.inplace > -package-id transformers-0.5.6.2 > ... > > which all makes sense. If I use these flags then loading Foo.hs will > fail because it cannot find Bar. 
> > These modules are needed for compilation but not > listed in your .cabal file's other-modules: > Bar > > Of course I could suppress this by ignoring warnings, but I want to see > those modules.... so I have a workaround! I hacked it by changing the > -this-unit-id to a -package-id and it all works great because the > compiler already has the interface files for Bar in the packagedb. My > tool doesn't write any object files, so I think this is safe. > > > However, when I come to running my tool on test directories, there is no > -this-unit-id from v2-repl. I know what the package-id is (from Cabal's > plan.json) but it doesn't exist in my packagedb, so I can't re-use my > hack. > > These modules are needed for compilation but not > listed in your .cabal file's other-modules: > TestUtils > > And I'm back where I started. > > So my question is: which flags do I need to provide to the ghc api so > that a module can see the interfaces of other modules in the same > component? I'm guessing my -this-unit-id hack is horrible and I > shouldn't be doing that. What is the correct thing to do? > > > -- > Best regards, > Sam -- Best regards, Sam -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 194 bytes Desc: not available URL: From sam.halliday at gmail.com Tue Aug 27 17:39:41 2019 From: sam.halliday at gmail.com (Sam Halliday) Date: Tue, 27 Aug 2019 18:39:41 +0100 Subject: seeing home module dependencies In-Reply-To: <87zhju1ph4.fsf@gmail.com> References: <87zhjuog02.fsf@gmail.com> <87zhju1ph4.fsf@gmail.com> Message-ID: <877e6yzew2.fsf@gmail.com> Please disregard the second question, I forgot about LoadUpTo. Doh!> > 1. Is there a way to tell ghc to prefer interface files instead of > source files, if it finds them, for a TargetModule? > > 2. Is it possible to prune the list of targets ... -- Best regards, Sam -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 194 bytes Desc: not available URL: From dxld at darkboxed.org Tue Aug 27 17:48:41 2019 From: dxld at darkboxed.org (Daniel =?iso-8859-1?Q?Gr=F6ber?=) Date: Tue, 27 Aug 2019 19:48:41 +0200 Subject: seeing home module dependencies In-Reply-To: <87zhju1ph4.fsf@gmail.com> References: <87zhjuog02.fsf@gmail.com> <87zhju1ph4.fsf@gmail.com> Message-ID: <20190827174841.GA21682@Eli.lan> Hi, On Tue, Aug 27, 2019 at 06:35:03PM +0100, Sam Halliday wrote: > 1. Is there a way to tell ghc to prefer interface files instead of > source files, if it finds them, for a TargetModule? I think targetAllowObjCode=True [1] should do that. If you're using gussTarget and your module name doesn't have a '*' at the beginning that should be the default though. [1]: https://hackage.haskell.org/package/ghc-8.6.5/docs/GHC.html#v:targetAllowObjCode --Daniel From sam.halliday at gmail.com Tue Aug 27 17:56:39 2019 From: sam.halliday at gmail.com (Sam Halliday) Date: Tue, 27 Aug 2019 18:56:39 +0100 Subject: seeing home module dependencies In-Reply-To: <20190827174841.GA21682@Eli.lan> References: <87zhjuog02.fsf@gmail.com> <87zhju1ph4.fsf@gmail.com> <20190827174841.GA21682@Eli.lan> Message-ID: <874l22ze3s.fsf@gmail.com> Ha! I knew about that flag but thought it was talking about OUTPUT object code. Thank you so much, I will try that now. Daniel Gröber writes: > Hi, > > On Tue, Aug 27, 2019 at 06:35:03PM +0100, Sam Halliday wrote: >> 1. 
Is there a way to tell ghc to prefer interface files instead of >> source files, if it finds them, for a TargetModule? > > I think targetAllowObjCode=True [1] should do that. If you're using > gussTarget and your module name doesn't have a '*' at the beginning > that should be the default though. > > [1]: https://hackage.haskell.org/package/ghc-8.6.5/docs/GHC.html#v:targetAllowObjCode > > --Daniel -- Best regards, Sam -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 194 bytes Desc: not available URL: From jan at vanbruegge.de Tue Aug 27 22:57:29 2019 From: jan at vanbruegge.de (=?UTF-8?Q?Jan_van_Br=c3=bcgge?=) Date: Wed, 28 Aug 2019 00:57:29 +0200 Subject: A few questions about BuiltInSynFamily/CoAxiomRules Message-ID: <79adfdec-31f7-360a-0637-6bbedf4d0c35@vanbruegge.de> Hi lovely people, sorry if my recent emails are getting annoying. In the last few days I refactored my code to use `BuiltInSynFamily` and `CoAxiomRule` to replace what was very ad-hoc code. So far so easy. But I have a few questions due to sparse documentation. First, about `BuiltInSynFamily`. It is a record of three functions. From what I can tell by looking at `TcTypeNats`, the two `interact` functions are used to solve the argument parts of builtin families based on known results. `interactTop` seems to simply constraints on their own, `interactInert` seems to simplify based on being given two similar contraints. By big questions is what exactly `matchFam` does. The argument seems to be the arguments to the type family, but the tuple in the result is less clear. The axiom rule is the proof witness, the second argument I guess is the type arguments you actually care about? What is this used for? The last one should be the result type. Attached to that, what are the garantuees about the list of types that you get? I assumed at first they were all flattened, but my other type family is not. I am talking about this piece of code here: ``` matchFamRnil :: [Type] -> Maybe (CoAxiomRule, [Type], Type) matchFamRnil [] = Just (axRnilDef, [], mkRowTy k v [])     where binders = mkTemplateKindTyConBinders [liftedTypeKind, liftedTypeKind]           [k, v] = map (mkTyVarTy . binderVar) binders matchFamRnil _ = Nothing matchFamRowExt :: [Type] -> Maybe (CoAxiomRule, [Type], Type) matchFamRowExt [_k, _v, x, y, row@(RowTy (MkRowTy k v flds))] = Just (axRowExtDef, [x, y, row], RowTy (MkRowTy k v $ (x, y):flds)) matchFamRowExt [k, v, x, y, _rnil] = Just (axRowExtRNilDef, [k, v], RowTy (MkRowTy k v [(x, y)])) matchFamRowExt _ = Nothing ``` I needed an extra `_rnil` case  in `matchFamRowExt` because `RowExt "foo" Int RNil` did not match the first pattern match (the dumped list I got was `[Symbol, Type, "foo", Int, RNil]`). Also, is there a better way to conjure up polykinded variables out of the blue or is this fine? I thought about leaving off that info from the type, but then I would have the same question when trying to implement `typeKind` or `tcTypeKind` (for a non-empty row I can use the kinds of the head of the list, for an empty one I would need polykinded variables) My last question is about CoAxiomRules. The note says that they represent "forall as. (r1 ~ r2, s1 ~ s2) => t1 ~ t2", but it is not clear for me in what relation the constraints on the left are with the right. From the typeNats it looks like `t1` is the type family applied to the `_1` arguments and `t2` is the calculated result using the `_2` arguments. 
Why are we not getting just a list of types as inputs? Is the `_1` always a unification/type variable and not really a type you can use to calculate stuff? Also, if I extrapolate my observations to type families without arguments, I would assume that I do not have constraints on the left hand side as `t1` would be the family appied to the arguments (none) and `t2` would be the calculated result from the `_2` args (I do not need anything to return an empty row). Is this correct or am I horribly wrong? Thanks for your time listening to my questions, maybe I can open a PR with a bit of documentation with eventual answers. Cheers, Jan From iavor.diatchki at gmail.com Wed Aug 28 00:12:27 2019 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Tue, 27 Aug 2019 17:12:27 -0700 Subject: A few questions about BuiltInSynFamily/CoAxiomRules In-Reply-To: <79adfdec-31f7-360a-0637-6bbedf4d0c35@vanbruegge.de> References: <79adfdec-31f7-360a-0637-6bbedf4d0c35@vanbruegge.de> Message-ID: Hello Jan, I think I added these sometime ago, and here is what I recall: * `sfInteractTop` and `sfInteractInert` are to help with type inference: they generate new "derived" constraints, which are used by GHC to instantiate unification variables. - `sfInteractTop` is for facts you can get just by looking at a single constraint. For example, if we see `(x + 5) ~ 8` we can generate a new derived constraint `x ~ 3` - `sfInteractIntert` is for facts that you can get by looking at two constraints together. For example, if we see `(x + a ~ z, x + b ~ x)` we can generate new derived constraint `a ~ b`. - since "derived" constraint do not need evidence, these are just equations. * `sfMatchFun` is used to evaluate built-in type families. For example if we see `5 + 3`, we'd like ghc to reduce this to `8`. - you are correct that the input list are the arguments (e.g., `[5,3]`) - the result is `Just` if we can perform an evaluation step, and the 3-tuple contains: 1. the axiom rule to be used in the evidence (e.g. "AddDef") 2. indexes for the axiom rule (e.g.,"[5,3]") (see below for more info) 3. the result of evaluation (e.g., "8") Part 2 is probably the most confusing, and I think it might have changed a bit since I did it, or perhaps I just forgot some of the details. Either way, this is best explained with an example. The question is "What should be the evidence for `3 + 5 ~ 8`?". In ordinary math one could do a proof by induction, but we don't really represent numbers in the unary notation and we don't have a way to represent inductive proofs in GHC, so instead we decided to have an indexed family of axioms, kind of like this: * AddDef(3,5) : `(3 + 5) ~ 8` * AddDef(2,1) : `(2 + 1) ~ 3` * ... So the types in the second element of the tuple are these indexes that tell you how to instantiate the rule. This is the basic idea, but axioms are encoded in a slightly different way---instead of being parameterized by just types, they are parameterized by equalities (the reason for this is what I seem to have forgotten, or perhaps it changed). So the `CoAxiomRules` actually look like this: * AddDef: (x ~ 3, y ~ 5) => (x + y ~ 8) When we evaluate we always seem to be passing trivial (i.e., "refl") equalities constructed using the second entry in the tuple. For example, if `sfMathcFun` returns `Just (axiom, [t1,t2], result)`, then the result will be `result`, and the evidence that `MyFun args ~ result` will be `axiom (refl @ t1, refl @ t2)` You can see the actual code for this if you look for `sfMatchFun` in `types/FamInstEnv.hs`. 
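To make that concrete, here is a minimal sketch (not GHC's actual code) of what an `sfMatchFun`-style evaluator for the `Add` example could look like, following the `Maybe (rule, indexes, result)` shape above. `axAddDef` is a placeholder for the real `CoAxiomRule` that lives with the family definition; the literal-Nat helpers are the ones `Type` exports, and the module names are those of the current (8.x) tree:

```
import Type     (Type, isNumLitTy, mkNumLitTy)
import CoAxiom  (CoAxiomRule)
import Coercion (Coercion, mkAxiomRuleCo, mkNomReflCo)

-- Placeholder: the real rule is defined alongside the family.
axAddDef :: CoAxiomRule
axAddDef = undefined

-- Given the family's arguments, return the axiom to use, the "indexes"
-- to instantiate it at, and the reduced type.
matchFamAdd :: [Type] -> Maybe (CoAxiomRule, [Type], Type)
matchFamAdd [s, t]
  | Just x <- isNumLitTy s             -- e.g. s is the literal 5
  , Just y <- isNumLitTy t             -- e.g. t is the literal 3
  = Just (axAddDef, [s, t], mkNumLitTy (x + y))    -- reduce to 8
matchFamAdd _ = Nothing

-- When this fires, the evidence for `Add 5 3 ~ 8` is the axiom applied
-- to reflexivity coercions at the returned indexes, i.e. the
-- "axiom (refl @ t1, refl @ t2)" shape described above:
evAdd :: Type -> Type -> Coercion
evAdd s t = mkAxiomRuleCo axAddDef [mkNomReflCo s, mkNomReflCo t]
```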
I hope this makes sense, but please ask away if it is unclear. And, of course, it would be great to document this properly. -Iavor On Tue, Aug 27, 2019 at 3:57 PM Jan van Brügge wrote: > > Hi lovely people, > > sorry if my recent emails are getting annoying. > > In the last few days I refactored my code to use `BuiltInSynFamily` and > `CoAxiomRule` to replace what was very ad-hoc code. So far so easy. But > I have a few questions due to sparse documentation. > > First, about `BuiltInSynFamily`. It is a record of three functions. From > what I can tell by looking at `TcTypeNats`, the two `interact` functions > are used to solve the argument parts of builtin families based on known > results. `interactTop` seems to simply constraints on their own, > `interactInert` seems to simplify based on being given two similar > contraints. > > By big questions is what exactly `matchFam` does. The argument seems to > be the arguments to the type family, but the tuple in the result is less > clear. The axiom rule is the proof witness, the second argument I guess > is the type arguments you actually care about? What is this used for? > The last one should be the result type. > > Attached to that, what are the garantuees about the list of types that > you get? I assumed at first they were all flattened, but my other type > family is not. I am talking about this piece of code here: > > ``` > matchFamRnil :: [Type] -> Maybe (CoAxiomRule, [Type], Type) > matchFamRnil [] = Just (axRnilDef, [], mkRowTy k v []) > where binders = mkTemplateKindTyConBinders [liftedTypeKind, > liftedTypeKind] > [k, v] = map (mkTyVarTy . binderVar) binders > matchFamRnil _ = Nothing > > > matchFamRowExt :: [Type] -> Maybe (CoAxiomRule, [Type], Type) > matchFamRowExt [_k, _v, x, y, row@(RowTy (MkRowTy k v flds))] = Just > (axRowExtDef, [x, y, row], RowTy (MkRowTy k v $ (x, y):flds)) > matchFamRowExt [k, v, x, y, _rnil] = Just (axRowExtRNilDef, [k, v], > RowTy (MkRowTy k v [(x, y)])) > matchFamRowExt _ = Nothing > > ``` > > I needed an extra `_rnil` case in `matchFamRowExt` because `RowExt > "foo" Int RNil` did not match the first pattern match (the dumped list I > got was `[Symbol, Type, "foo", Int, RNil]`). Also, is there a better way > to conjure up polykinded variables out of the blue or is this fine? I > thought about leaving off that info from the type, but then I would have > the same question when trying to implement `typeKind` or `tcTypeKind` > (for a non-empty row I can use the kinds of the head of the list, for an > empty one I would need polykinded variables) > > My last question is about CoAxiomRules. The note says that they > represent "forall as. (r1 ~ r2, s1 ~ s2) => t1 ~ t2", but it is not > clear for me in what relation the constraints on the left are with the > right. From the typeNats it looks like `t1` is the type family applied > to the `_1` arguments and `t2` is the calculated result using the `_2` > arguments. Why are we not getting just a list of types as inputs? Is the > `_1` always a unification/type variable and not really a type you can > use to calculate stuff? Also, if I extrapolate my observations to type > families without arguments, I would assume that I do not have > constraints on the left hand side as `t1` would be the family appied to > the arguments (none) and `t2` would be the calculated result from the > `_2` args (I do not need anything to return an empty row). Is this > correct or am I horribly wrong? 
> > > Thanks for your time listening to my questions, maybe I can open a PR > with a bit of documentation with eventual answers. > > Cheers, > Jan > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From simonpj at microsoft.com Wed Aug 28 07:52:14 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 28 Aug 2019 07:52:14 +0000 Subject: A few questions about BuiltInSynFamily/CoAxiomRules In-Reply-To: References: <79adfdec-31f7-360a-0637-6bbedf4d0c35@vanbruegge.de> Message-ID: Thanks Iavor for your reply -- all correct I think. Re | This is the basic idea, but axioms are encoded in a slightly different | way---instead of being parameterized by just types, they are | parameterized by equalities (the reason for this is what I seem to have | forgotten, or perhaps it changed). you'll find the reason documented in TyCoRep Note [Coercion axioms applied to coercions] There are other useful Notes in that file about axioms. Jan: if you would care to document some of this in a way that makes sense to you, you could add a new Note. That would help everyone! Simon | -----Original Message----- | From: ghc-devs On Behalf Of Iavor Diatchki | Sent: 28 August 2019 01:12 | To: Jan van Brügge | Cc: ghc-devs at haskell.org | Subject: Re: A few questions about BuiltInSynFamily/CoAxiomRules | | Hello Jan, | | I think I added these sometime ago, and here is what I recall: | | * `sfInteractTop` and `sfInteractInert` are to help with type inference: | they generate new "derived" constraints, which are used by GHC to | instantiate unification variables. | - `sfInteractTop` is for facts you can get just by looking at a | single constraint. For example, if we see `(x + 5) ~ 8` we can generate a | new derived constraint `x ~ 3` | - `sfInteractIntert` is for facts that you can get by looking at | two constraints together. For example, if we see `(x + a ~ z, x + b ~ x)` | we can generate new derived constraint `a ~ b`. | - since "derived" constraint do not need evidence, these are just | equations. | | * `sfMatchFun` is used to evaluate built-in type families. For example | if we see `5 + 3`, we'd like ghc to reduce this to `8`. | - you are correct that the input list are the arguments (e.g., | `[5,3]`) | - the result is `Just` if we can perform an evaluation step, and the | 3-tuple contains: | 1. the axiom rule to be used in the evidence (e.g. "AddDef") | 2. indexes for the axiom rule (e.g.,"[5,3]") (see below for more | info) | 3. the result of evaluation (e.g., "8") | | Part 2 is probably the most confusing, and I think it might have changed a | bit since I did it, or perhaps I just forgot some of the details. Either | way, this is best explained with | an example. The question is "What should be the evidence for `3 + 5 ~ | 8`?". | | In ordinary math one could do a proof by induction, but we don't really | represent numbers in the unary notation and we don't have a way to | represent inductive proofs in GHC, so instead we decided to have an | indexed family of axioms, kind of like this: | | * AddDef(3,5) : `(3 + 5) ~ 8` | * AddDef(2,1) : `(2 + 1) ~ 3` | * ... | | So the types in the second element of the tuple are these indexes that | tell you how to instantiate the rule. 
| | This is the basic idea, but axioms are encoded in a slightly different | way---instead of being parameterized by just types, they are | parameterized by equalities (the reason for this is what I seem to have | forgotten, or perhaps it changed). | So the `CoAxiomRules` actually look like this: | | * AddDef: (x ~ 3, y ~ 5) => (x + y ~ 8) | | When we evaluate we always seem to be passing trivial (i.e., "refl") | equalities constructed using the second entry in the tuple. For example, | if `sfMathcFun` returns `Just (axiom, [t1,t2], result)`, then the result | will be `result`, and the evidence that `MyFun args ~ result` will be | `axiom (refl @ t1, refl @ t2)` | | You can see the actual code for this if you look for `sfMatchFun` in | `types/FamInstEnv.hs`. | | I hope this makes sense, but please ask away if it is unclear. And, of | course, it would be great to document this properly. | | -Iavor | | | | | | | | | | | | | | | | | | On Tue, Aug 27, 2019 at 3:57 PM Jan van Brügge wrote: | > | > Hi lovely people, | > | > sorry if my recent emails are getting annoying. | > | > In the last few days I refactored my code to use `BuiltInSynFamily` | > and `CoAxiomRule` to replace what was very ad-hoc code. So far so | > easy. But I have a few questions due to sparse documentation. | > | > First, about `BuiltInSynFamily`. It is a record of three functions. | > From what I can tell by looking at `TcTypeNats`, the two `interact` | > functions are used to solve the argument parts of builtin families | > based on known results. `interactTop` seems to simply constraints on | > their own, `interactInert` seems to simplify based on being given two | > similar contraints. | > | > By big questions is what exactly `matchFam` does. The argument seems | > to be the arguments to the type family, but the tuple in the result is | > less clear. The axiom rule is the proof witness, the second argument I | > guess is the type arguments you actually care about? What is this used | for? | > The last one should be the result type. | > | > Attached to that, what are the garantuees about the list of types that | > you get? I assumed at first they were all flattened, but my other type | > family is not. I am talking about this piece of code here: | > | > ``` | > matchFamRnil :: [Type] -> Maybe (CoAxiomRule, [Type], Type) | > matchFamRnil [] = Just (axRnilDef, [], mkRowTy k v []) | > where binders = mkTemplateKindTyConBinders [liftedTypeKind, | > liftedTypeKind] | > [k, v] = map (mkTyVarTy . binderVar) binders matchFamRnil _ | > = Nothing | > | > | > matchFamRowExt :: [Type] -> Maybe (CoAxiomRule, [Type], Type) | > matchFamRowExt [_k, _v, x, y, row@(RowTy (MkRowTy k v flds))] = Just | > (axRowExtDef, [x, y, row], RowTy (MkRowTy k v $ (x, y):flds)) | > matchFamRowExt [k, v, x, y, _rnil] = Just (axRowExtRNilDef, [k, v], | > RowTy (MkRowTy k v [(x, y)])) matchFamRowExt _ = Nothing | > | > ``` | > | > I needed an extra `_rnil` case in `matchFamRowExt` because `RowExt | > "foo" Int RNil` did not match the first pattern match (the dumped list | > I got was `[Symbol, Type, "foo", Int, RNil]`). Also, is there a better | > way to conjure up polykinded variables out of the blue or is this | > fine? I thought about leaving off that info from the type, but then I | > would have the same question when trying to implement `typeKind` or | > `tcTypeKind` (for a non-empty row I can use the kinds of the head of | > the list, for an empty one I would need polykinded variables) | > | > My last question is about CoAxiomRules. 
The note says that they | > represent "forall as. (r1 ~ r2, s1 ~ s2) => t1 ~ t2", but it is not | > clear for me in what relation the constraints on the left are with the | > right. From the typeNats it looks like `t1` is the type family applied | > to the `_1` arguments and `t2` is the calculated result using the `_2` | > arguments. Why are we not getting just a list of types as inputs? Is | > the `_1` always a unification/type variable and not really a type you | > can use to calculate stuff? Also, if I extrapolate my observations to | > type families without arguments, I would assume that I do not have | > constraints on the left hand side as `t1` would be the family appied | > to the arguments (none) and `t2` would be the calculated result from | > the `_2` args (I do not need anything to return an empty row). Is this | > correct or am I horribly wrong? | > | > | > Thanks for your time listening to my questions, maybe I can open a PR | > with a bit of documentation with eventual answers. | > | > Cheers, | > Jan | > | > _______________________________________________ | > ghc-devs mailing list | > ghc-devs at haskell.org | > https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail. | > haskell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc-devs&data=02%7C01 | > %7Csimonpj%40microsoft.com%7C5dd2f56bf336459df67608d72b4c786c%7C72f988 | > bf86f141af91ab2d7cd011db47%7C1%7C0%7C637025479769572505&sdata=VtrN | > FMSiRxfGvDIKDjHYc0Jk9NgUgaRuP07OvZ5qpr8%3D&reserved=0 | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=02%7C01%7Csimonpj%40microsoft.com%7C5dd2f56bf336459df67608d7 | 2b4c786c%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637025479769572505&a | mp;sdata=VtrNFMSiRxfGvDIKDjHYc0Jk9NgUgaRuP07OvZ5qpr8%3D&reserved=0 From jan at vanbruegge.de Wed Aug 28 10:37:24 2019 From: jan at vanbruegge.de (=?UTF-8?Q?Jan_van_Br=c3=bcgge?=) Date: Wed, 28 Aug 2019 12:37:24 +0200 Subject: A few questions about BuiltInSynFamily/CoAxiomRules In-Reply-To: References: <79adfdec-31f7-360a-0637-6bbedf4d0c35@vanbruegge.de> Message-ID: Thanks, this makes it a lot clearer, it just leaves two questions open: Is there a better way to conjure up polykinded unification/type variables (bonus points if they are not called `k0` and `k1` afterwards)? Why are some type families flattened and some are not (for the last argument of `RowExt` `RNil` is not evaluated, but another `RowExt` is)? Thanks for your help, I will definitely add a new Note there. Cheers, Jan Am 28.08.19 um 09:52 schrieb Simon Peyton Jones: > Thanks Iavor for your reply -- all correct I think. Re > > | This is the basic idea, but axioms are encoded in a slightly different > | way---instead of being parameterized by just types, they are > | parameterized by equalities (the reason for this is what I seem to have > | forgotten, or perhaps it changed). > > you'll find the reason documented in TyCoRep > Note [Coercion axioms applied to coercions] > > There are other useful Notes in that file about axioms. > > Jan: if you would care to document some of this in a way that makes sense to you, you could add a new Note. That would help everyone! 
> > Simon > > > | -----Original Message----- > | From: ghc-devs On Behalf Of Iavor Diatchki > | Sent: 28 August 2019 01:12 > | To: Jan van Brügge > | Cc: ghc-devs at haskell.org > | Subject: Re: A few questions about BuiltInSynFamily/CoAxiomRules > | > | Hello Jan, > | > | I think I added these sometime ago, and here is what I recall: > | > | * `sfInteractTop` and `sfInteractInert` are to help with type inference: > | they generate new "derived" constraints, which are used by GHC to > | instantiate unification variables. > | - `sfInteractTop` is for facts you can get just by looking at a > | single constraint. For example, if we see `(x + 5) ~ 8` we can generate a > | new derived constraint `x ~ 3` > | - `sfInteractIntert` is for facts that you can get by looking at > | two constraints together. For example, if we see `(x + a ~ z, x + b ~ x)` > | we can generate new derived constraint `a ~ b`. > | - since "derived" constraint do not need evidence, these are just > | equations. > | > | * `sfMatchFun` is used to evaluate built-in type families. For example > | if we see `5 + 3`, we'd like ghc to reduce this to `8`. > | - you are correct that the input list are the arguments (e.g., > | `[5,3]`) > | - the result is `Just` if we can perform an evaluation step, and the > | 3-tuple contains: > | 1. the axiom rule to be used in the evidence (e.g. "AddDef") > | 2. indexes for the axiom rule (e.g.,"[5,3]") (see below for more > | info) > | 3. the result of evaluation (e.g., "8") > | > | Part 2 is probably the most confusing, and I think it might have changed a > | bit since I did it, or perhaps I just forgot some of the details. Either > | way, this is best explained with > | an example. The question is "What should be the evidence for `3 + 5 ~ > | 8`?". > | > | In ordinary math one could do a proof by induction, but we don't really > | represent numbers in the unary notation and we don't have a way to > | represent inductive proofs in GHC, so instead we decided to have an > | indexed family of axioms, kind of like this: > | > | * AddDef(3,5) : `(3 + 5) ~ 8` > | * AddDef(2,1) : `(2 + 1) ~ 3` > | * ... > | > | So the types in the second element of the tuple are these indexes that > | tell you how to instantiate the rule. > | > | This is the basic idea, but axioms are encoded in a slightly different > | way---instead of being parameterized by just types, they are > | parameterized by equalities (the reason for this is what I seem to have > | forgotten, or perhaps it changed). > | So the `CoAxiomRules` actually look like this: > | > | * AddDef: (x ~ 3, y ~ 5) => (x + y ~ 8) > | > | When we evaluate we always seem to be passing trivial (i.e., "refl") > | equalities constructed using the second entry in the tuple. For example, > | if `sfMathcFun` returns `Just (axiom, [t1,t2], result)`, then the result > | will be `result`, and the evidence that `MyFun args ~ result` will be > | `axiom (refl @ t1, refl @ t2)` > | > | You can see the actual code for this if you look for `sfMatchFun` in > | `types/FamInstEnv.hs`. > | > | I hope this makes sense, but please ask away if it is unclear. And, of > | course, it would be great to document this properly. > | > | -Iavor > | > | > | > | > | > | > | > | > | > | > | > | > | > | > | > | > | > | On Tue, Aug 27, 2019 at 3:57 PM Jan van Brügge wrote: > | > > | > Hi lovely people, > | > > | > sorry if my recent emails are getting annoying. 
> | > > | > In the last few days I refactored my code to use `BuiltInSynFamily` > | > and `CoAxiomRule` to replace what was very ad-hoc code. So far so > | > easy. But I have a few questions due to sparse documentation. > | > > | > First, about `BuiltInSynFamily`. It is a record of three functions. > | > From what I can tell by looking at `TcTypeNats`, the two `interact` > | > functions are used to solve the argument parts of builtin families > | > based on known results. `interactTop` seems to simply constraints on > | > their own, `interactInert` seems to simplify based on being given two > | > similar contraints. > | > > | > By big questions is what exactly `matchFam` does. The argument seems > | > to be the arguments to the type family, but the tuple in the result is > | > less clear. The axiom rule is the proof witness, the second argument I > | > guess is the type arguments you actually care about? What is this used > | for? > | > The last one should be the result type. > | > > | > Attached to that, what are the garantuees about the list of types that > | > you get? I assumed at first they were all flattened, but my other type > | > family is not. I am talking about this piece of code here: > | > > | > ``` > | > matchFamRnil :: [Type] -> Maybe (CoAxiomRule, [Type], Type) > | > matchFamRnil [] = Just (axRnilDef, [], mkRowTy k v []) > | > where binders = mkTemplateKindTyConBinders [liftedTypeKind, > | > liftedTypeKind] > | > [k, v] = map (mkTyVarTy . binderVar) binders matchFamRnil _ > | > = Nothing > | > > | > > | > matchFamRowExt :: [Type] -> Maybe (CoAxiomRule, [Type], Type) > | > matchFamRowExt [_k, _v, x, y, row@(RowTy (MkRowTy k v flds))] = Just > | > (axRowExtDef, [x, y, row], RowTy (MkRowTy k v $ (x, y):flds)) > | > matchFamRowExt [k, v, x, y, _rnil] = Just (axRowExtRNilDef, [k, v], > | > RowTy (MkRowTy k v [(x, y)])) matchFamRowExt _ = Nothing > | > > | > ``` > | > > | > I needed an extra `_rnil` case in `matchFamRowExt` because `RowExt > | > "foo" Int RNil` did not match the first pattern match (the dumped list > | > I got was `[Symbol, Type, "foo", Int, RNil]`). Also, is there a better > | > way to conjure up polykinded variables out of the blue or is this > | > fine? I thought about leaving off that info from the type, but then I > | > would have the same question when trying to implement `typeKind` or > | > `tcTypeKind` (for a non-empty row I can use the kinds of the head of > | > the list, for an empty one I would need polykinded variables) > | > > | > My last question is about CoAxiomRules. The note says that they > | > represent "forall as. (r1 ~ r2, s1 ~ s2) => t1 ~ t2", but it is not > | > clear for me in what relation the constraints on the left are with the > | > right. From the typeNats it looks like `t1` is the type family applied > | > to the `_1` arguments and `t2` is the calculated result using the `_2` > | > arguments. Why are we not getting just a list of types as inputs? Is > | > the `_1` always a unification/type variable and not really a type you > | > can use to calculate stuff? Also, if I extrapolate my observations to > | > type families without arguments, I would assume that I do not have > | > constraints on the left hand side as `t1` would be the family appied > | > to the arguments (none) and `t2` would be the calculated result from > | > the `_2` args (I do not need anything to return an empty row). Is this > | > correct or am I horribly wrong? 
> | > > | > > | > Thanks for your time listening to my questions, maybe I can open a PR > | > with a bit of documentation with eventual answers. > | > > | > Cheers, > | > Jan > | > > | > _______________________________________________ > | > ghc-devs mailing list > | > ghc-devs at haskell.org > | > https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail. > | > haskell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc-devs&data=02%7C01 > | > %7Csimonpj%40microsoft.com%7C5dd2f56bf336459df67608d72b4c786c%7C72f988 > | > bf86f141af91ab2d7cd011db47%7C1%7C0%7C637025479769572505&sdata=VtrN > | > FMSiRxfGvDIKDjHYc0Jk9NgUgaRuP07OvZ5qpr8%3D&reserved=0 > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask > | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- > | devs&data=02%7C01%7Csimonpj%40microsoft.com%7C5dd2f56bf336459df67608d7 > | 2b4c786c%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637025479769572505&a > | mp;sdata=VtrNFMSiRxfGvDIKDjHYc0Jk9NgUgaRuP07OvZ5qpr8%3D&reserved=0 From simonpj at microsoft.com Wed Aug 28 11:56:15 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 28 Aug 2019 11:56:15 +0000 Subject: Getting a hole's relevant local binds? In-Reply-To: References: Message-ID: how receptive would y'all be to a patch that puts the `TcLclEnv`, or something similar inside `XUnboundVar GhcTc`. That sounds plausible. But is an unbound-var the only place an editor/IDE tooling might want to get its hands on such a thing? ie would that solve your problem, but not the next person’s? Also note that you could easily build up a list of all the in-scope Ids simply by gathering them from the tree as you walk inwards. There’s no actual need for a new function -- although I can see it might be more convenient. Simon From: ghc-devs On Behalf Of Sandy Maguire Sent: 18 August 2019 01:28 To: ghc-devs at haskell.org Subject: Getting a hole's relevant local binds? Hi all, I'm trying to get my hands on the relevant local binds (as reported by ghc in the presence of a type hole) for editor tooling support. Tracing the code suggests that these things come from the `TcLclEnv`, but afaict, all remnants of `TcLclEnv` are thrown away by the time we get a `TypecheckedModule`. Am I mistaken in this? If not, how receptive would y'all be to a patch that puts the `TcLclEnv`, or something similar inside `XUnboundVar GhcTc`. This way editors would have an easy means of getting their hand on whatever is in scope at the site of a hole, without resorting to parsing error messages. Cheers, Sandy -- I'm currently traveling the world, sleeping on people's couches and doing full-time collaboration on Haskell projects. If this seems interesting to you, please consider signing up as a host! https://isovector.github.io/erdos/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandy at sandymaguire.me Thu Aug 29 05:34:30 2019 From: sandy at sandymaguire.me (Sandy Maguire) Date: Wed, 28 Aug 2019 22:34:30 -0700 Subject: Getting a hole's relevant local binds? In-Reply-To: References: Message-ID: I wasn't aware of hole fit plugins! Though I'm not sure they're particularly useful for tooling outside of GHC; it's not clear how I'd install a plugin and then get results out of it into a completely different process. Furthermore, do they only give me hole _fits_? I also need to get my hands on zonked bindings in scope. 
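For what it's worth, one way to approximate that today, without patching GHC or parsing error messages, is to walk the `TypecheckedModule` yourself and collect the `Id`s in it. The sketch below is only a rough approximation: it assumes the GHC-8.6-era flat module names (`GHC`, `Id`) plus the `syb` package, it is not scope-aware, and it gathers every `Id` occurrence rather than just the binders visible at a particular hole:

```
-- Rough sketch, not an official GHC API: over-approximate the
-- "relevant bindings" by collecting every Id that occurs in the
-- typechecked (and therefore zonked) source.  A real tool would
-- filter these by comparing each Id's source span with the hole's.
module HoleBinds (allIds) where

import GHC           (TypecheckedModule, tm_typechecked_source)
import Id            (Id)
import Data.Generics (listify)

-- | Every Id mentioned anywhere in the typechecked bindings.
allIds :: TypecheckedModule -> [Id]
allIds tcm = listify keep (tm_typechecked_source tcm)
  where
    keep :: Id -> Bool
    keep _ = True
```

Each `Id` found this way has its final type attached (via `idType`), which is most of what the "relevant bindings" part of a hole error prints.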
On Thu, Aug 22, 2019 at 9:40 AM Matthew Pickering < matthewtpickering at gmail.com> wrote: > Are you aware of hole fit plugins Sandy? Do they provide a nice API > for you to use? > > This sounds like a cool and simple change anyway. What happens if you > add this additional information using a source plugin or is that too > late? > > Matt > > On Thu, Aug 22, 2019 at 5:24 PM Sandy Maguire > wrote: > > > > Following up on this, I've hacked in the changes locally, by setting > `XVar GhcTc = [Name, Type]`, and filling it only for `HsVar`s that used to > be `HsUnboundVar`s. The result is remarkable, as it allows for interactive > proof search. I've got a proof of concept here: > https://asciinema.org/a/FZjEIFzDoHBv741QDHfsU5cn8 > > > > I think the possibilities here warrant making the same change in HEAD. > I'd be happy to send an MR if it seems likely to be merged. > > > > Sandy > > > > > > > > On Sat, Aug 17, 2019 at 6:27 PM Sandy Maguire > wrote: > >> > >> Hi all, > >> > >> I'm trying to get my hands on the relevant local binds (as reported by > ghc in the presence of a type hole) for editor tooling support. Tracing the > code suggests that these things come from the `TcLclEnv`, but afaict, all > remnants of `TcLclEnv` are thrown away by the time we get a > `TypecheckedModule`. > >> > >> Am I mistaken in this? If not, how receptive would y'all be to a patch > that puts the `TcLclEnv`, or something similar inside `XUnboundVar GhcTc`. > This way editors would have an easy means of getting their hand on whatever > is in scope at the site of a hole, without resorting to parsing error > messages. > >> > >> Cheers, > >> Sandy > >> > >> -- > >> I'm currently traveling the world, sleeping on people's couches and > doing full-time collaboration on Haskell projects. If this seems > interesting to you, please consider signing up as a host! > https://isovector.github.io/erdos/ > > > > > > > > -- > > I'm currently traveling the world, sleeping on people's couches and > doing full-time collaboration on Haskell projects. If this seems > interesting to you, please consider signing up as a host! > https://isovector.github.io/erdos/ > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -- I'm currently traveling the world, sleeping on people's couches and doing full-time collaboration on Haskell projects. If this seems interesting to you, please consider signing up as a host! https://isovector.github.io/erdos/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandy at sandymaguire.me Thu Aug 29 05:36:36 2019 From: sandy at sandymaguire.me (Sandy Maguire) Date: Wed, 28 Aug 2019 22:36:36 -0700 Subject: Getting a hole's relevant local binds? In-Reply-To: References: Message-ID: Simon, my reasoning here is that holes are the only place GHC will mention relevant bindings. I'd definitely prefer to put all of the relevant local bindings in scope for _every_ HsVar, but that seemed less amenable to being merged :) On Wed, Aug 28, 2019 at 4:56 AM Simon Peyton Jones wrote: > how receptive would y'all be to a patch that puts the `TcLclEnv`, or > something similar inside `XUnboundVar GhcTc`. > > > > That sounds plausible. But is an unbound-var the only place an > editor/IDE tooling might want to get its hands on such a thing? ie would > that solve your problem, but not the next person’s? 
> Also note that you could easily build up a list of all the in-scope Ids
> simply by gathering them from the tree as you walk inwards. There’s no
> actual need for a new function -- although I can see it might be more
> convenient.
>
> Simon
>
> *From:* ghc-devs *On Behalf Of *Sandy Maguire
> *Sent:* 18 August 2019 01:28
> *To:* ghc-devs at haskell.org
> *Subject:* Getting a hole's relevant local binds?
>
> Hi all,
>
> I'm trying to get my hands on the relevant local binds (as reported by ghc
> in the presence of a type hole) for editor tooling support. Tracing the
> code suggests that these things come from the `TcLclEnv`, but afaict, all
> remnants of `TcLclEnv` are thrown away by the time we get a
> `TypecheckedModule`.
>
> Am I mistaken in this? If not, how receptive would y'all be to a patch
> that puts the `TcLclEnv`, or something similar inside `XUnboundVar GhcTc`.
> This way editors would have an easy means of getting their hand on whatever
> is in scope at the site of a hole, without resorting to parsing error
> messages.
>
> Cheers,
> Sandy
>
> --
> I'm currently traveling the world, sleeping on people's couches and doing
> full-time collaboration on Haskell projects. If this seems interesting to
> you, please consider signing up as a host!
> https://isovector.github.io/erdos/

--
I'm currently traveling the world, sleeping on people's couches and doing full-time collaboration on Haskell projects. If this seems interesting to you, please consider signing up as a host! https://isovector.github.io/erdos/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From simonpj at microsoft.com  Thu Aug 29 07:20:11 2019
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Thu, 29 Aug 2019 07:20:11 +0000
Subject: Getting a hole's relevant local binds?
In-Reply-To: 
References: 
Message-ID: 

Simon, my reasoning here is that holes are the only place GHC will mention relevant bindings. I'd definitely prefer to put all of the relevant local bindings in scope for _every_ HsVar, but that seemed less amenable to being merged :)

I didn’t explain myself well enough. No need to merge anything. Your tooling can accumulate these bindings as it walks the tree -- no need for GHC to do anything. Eg

  myTooling env (HsLam (HsVar x) e) = myTooling (extend env x) e
  myTooling env <other cases>       = <..., knowing all in-scope bindings in env>

Simon

From: Sandy Maguire
Sent: 29 August 2019 06:37
To: Simon Peyton Jones
Cc: Sandy Maguire; ghc-devs at haskell.org
Subject: Re: Getting a hole's relevant local binds?

Simon, my reasoning here is that holes are the only place GHC will mention relevant bindings. I'd definitely prefer to put all of the relevant local bindings in scope for _every_ HsVar, but that seemed less amenable to being merged :)

On Wed, Aug 28, 2019 at 4:56 AM Simon Peyton Jones wrote:

how receptive would y'all be to a patch that puts the `TcLclEnv`, or something similar inside `XUnboundVar GhcTc`.

That sounds plausible. But is an unbound-var the only place an editor/IDE tooling might want to get its hands on such a thing? ie would that solve your problem, but not the next person’s?

Also note that you could easily build up a list of all the in-scope Ids simply by gathering them from the tree as you walk inwards. There’s no actual need for a new function -- although I can see it might be more convenient.

Simon

From: ghc-devs On Behalf Of Sandy Maguire
Sent: 18 August 2019 01:28
To: ghc-devs at haskell.org
Subject: Getting a hole's relevant local binds?
Hi all, I'm trying to get my hands on the relevant local binds (as reported by ghc in the presence of a type hole) for editor tooling support. Tracing the code suggests that these things come from the `TcLclEnv`, but afaict, all remnants of `TcLclEnv` are thrown away by the time we get a `TypecheckedModule`. Am I mistaken in this? If not, how receptive would y'all be to a patch that puts the `TcLclEnv`, or something similar inside `XUnboundVar GhcTc`. This way editors would have an easy means of getting their hand on whatever is in scope at the site of a hole, without resorting to parsing error messages. Cheers, Sandy -- I'm currently traveling the world, sleeping on people's couches and doing full-time collaboration on Haskell projects. If this seems interesting to you, please consider signing up as a host! https://isovector.github.io/erdos/ -- I'm currently traveling the world, sleeping on people's couches and doing full-time collaboration on Haskell projects. If this seems interesting to you, please consider signing up as a host! https://isovector.github.io/erdos/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan at vanbruegge.de Thu Aug 29 12:46:29 2019 From: jan at vanbruegge.de (=?UTF-8?Q?Jan_van_Br=c3=bcgge?=) Date: Thu, 29 Aug 2019 14:46:29 +0200 Subject: A few questions about BuiltInSynFamily/CoAxiomRules In-Reply-To: References: <79adfdec-31f7-360a-0637-6bbedf4d0c35@vanbruegge.de> Message-ID: <752bc90c-053e-0e83-517a-8a21801b447b@vanbruegge.de> Ok, I managed to answer those two myself. The type families try to match first unflattened and if that fails recursively flatten. My problem was that I messed up the definition of RNil to only match if it gets no arguments. But it does, it gets the inferred kinds! So after fixing that pattern match everything was working. This also makes the first question irrelevant as I get everything needed. Thanks for your help, especially Iavor! Cheers, Jan Am 28.08.19 um 12:37 schrieb Jan van Brügge: > Thanks, this makes it a lot clearer, it just leaves two questions open: > > Is there a better way to conjure up polykinded unification/type > variables (bonus points if they are not called `k0` and `k1` afterwards)? > > Why are some type families flattened and some are not (for the last > argument of `RowExt` `RNil` is not evaluated, but another `RowExt` is)? > > Thanks for your help, I will definitely add a new Note there. > > Cheers, > Jan > > Am 28.08.19 um 09:52 schrieb Simon Peyton Jones: >> Thanks Iavor for your reply -- all correct I think. Re >> >> | This is the basic idea, but axioms are encoded in a slightly different >> | way---instead of being parameterized by just types, they are >> | parameterized by equalities (the reason for this is what I seem to have >> | forgotten, or perhaps it changed). >> >> you'll find the reason documented in TyCoRep >> Note [Coercion axioms applied to coercions] >> >> There are other useful Notes in that file about axioms. >> >> Jan: if you would care to document some of this in a way that makes sense to you, you could add a new Note. That would help everyone! 
>> >> Simon >> >> >> | -----Original Message----- >> | From: ghc-devs On Behalf Of Iavor Diatchki >> | Sent: 28 August 2019 01:12 >> | To: Jan van Brügge >> | Cc: ghc-devs at haskell.org >> | Subject: Re: A few questions about BuiltInSynFamily/CoAxiomRules >> | >> | Hello Jan, >> | >> | I think I added these sometime ago, and here is what I recall: >> | >> | * `sfInteractTop` and `sfInteractInert` are to help with type inference: >> | they generate new "derived" constraints, which are used by GHC to >> | instantiate unification variables. >> | - `sfInteractTop` is for facts you can get just by looking at a >> | single constraint. For example, if we see `(x + 5) ~ 8` we can generate a >> | new derived constraint `x ~ 3` >> | - `sfInteractIntert` is for facts that you can get by looking at >> | two constraints together. For example, if we see `(x + a ~ z, x + b ~ x)` >> | we can generate new derived constraint `a ~ b`. >> | - since "derived" constraint do not need evidence, these are just >> | equations. >> | >> | * `sfMatchFun` is used to evaluate built-in type families. For example >> | if we see `5 + 3`, we'd like ghc to reduce this to `8`. >> | - you are correct that the input list are the arguments (e.g., >> | `[5,3]`) >> | - the result is `Just` if we can perform an evaluation step, and the >> | 3-tuple contains: >> | 1. the axiom rule to be used in the evidence (e.g. "AddDef") >> | 2. indexes for the axiom rule (e.g.,"[5,3]") (see below for more >> | info) >> | 3. the result of evaluation (e.g., "8") >> | >> | Part 2 is probably the most confusing, and I think it might have changed a >> | bit since I did it, or perhaps I just forgot some of the details. Either >> | way, this is best explained with >> | an example. The question is "What should be the evidence for `3 + 5 ~ >> | 8`?". >> | >> | In ordinary math one could do a proof by induction, but we don't really >> | represent numbers in the unary notation and we don't have a way to >> | represent inductive proofs in GHC, so instead we decided to have an >> | indexed family of axioms, kind of like this: >> | >> | * AddDef(3,5) : `(3 + 5) ~ 8` >> | * AddDef(2,1) : `(2 + 1) ~ 3` >> | * ... >> | >> | So the types in the second element of the tuple are these indexes that >> | tell you how to instantiate the rule. >> | >> | This is the basic idea, but axioms are encoded in a slightly different >> | way---instead of being parameterized by just types, they are >> | parameterized by equalities (the reason for this is what I seem to have >> | forgotten, or perhaps it changed). >> | So the `CoAxiomRules` actually look like this: >> | >> | * AddDef: (x ~ 3, y ~ 5) => (x + y ~ 8) >> | >> | When we evaluate we always seem to be passing trivial (i.e., "refl") >> | equalities constructed using the second entry in the tuple. For example, >> | if `sfMathcFun` returns `Just (axiom, [t1,t2], result)`, then the result >> | will be `result`, and the evidence that `MyFun args ~ result` will be >> | `axiom (refl @ t1, refl @ t2)` >> | >> | You can see the actual code for this if you look for `sfMatchFun` in >> | `types/FamInstEnv.hs`. >> | >> | I hope this makes sense, but please ask away if it is unclear. And, of >> | course, it would be great to document this properly. >> | >> | -Iavor >> | >> | >> | >> | >> | >> | >> | >> | >> | >> | >> | >> | >> | >> | >> | >> | >> | >> | On Tue, Aug 27, 2019 at 3:57 PM Jan van Brügge wrote: >> | > >> | > Hi lovely people, >> | > >> | > sorry if my recent emails are getting annoying. 
>> | > >> | > In the last few days I refactored my code to use `BuiltInSynFamily` >> | > and `CoAxiomRule` to replace what was very ad-hoc code. So far so >> | > easy. But I have a few questions due to sparse documentation. >> | > >> | > First, about `BuiltInSynFamily`. It is a record of three functions. >> | > From what I can tell by looking at `TcTypeNats`, the two `interact` >> | > functions are used to solve the argument parts of builtin families >> | > based on known results. `interactTop` seems to simply constraints on >> | > their own, `interactInert` seems to simplify based on being given two >> | > similar contraints. >> | > >> | > By big questions is what exactly `matchFam` does. The argument seems >> | > to be the arguments to the type family, but the tuple in the result is >> | > less clear. The axiom rule is the proof witness, the second argument I >> | > guess is the type arguments you actually care about? What is this used >> | for? >> | > The last one should be the result type. >> | > >> | > Attached to that, what are the garantuees about the list of types that >> | > you get? I assumed at first they were all flattened, but my other type >> | > family is not. I am talking about this piece of code here: >> | > >> | > ``` >> | > matchFamRnil :: [Type] -> Maybe (CoAxiomRule, [Type], Type) >> | > matchFamRnil [] = Just (axRnilDef, [], mkRowTy k v []) >> | > where binders = mkTemplateKindTyConBinders [liftedTypeKind, >> | > liftedTypeKind] >> | > [k, v] = map (mkTyVarTy . binderVar) binders matchFamRnil _ >> | > = Nothing >> | > >> | > >> | > matchFamRowExt :: [Type] -> Maybe (CoAxiomRule, [Type], Type) >> | > matchFamRowExt [_k, _v, x, y, row@(RowTy (MkRowTy k v flds))] = Just >> | > (axRowExtDef, [x, y, row], RowTy (MkRowTy k v $ (x, y):flds)) >> | > matchFamRowExt [k, v, x, y, _rnil] = Just (axRowExtRNilDef, [k, v], >> | > RowTy (MkRowTy k v [(x, y)])) matchFamRowExt _ = Nothing >> | > >> | > ``` >> | > >> | > I needed an extra `_rnil` case in `matchFamRowExt` because `RowExt >> | > "foo" Int RNil` did not match the first pattern match (the dumped list >> | > I got was `[Symbol, Type, "foo", Int, RNil]`). Also, is there a better >> | > way to conjure up polykinded variables out of the blue or is this >> | > fine? I thought about leaving off that info from the type, but then I >> | > would have the same question when trying to implement `typeKind` or >> | > `tcTypeKind` (for a non-empty row I can use the kinds of the head of >> | > the list, for an empty one I would need polykinded variables) >> | > >> | > My last question is about CoAxiomRules. The note says that they >> | > represent "forall as. (r1 ~ r2, s1 ~ s2) => t1 ~ t2", but it is not >> | > clear for me in what relation the constraints on the left are with the >> | > right. From the typeNats it looks like `t1` is the type family applied >> | > to the `_1` arguments and `t2` is the calculated result using the `_2` >> | > arguments. Why are we not getting just a list of types as inputs? Is >> | > the `_1` always a unification/type variable and not really a type you >> | > can use to calculate stuff? Also, if I extrapolate my observations to >> | > type families without arguments, I would assume that I do not have >> | > constraints on the left hand side as `t1` would be the family appied >> | > to the arguments (none) and `t2` would be the calculated result from >> | > the `_2` args (I do not need anything to return an empty row). Is this >> | > correct or am I horribly wrong? 
>> | > >> | > >> | > Thanks for your time listening to my questions, maybe I can open a PR >> | > with a bit of documentation with eventual answers. >> | > >> | > Cheers, >> | > Jan >> | > >> | > _______________________________________________ >> | > ghc-devs mailing list >> | > ghc-devs at haskell.org >> | > https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail. >> | > haskell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc-devs&data=02%7C01 >> | > %7Csimonpj%40microsoft.com%7C5dd2f56bf336459df67608d72b4c786c%7C72f988 >> | > bf86f141af91ab2d7cd011db47%7C1%7C0%7C637025479769572505&sdata=VtrN >> | > FMSiRxfGvDIKDjHYc0Jk9NgUgaRuP07OvZ5qpr8%3D&reserved=0 >> | _______________________________________________ >> | ghc-devs mailing list >> | ghc-devs at haskell.org >> | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask >> | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- >> | devs&data=02%7C01%7Csimonpj%40microsoft.com%7C5dd2f56bf336459df67608d7 >> | 2b4c786c%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637025479769572505&a >> | mp;sdata=VtrNFMSiRxfGvDIKDjHYc0Jk9NgUgaRuP07OvZ5qpr8%3D&reserved=0 From sandy at sandymaguire.me Thu Aug 29 15:22:47 2019 From: sandy at sandymaguire.me (Sandy Maguire) Date: Thu, 29 Aug 2019 08:22:47 -0700 Subject: Getting a hole's relevant local binds? In-Reply-To: References: Message-ID: Fair enough --- though it's annoying that GHC can offer me exactly the information I want in an error message, but then force me to duplicate the logic to find those things for myself! On Thu, Aug 29, 2019 at 12:20 AM Simon Peyton Jones wrote: > Simon, my reasoning here is that holes are the only place GHC will mention > relevant bindings. I'd definitely prefer to put all of the relevant local > bindings in scope for _every_ HsVar, but that seemed less amenable to > being merged :) > > I didn’t explain myself well enough. No need to merge anything. *Your > tooling* can accumulate these bindings as it walks the tree -- no need > for GHC to do anything. Eg > > > > myTooling env (HsLam (HsVar x) e) = myTooling (extend env x) e > > myTooling env = knowing all in-scope binding s in env> > > > > Simon > > > > > > *From:* Sandy Maguire > *Sent:* 29 August 2019 06:37 > *To:* Simon Peyton Jones > *Cc:* Sandy Maguire ; ghc-devs at haskell.org > *Subject:* Re: Getting a hole's relevant local binds? > > > > Simon, my reasoning here is that holes are the only place GHC will mention > relevant bindings. I'd definitely prefer to put all of the relevant local > bindings in scope for _every_ HsVar, but that seemed less amenable to > being merged :) > > > > On Wed, Aug 28, 2019 at 4:56 AM Simon Peyton Jones > wrote: > > how receptive would y'all be to a patch that puts the `TcLclEnv`, or > something similar inside `XUnboundVar GhcTc`. > > > > That sounds plausible. But is an unbound-var the only place an > editor/IDE tooling might want to get its hands on such a thing? ie would > that solve your problem, but not the next person’s? > > > > Also note that you could easily build up a list of all the in-scope Ids > simply by gathering them from the tree as you walk inwards. There’s no > actual need for a new function -- although I can see it might be more > convenient. > > > > Simon > > > > *From:* ghc-devs *On Behalf Of *Sandy > Maguire > *Sent:* 18 August 2019 01:28 > *To:* ghc-devs at haskell.org > *Subject:* Getting a hole's relevant local binds? 
> > > > Hi all, > > > > I'm trying to get my hands on the relevant local binds (as reported by ghc > in the presence of a type hole) for editor tooling support. Tracing the > code suggests that these things come from the `TcLclEnv`, but afaict, all > remnants of `TcLclEnv` are thrown away by the time we get a > `TypecheckedModule`. > > > > Am I mistaken in this? If not, how receptive would y'all be to a patch > that puts the `TcLclEnv`, or something similar inside `XUnboundVar GhcTc`. > This way editors would have an easy means of getting their hand on whatever > is in scope at the site of a hole, without resorting to parsing error > messages. > > > > Cheers, > > Sandy > > > > -- > > I'm currently traveling the world, sleeping on people's couches and doing > full-time collaboration on Haskell projects. If this seems interesting to > you, please consider signing up as a host! > https://isovector.github.io/erdos/ > > > > > > -- > > I'm currently traveling the world, sleeping on people's couches and doing > full-time collaboration on Haskell projects. If this seems interesting to > you, please consider signing up as a host! > https://isovector.github.io/erdos/ > > -- I'm currently traveling the world, sleeping on people's couches and doing full-time collaboration on Haskell projects. If this seems interesting to you, please consider signing up as a host! https://isovector.github.io/erdos/ -------------- next part -------------- An HTML attachment was scrubbed... URL: