From simonpj at microsoft.com Fri Jul 1 22:51:20 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 1 Jul 2016 22:51:20 +0000 Subject: Linker.c broken Message-ID: <616614ea7cf743f6ba6abb532525c69a@DB4PR30MB030.064d.mgd.msft.net> Aargh! Windows is broken /again/. Some mess-up in Linker.c. I have not yet tried reverting recent patches. Might someone fix please? It’s really helpful to validate on Windows when making RTS changes. Simon rts\Linker.c: In function 'ghci_find': rts\Linker.c:1482:52: error: error: pointer type mismatch in conditional expression [-Werror] oc->archiveMemberName : oc->fileName); ^ rts\Linker.c:1480:28: error: error: format '%ls' expects argument of type 'wchar_t *', but argument 3 has type 'void *' [-Werror=format=] debugBelch("%p is in %" PATH_FMT, addr, ^ "inplace/bin/ghc-stage1.exe" -optc-fno-stack-protector -optc-Wall -optc-Werror -optc-Wall -optc-Wextra -optc-Wstrict-prototypes -optc-Wmissing-prototypes -optc-Wmissing-declarations -optc-Winline -optc-Waggregate-return -optc-Wpointer-arith -optc-Wmissing-noreturn -optc-Wnested-externs -optc-Wredundant-decls -optc-Iincludes -optc-Iincludes/dist -optc-Iincludes/dist-derivedconstants/header -optc-Iincludes/dist-ghcconstants/header -optc-Irts -optc-Irts/dist/build -optc-DCOMPILING_RTS -optc-fno-strict-aliasing -optc-fno-common -optc-Irts/dist/build/./autogen -optc-Wno-error=inline -optc-O2 -optc-fomit-frame-pointer -optc-g -optc-fno-omit-frame-pointer -optc-g -optc-O0 -optc-DRtsWay=\"rts_debug\" -optc-DWINVER=0x06000100 -static -optc-DDEBUG -ticky -DTICKY_TICKY -O0 -H64m -Wall -fllvm-fill-undef-with-garbage -Werror -Iincludes -Iincludes/dist -Iincludes/dist-derivedconstants/header -Iincludes/dist-ghcconstants/header -Irts -Irts/dist/build -DCOMPILING_RTS -this-unit-id rts -dcmm-lint -i -irts -irts/dist/build -Irts/dist/build -irts/dist/build/./autogen -Irts/dist/build/./autogen -O2 -O0 -Wnoncanonical-monad-instances -c rts/RaiseAsync.c -o rts/dist/build/RaiseAsync.debug_o rts\Linker.c:1483:28: error: error: format '%lx' expects argument of type 'long unsigned int', but argument 3 has type 'long long unsigned int' [-Werror=format=] debugBelch(", section %d, offset %lx\n", i, ^ In file included from rts\Linker.c:13:0: error: rts\Linker.c: In function 'ocTryLoad': rts\Linker.c:2563:55: error: error: pointer type mismatch in conditional expression [-Werror] oc->archiveMemberName : oc->fileName)); ^ includes\Rts.h:300:53: error: note: in definition of macro 'IF_DEBUG' #define IF_DEBUG(c,s) if (RtsFlags.DebugFlags.c) { s; } ^ rts\Linker.c:2561:33: error: error: format '%ls' expects argument of type 'wchar_t *', but argument 2 has type 'void *' [-Werror=format=] IF_DEBUG(linker, debugBelch("Resolving %" PATH_FMT "\n", ^ includes\Rts.h:300:53: error: note: in definition of macro 'IF_DEBUG' #define IF_DEBUG(c,s) if (RtsFlags.DebugFlags.c) { s; } ^ cc1.exe: all warnings being treated as errors `gcc.exe' failed in phase `C Compiler'. (Exit code: 1) rts/ghc.mk:255: recipe for target 'rts/dist/build/Linker.debug_o' failed make[1]: *** [rts/dist/build/Linker.debug_o] Error 1 make[1]: *** Waiting for unfinished jobs.... Makefile:129: recipe for target 'all' failed make: *** [all] Error 2 /cygdrive/c/code/HEAD$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Sat Jul 2 00:02:12 2016 From: ezyang at mit.edu (Edward Z. 
Yang) Date: Fri, 01 Jul 2016 20:02:12 -0400 Subject: Linker.c broken In-Reply-To: <616614ea7cf743f6ba6abb532525c69a@DB4PR30MB030.064d.mgd.msft.net> References: <616614ea7cf743f6ba6abb532525c69a@DB4PR30MB030.064d.mgd.msft.net> Message-ID: <1467417700-sup-5082@sabre> I'm guessing it's: commit 6377757918c1e7f63638d6f258cad8d5f02bb6a7 Author: Simon Marlow Date: Wed Jun 29 21:50:18 2016 +0100 Linker: some extra debugging / logging which added ghci_find. Edward Excerpts from Simon Peyton Jones via ghc-devs's message of 2016-07-01 18:51:20 -0400: > Aargh! Windows is broken /again/. Some mess-up in Linker.c. > I have not yet tried reverting recent patches. Might someone fix please? > It’s really helpful to validate on Windows when making RTS changes. > Simon > > > > rts\Linker.c: In function 'ghci_find': > > > > rts\Linker.c:1482:52: error: > > error: pointer type mismatch in conditional expression [-Werror] > > oc->archiveMemberName : oc->fileName); > > ^ > > > > rts\Linker.c:1480:28: error: > > error: format '%ls' expects argument of type 'wchar_t *', but argument 3 has type 'void *' [-Werror=format=] > > debugBelch("%p is in %" PATH_FMT, addr, > > ^ > > "inplace/bin/ghc-stage1.exe" -optc-fno-stack-protector -optc-Wall -optc-Werror -optc-Wall -optc-Wextra -optc-Wstrict-prototypes -optc-Wmissing-prototypes -optc-Wmissing-declarations -optc-Winline -optc-Waggregate-return -optc-Wpointer-arith -optc-Wmissing-noreturn -optc-Wnested-externs -optc-Wredundant-decls -optc-Iincludes -optc-Iincludes/dist -optc-Iincludes/dist-derivedconstants/header -optc-Iincludes/dist-ghcconstants/header -optc-Irts -optc-Irts/dist/build -optc-DCOMPILING_RTS -optc-fno-strict-aliasing -optc-fno-common -optc-Irts/dist/build/./autogen -optc-Wno-error=inline -optc-O2 -optc-fomit-frame-pointer -optc-g -optc-fno-omit-frame-pointer -optc-g -optc-O0 -optc-DRtsWay=\"rts_debug\" -optc-DWINVER=0x06000100 -static -optc-DDEBUG -ticky -DTICKY_TICKY -O0 -H64m -Wall -fllvm-fill-undef-with-garbage -Werror -Iincludes -Iincludes/dist -Iincludes/dist-derivedconstants/header -Iincludes/dist-ghcconstants/header -Irts -Irts/dist/build -DCOMPILING_RTS -this-unit-id rts -dcmm-lint -i -irts -irts/dist/build -Irts/dist/build -irts/dist/build/./autogen -Irts/dist/build/./autogen -O2 -O0 -Wnoncanonical-monad-instances -c rts/RaiseAsync.c -o rts/dist/build/RaiseAsync.debug_o > > > > rts\Linker.c:1483:28: error: > > error: format '%lx' expects argument of type 'long unsigned int', but argument 3 has type 'long long unsigned int' [-Werror=format=] > > debugBelch(", section %d, offset %lx\n", i, > > ^ > > > > In file included from rts\Linker.c:13:0: error: > > rts\Linker.c: In function 'ocTryLoad': > > > > rts\Linker.c:2563:55: error: > > error: pointer type mismatch in conditional expression [-Werror] > > oc->archiveMemberName : oc->fileName)); > > ^ > > > > includes\Rts.h:300:53: error: > > note: in definition of macro 'IF_DEBUG' > > #define IF_DEBUG(c,s) if (RtsFlags.DebugFlags.c) { s; } > > ^ > > > > rts\Linker.c:2561:33: error: > > error: format '%ls' expects argument of type 'wchar_t *', but argument 2 has type 'void *' [-Werror=format=] > > IF_DEBUG(linker, debugBelch("Resolving %" PATH_FMT "\n", > > ^ > > > > includes\Rts.h:300:53: error: > > note: in definition of macro 'IF_DEBUG' > > #define IF_DEBUG(c,s) if (RtsFlags.DebugFlags.c) { s; } > > ^ > > cc1.exe: all warnings being treated as errors > > `gcc.exe' failed in phase `C Compiler'. 
(Exit code: 1) > > rts/ghc.mk:255: recipe for target 'rts/dist/build/Linker.debug_o' failed > > make[1]: *** [rts/dist/build/Linker.debug_o] Error 1 > > make[1]: *** Waiting for unfinished jobs.... > > Makefile:129: recipe for target 'all' failed > > make: *** [all] Error 2 > > /cygdrive/c/code/HEAD$ From ezyang at mit.edu Sat Jul 2 00:16:45 2016 From: ezyang at mit.edu (Edward Z. Yang) Date: Fri, 01 Jul 2016 20:16:45 -0400 Subject: Template Haskell determinism In-Reply-To: References: <7bcbd252616e476b8736549f67f65ade@DB4PR30MB030.064d.mgd.msft.net> <1465146843-sup-5444@sabre> <1467210765-sup-5977@sabre> Message-ID: <1467418021-sup-3725@sabre> Oh drat, that's right, local names don't get given a package key / package id, and externally visible local names aren't given a deterministic name until we tidy (which is too late to help Template Haskell.) So I suppose there is not much we can do here. Edward Excerpts from Michael Sloan's message of 2016-06-29 13:41:13 -0400: > No, NameU and NameL both lack package key / package id. > > -Michael > > On Wed, Jun 29, 2016 at 7:34 AM, Edward Z. Yang wrote: > > No, nameBase is not the right thing to use here; you also need the > > unit ID (in GHC 8.0 parlance; package key in GHC 7.10; package id > > in GHC 7.8 and before). If you have that information, then > > GHC establishes an invariant that if two names compare stably equal, > > then the uniques associated with them are the same. > > > > Edward > > > > Excerpts from Michael Sloan's message of 2016-06-10 17:16:44 -0400: > >> Hey, sorry for not getting back to this sooner! > >> > >> Perhaps I should have added the following to my list of goals in contention: > >> > >> (3) (==) shouldn't yield True for Names that have different unique ids. > >> > >> We can only have stable comparisons if goal (3) isn't met, and two > >> different unique Names would be considered to be equivalent based on the > >> nameBase. This is because Ord is a total order, not a partial order. As > >> described in my prior email, PartialOrd could be added, but it'd be > >> inconvenient to use with existing Ord based containers. > >> > >> -Michael > >> > >> On Sun, Jun 5, 2016 at 10:15 AM, Edward Z. Yang wrote: > >> > >> > I must admit, I am a bit confused by this discussion. > >> > > >> > It is true that every Name is associated with a Unique. But you don't > >> > need the Unique to equality/ordering tests; the names also contain > >> > enough (stable) information for stable comparisons of that sort. So > >> > why don't we expose that instead of the Unique? > >> > > >> > Edward > >> > > >> > Excerpts from Michael Sloan's message of 2016-06-04 18:44:03 -0700: > >> > > On Thu, Jun 2, 2016 at 4:12 AM, Simon Peyton Jones < > >> > simonpj at microsoft.com> > >> > > wrote: > >> > > > >> > > > If names get different ordering keys when reified from different > >> > modules > >> > > > (seems like they'd have to, particularly given ghc's "-j"), then we > >> > end up > >> > > > with an unpleasant circumstance where these do not compare as equal > >> > > > > >> > > > > >> > > > > >> > > > The I believe that global, top level names (NameG) are not subject to > >> > this > >> > > > ordering stuff, so I don’t think this problem can occur. > >> > > > > >> > > > >> > > True, top level names are NameG. The reified Info for a top level Dec > >> > may > >> > > include NameU, though. 
For example, the type variables in 'Maybe' are > >> > > NameU: > >> > > > >> > > $(do TyConI (DataD _ _ [KindedTV (Name _ nf) _] _ _ _) <- reify ''Maybe > >> > > lift (show nf)) > >> > > > >> > > The resulting expression is something like "NameU 822083586" > >> > > > >> > > > This is a breaking change and it doesn't fix the problem that > >> > NameFlavour > >> > > > is > >> > > > > >> > > > not abstract and leaks the Uniques. It would break at least: > >> > > > > >> > > > > >> > > > > >> > > > But why is NameU exposed to clients? GHC needs to know, but clients > >> > > > don’t. What use are these packages making of it? > >> > > > > >> > > > >> > > It's being leaked in the public inteface via Ord. The Eq instance is > >> > fine, > >> > > because these are Uniques, so the results should be consistent. > >> > > > >> > > There are two goals in contention here: > >> > > > >> > > 1) Having some ordering on Names so that they can be used in Map or Set > >> > > 2) Having law-abiding Eq / Ord instances. We'd need a 'PartialOrd' to > >> > > really handle these well. In that case, the ordering would be based on > >> > > everything but the NameU int, but 'Eq' would still follow it > >> > > > >> > > A few ideas for different approaches to resolving this: > >> > > > >> > > 1) Document it. Less appealing than fixing it in the API, but still > >> > would > >> > > be good. > >> > > > >> > > 2) Remove the 'Ord' instance, and force the user to pick 'NamePartialOrd' > >> > > newtype (partial ord on the non-unique info), or 'UnstableNameOrd' > >> > newtype > >> > > (current behavior). A trickyness of this approach is that you'd need > >> > > containers that can handle (PartialOrd k, Eq k) keys. In lots of cases > >> > > people are using the 'Ord' instance with 'Name's that are not 'NameU', so > >> > > this would break a lot of code that was already deterministic. > >> > > > >> > > 3) Some approaches like this ordering key, but I'm not sure how it will > >> > > help when comparing NameUs from different modules? > >> > > > >> > > > S > >> > > > > >> > > > > >> > > > > >> > > > > >> > > > > >> > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of > >> > *Michael > >> > > > Sloan > >> > > > *Sent:* 02 June 2016 02:07 > >> > > > *To:* Bartosz Nitka > >> > > > *Cc:* ghc-devs Devs > >> > > > *Subject:* Re: Template Haskell determinism > >> > > > > >> > > > > >> > > > > >> > > > +1 to solving this. Not sure about the approach, but assuming the > >> > > > following concerns are addressed, I'm (+1) on it too: > >> > > > > >> > > > > >> > > > > >> > > > This solution is clever! However, I think there is some difficulty to > >> > > > determining this ordering key. Namely, what happens when I construct > >> > the > >> > > > (Set Name) using results from multiple reifies? > >> > > > > >> > > > > >> > > > > >> > > > One solution is to have the ordering key be a consecutive supply that's > >> > > > initialized on a per-module basis. There is still an issue there, > >> > though, > >> > > > which is that you might store one of these names in a global IORef > >> > that's > >> > > > used by a later TH splice. Or, similarly, serialize the names to a > >> > file > >> > > > and later load them. At least in those cases you need to use 'runIO' > >> > to > >> > > > break determinism. 
> >> > > > > >> > > > > >> > > > > >> > > > If names get different ordering keys when reified from different > >> > modules > >> > > > (seems like they'd have to, particularly given ghc's "-j"), then we > >> > end up > >> > > > with an unpleasant circumstance where these do not compare as equal. > >> > How > >> > > > about having the Eq instance ignore the ordering key? I think that > >> > mostly > >> > > > resolves this concern. This implies that the Ord instance should also > >> > > > yield EQ and ignore the ordering key, when the unique key matches. > >> > > > > >> > > > > >> > > > > >> > > > One issue with this is that switching the order of reify could > >> > > > unexpectedly vary the behavior. > >> > > > > >> > > > > >> > > > > >> > > > Does the map in TcGblEnv imply that a reify from a later module will > >> > get > >> > > > the same ordering key? So does this mean that the keys used in a given > >> > > > reify depend on which things have already been reified? In that case, > >> > then > >> > > > this is also an issue with your solution. Now, it's not a big problem > >> > at > >> > > > all, just surprising to the user. > >> > > > > >> > > > > >> > > > > >> > > > > >> > > > > >> > > > If the internal API for Name does change, may as well address > >> > > > https://ghc.haskell.org/trac/ghc/ticket/10311 too. I agree with SPJ's > >> > > > suggested solution of having both the traditional package identifier > >> > and > >> > > > package keys in 'Name'. > >> > > > > >> > > > > >> > > > > >> > > > -Michael > >> > > > > >> > > > > >> > > > > >> > > > On Tue, May 31, 2016 at 6:54 AM, Bartosz Nitka > >> > wrote: > >> > > > > >> > > > Template Haskell with its ability to do arbitrary IO is > >> > non-deterministic > >> > > > by > >> > > > > >> > > > design. You could for example embed the current date in a file. There > >> > is > >> > > > > >> > > > however one kind of non-deterministic behavior that you can trigger > >> > > > > >> > > > accidentally. It has to do with how Names are reified. If you take a > >> > look > >> > > > at > >> > > > > >> > > > the definition of reifyName you can see that it puts the assigned > >> > Unique > >> > > > in a > >> > > > > >> > > > NameU: > >> > > > > >> > > > > >> > > > > >> > > > reifyName :: NamedThing n => n -> TH.Name > >> > > > > >> > > > reifyName thing > >> > > > > >> > > > | isExternalName name = mk_varg pkg_str mod_str occ_str > >> > > > > >> > > > | otherwise = TH.mkNameU occ_str (getKey (getUnique > >> > name)) > >> > > > > >> > > > ... > >> > > > > >> > > > NameFlavour which NameU is a constructor of has a default Ord instance, > >> > > > meaning > >> > > > > >> > > > that it ends up comparing the Uniques. The relative ordering of > >> > Uniques is > >> > > > not > >> > > > > >> > > > guaranteed to be stable across recompilations [1], so this can lead to > >> > > > > >> > > > ABI-incompatible binaries. > >> > > > > >> > > > > >> > > > > >> > > > This isn't an abstract problem and it actually happens in practice. The > >> > > > > >> > > > microlens package keeps Names in a Set and later turns that set into a > >> > > > list. > >> > > > > >> > > > The results have different orders of TyVars resulting in different ABI > >> > > > hashes > >> > > > > >> > > > and can potentially be optimized differently. > >> > > > > >> > > > > >> > > > > >> > > > I believe it's worth to handle this case in a deterministic way and I > >> > have > >> > > > a > >> > > > > >> > > > solution in mind. 
The idea is to extend NameU (and potentially NameL) > >> > with > >> > > > an > >> > > > > >> > > > ordering key. To be more concrete: > >> > > > > >> > > > > >> > > > > >> > > > - | NameU !Int > >> > > > > >> > > > + | NameU !Int !Int > >> > > > > >> > > > > >> > > > > >> > > > This way the Ord instance can use a stable key and the problem reduces > >> > to > >> > > > > >> > > > ensuring the keys are stable. To generate stable keys we can use the > >> > fact > >> > > > that > >> > > > > >> > > > reify traverses the expressions in the same order every time and > >> > > > sequentially > >> > > > > >> > > > allocate new keys based on traversal order. The way I have it > >> > implemented > >> > > > now > >> > > > > >> > > > is to add a new field in TcGblEnv which maps Uniques to allocated keys: > >> > > > > >> > > > > >> > > > > >> > > > + tcg_th_names :: TcRef (UniqFM Int, Int), > >> > > > > >> > > > > >> > > > > >> > > > Then the reifyName and qNewName do the necessary bookkeeping and > >> > translate > >> > > > the > >> > > > > >> > > > Uniques on the fly. > >> > > > > >> > > > > >> > > > > >> > > > This is a breaking change and it doesn't fix the problem that > >> > NameFlavour > >> > > > is > >> > > > > >> > > > not abstract and leaks the Uniques. It would break at least: > >> > > > > >> > > > > >> > > > > >> > > > - singletons > >> > > > > >> > > > - th-lift > >> > > > > >> > > > - haskell-src-meta > >> > > > > >> > > > - shakespeare > >> > > > > >> > > > - distributed-closure > >> > > > > >> > > > > >> > > > > >> > > > I'd like to get feedback if this is an acceptable solution and if the > >> > > > problem > >> > > > > >> > > > is worth solving. > >> > > > > >> > > > > >> > > > > >> > > > Cheers, > >> > > > > >> > > > Bartosz > >> > > > > >> > > > > >> > > > > >> > > > [1] > >> > > > > >> > https://ghc.haskell.org/trac/ghc/wiki/DeterministicBuilds#NondeterministicUniques > >> > > > > >> > > > > >> > > > _______________________________________________ > >> > > > ghc-devs mailing list > >> > > > ghc-devs at haskell.org > >> > > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > >> > > > < > >> > https://na01.safelinks.protection.outlook.com/?url=http%3a%2f%2fmail.haskell.org%2fcgi-bin%2fmailman%2flistinfo%2fghc-devs&data=01%7c01%7csimonpj%40064d.mgd.microsoft.com%7c1a4a84c9341546403e1508d38a8246ee%7c72f988bf86f141af91ab2d7cd011db47%7c1&sdata=mjEDuk%2fuRsDLg0q63zaIBeh5e2IyfKnKjcEcRLDvERE%3d > >> > > > >> > > > > >> > > > > >> > > > > >> > From ezyang at mit.edu Sat Jul 2 04:49:55 2016 From: ezyang at mit.edu (Edward Z. Yang) Date: Sat, 02 Jul 2016 00:49:55 -0400 Subject: Interruptible exception wormholes kill modularity Message-ID: <1467429537-sup-6217@sabre> In 2010, in the thread "Asynchronous exception wormholes kill modularity" [1], Bas van Dijk observed that 'unblock :: IO a -> IO a' broke modularity, as the sequence of calls 'block . block . unblock $ io' would result in 'io' being run with asynchronous exceptions unblocked, despite the outer 'block' "expecting" that asynchronous exceptions cannot be thrown. I would like to make two claims: 1. The new mask/restore interface is insufficient to "solve" this modularity problem, as *interruptible* operations can still be used to catch asynchronous exceptions. 2. Thus, we should provide an unblock combinator which can be used to catch asynchronous exceptions from a 'mask' (though not an 'uninterruptibleMask')--though it is doubtful if anyone should ever use 'mask' in the first place. 
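To recap the 2010 problem concretely: the removed combinators correspond roughly to the sketch below, phrased in terms of today's primitives (mask_ and GHC.IO.unsafeUnmask). These are approximations for illustration only, not the original definitions.

    import Control.Exception (mask_)
    import GHC.IO (unsafeUnmask)

    -- Rough stand-ins for the pre-GHC-7.0 combinators.
    block, unblock :: IO a -> IO a
    block   = mask_          -- disable asynchronous exceptions (interruptibly)
    unblock = unsafeUnmask   -- re-enable them, ignoring any enclosing 'block'

    -- So 'block (block (unblock io))' runs 'io' with asynchronous
    -- exceptions enabled, which is the modularity breakage Bas described.
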
Claim 1: Here is some code which reimplements 'unblock': import Control.Exception import Control.Concurrent import Control.Concurrent.MVar unblock :: IO a -> IO a unblock io = do m <- newEmptyMVar _ <- forkIO (io >>= putMVar m) takeMVar m The main idea is that 'takeMVar' is an interruptible operation: when it blocks, the thread can now receive asynchronous exceptions. In general, a thread can unmask exceptions by blocking. Here is a simple test-case: main = do let x = 10000000 -- Just do a bit of work tid <- myThreadId forkIO $ (threadDelay 10000 >> killThread tid) r <- mask $ \restore -> do -- restore $ do -- unblock $ do -- do something non-blocking evaluate (f x []) -- If the exception is delivered in a timely manner, -- shouldn't get here. print r f 0 r = r f n r = f (n-1) (n:r) With both restore and unblock commented, the ThreadKilled exception is delayed; uncommenting either restore or unblock causes the exception to be delivered. This admonition does not apply to uninterruptibleMask, for which there are no interruptible exceptions. Claim 2: Thus, I come to the conclusion that we were wrong to remove 'unblock', and that it is no worse than the ability for interruptible actions to catch asynchronous exceptions. You could very well argue that interruptible actions are a design flaw. Then you should use 'uninterruptibleMask' instead, which effectively removes the concept of interruptibility--and is thus modular. Indeed, Eyal Lotem proposed [2] that 'bracket' should instead use 'uninterruptibleMask', for precisely the reason that it is too easy to reenable asynchronous exceptions in 'mask'. But assuming that interruptible masks are a good idea (Simon Marlow has defended them as "a way avoid reasoning about asynchronous exceptions except at specific points, i.e., where you might block"), there should be an 'unblock' for this type of mask. It should be said that the absence of 'unblock' for 'uninterruptibleMask' only implies that a passed in IO action (e.g., the cleanup action in bracket) does not have access to the exceptions thrown to the current thread; it doesn't actually guarantee uninterruptibility, since the passed in IO action could always raise a normal exception. Haskell's type system is not up to the task of enforcing such invariants. Cheers, Edward [1] https://mail.haskell.org/pipermail/libraries/2010-March/013310.html https://mail.haskell.org/pipermail/libraries/2010-April/013420.html [2] https://mail.haskell.org/pipermail/libraries/2014-September/023675.html P.S. You were CC'ed to this mail because you participated in the original "Asynchronous exception wormholes kill modularity" discussion. P.P.S. I have some speculations about using uninterruptibleMask more frequently: it seems to me that there ought to be a variant of uninterruptibleMask that immediately raises an exception if the "uninterruptible" action blocks. This would probably of great assistance of noticing and eliminating blocking in uninterruptible code. From marlowsd at gmail.com Sat Jul 2 09:58:14 2016 From: marlowsd at gmail.com (Simon Marlow) Date: Sat, 2 Jul 2016 10:58:14 +0100 Subject: Interruptible exception wormholes kill modularity In-Reply-To: <1467429537-sup-6217@sabre> References: <1467429537-sup-6217@sabre> Message-ID: Hi Edward, On 2 July 2016 at 05:49, Edward Z. Yang wrote: > In 2010, in the thread "Asynchronous exception wormholes kill modularity" > [1], > Bas van Dijk observed that 'unblock :: IO a -> IO a' broke modularity, > as the sequence of calls 'block . block . 
unblock $ io' would result in > 'io' being run with asynchronous exceptions unblocked, despite the outer > 'block' "expecting" that asynchronous exceptions cannot be thrown. > > I would like to make two claims: > > 1. The new mask/restore interface is insufficient to "solve" > this modularity problem, as *interruptible* operations can > still be used to catch asynchronous exceptions. > > 2. Thus, we should provide an unblock combinator which > can be used to catch asynchronous exceptions from a 'mask' > (though not an 'uninterruptibleMask')--though it is > doubtful if anyone should ever use 'mask' in the first > place. > > Claim 1: Here is some code which reimplements 'unblock': > > import Control.Exception > import Control.Concurrent > import Control.Concurrent.MVar > > unblock :: IO a -> IO a > unblock io = do > m <- newEmptyMVar > _ <- forkIO (io >>= putMVar m) > takeMVar m > > This isn't really an implementation of unblock, because it doesn't enable fully-asynchronous exceptions inside io. If a stack overflow occurs, it won't be thrown, for example. Also, io will not be interrupted by an asynchronous exception thrown to the current thread. We already have a way to allow asynchronous exceptions to be thrown within a mask, it's called allowInterrupt: http://hackage.haskell.org/package/base-4.9.0.0/docs/Control-Exception.html#v:allowInterrupt I don't buy the claim that this breaks "modularity". The way to think about mask is that it disables fully-asynchronous exceptions, only allowing them to be thrown at certain well-defined points. This makes them tractable, it means you can write code without worrying that an async exception will pop up at any point. Inside a mask, the only way to get back to the state of fully asynchronous exceptions is to use the unblock action that mask gives you (provided you weren't already inside a mask). > The main idea is that 'takeMVar' is an interruptible operation: > when it blocks, the thread can now receive asynchronous exceptions. > In general, a thread can unmask exceptions by blocking. Here > is a simple test-case: > > main = do > let x = 10000000 -- Just do a bit of work > tid <- myThreadId > forkIO $ (threadDelay 10000 >> killThread tid) > r <- mask $ \restore -> do > -- restore $ do > -- unblock $ do > -- do something non-blocking > evaluate (f x []) > -- If the exception is delivered in a timely manner, > -- shouldn't get here. > print r > > f 0 r = r > f n r = f (n-1) (n:r) > > With both restore and unblock commented, the ThreadKilled > exception is delayed; uncommenting either restore or unblock > causes the exception to be delivered. > > This admonition does not apply to uninterruptibleMask, for > which there are no interruptible exceptions. > > Claim 2: Thus, I come to the conclusion that we were wrong > to remove 'unblock', and that it is no worse than the > ability for interruptible actions to catch asynchronous > exceptions. > > I don't think your argument undermines mask. > You could very well argue that interruptible actions are a design flaw. > I disagree - it's impossible to define withMVar without interruptible mask. > Then you should use 'uninterruptibleMask' instead, which effectively > removes the concept of interruptibility--and is thus modular. Indeed, > Eyal Lotem proposed [2] that 'bracket' should instead use > 'uninterruptibleMask', for precisely the reason that it is too easy to > reenable asynchronous exceptions in 'mask'. 
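For reference, bracket is structured roughly as follows -- a sketch mirroring base's Control.Exception.bracket, with strictness details elided and the name bracketSketch used only to avoid clashing with the real function:

    import Control.Exception (mask, onException)

    bracketSketch :: IO a -> (a -> IO b) -> (a -> IO c) -> IO c
    bracketSketch acquire release body =
      mask $ \restore -> do
        a <- acquire                                   -- acquire runs under the (interruptible) mask
        r <- restore (body a) `onException` release a  -- cleanup also runs under the mask
        _ <- release a
        return r
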
The problem he was talking about was to do with the interruptibility of the cleanup action in bracket, not the acquire, which really needs interruptible mask. The interruptibility of the cleanup is a complex issue with arguments on both sides. Michael Snoyman recently brought it up again in the context of his safe-exceptions library. We might yet change that - perhaps at the very least we should implement a catchUninterruptible# that behaves like catch# but applies uninterruptibleMask to the handler, and appropriate user-level wrappers. > But assuming that > interruptible masks are a good idea (Simon Marlow has defended them > as "a way avoid reasoning about asynchronous exceptions except > at specific points, i.e., where you might block"), there should > be an 'unblock' for this type of mask. > > It should be said that the absence of 'unblock' for > 'uninterruptibleMask' only implies that a passed in IO action (e.g., the > cleanup action in bracket) does not have access to the exceptions thrown > to the current thread; it doesn't actually guarantee uninterruptibility, > since the passed in IO action could always raise a normal exception. > Haskell's type system is not up to the task of enforcing such > invariants. > > Cheers, > Edward > > [1] > https://mail.haskell.org/pipermail/libraries/2010-March/013310.html > https://mail.haskell.org/pipermail/libraries/2010-April/013420.html > > [2] > https://mail.haskell.org/pipermail/libraries/2014-September/0 > > Cheers, > Simon > > 23675.html > > > P.S. You were CC'ed to this mail because you participated in the original > "Asynchronous exception wormholes kill modularity" discussion. > > P.P.S. I have some speculations about using uninterruptibleMask more > frequently: it seems to me that there ought to be a variant of > uninterruptibleMask that immediately raises an exception if > the "uninterruptible" action blocks. This would probably of > great assistance of noticing and eliminating blocking in > uninterruptible code. > Now that's an interesting idea! Cheers, Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From shumovichy at gmail.com Sat Jul 2 13:06:59 2016 From: shumovichy at gmail.com (Yuras Shumovich) Date: Sat, 02 Jul 2016 16:06:59 +0300 Subject: Interruptible exception wormholes kill modularity In-Reply-To: <1467429537-sup-6217@sabre> References: <1467429537-sup-6217@sabre> Message-ID: <1467464819.2456.4.camel@gmail.com> On Sat, 2016-07-02 at 00:49 -0400, Edward Z. Yang wrote: > > P.P.S. I have some speculations about using uninterruptibleMask more > frequently: it seems to me that there ought to be a variant of > uninterruptibleMask that immediately raises an exception if > the "uninterruptible" action blocks.  This would probably of > great assistance of noticing and eliminating blocking in > uninterruptible code. Could you please elaborate where it is useful. Any particular example? I'm interested because few years ago I proposed similar function, but in a bit different context. I needed it to make interruptible cleanup actions safe to use. Thanks, Yuras. From ezyang at mit.edu Sat Jul 2 16:25:07 2016 From: ezyang at mit.edu (Edward Z. 
Yang) Date: Sat, 02 Jul 2016 12:25:07 -0400 Subject: Interruptible exception wormholes kill modularity In-Reply-To: References: <1467429537-sup-6217@sabre> Message-ID: <1467469617-sup-4243@sabre> Excerpts from Simon Marlow's message of 2016-07-02 05:58:14 -0400: > > Claim 1: Here is some code which reimplements 'unblock': > > > > import Control.Exception > > import Control.Concurrent > > import Control.Concurrent.MVar > > > > unblock :: IO a -> IO a > > unblock io = do > > m <- newEmptyMVar > > _ <- forkIO (io >>= putMVar m) > > takeMVar m > > > > > This isn't really an implementation of unblock, because it doesn't enable > fully-asynchronous exceptions inside io. If a stack overflow occurs, it > won't be thrown, for example. Also, io will not be interrupted by an > asynchronous exception thrown to the current thread. Oh, that's true. I suppose you could work around this by passing on an asynchronous exception to a child thread that is unmasked using forkIOWithUnmask, although maybe you would consider that cheating? > We already have a way to allow asynchronous exceptions to be thrown within > a mask, it's called allowInterrupt: > http://hackage.haskell.org/package/base-4.9.0.0/docs/Control-Exception.html#v:allowInterrupt Well, it's different, right? allowInterrupt allows asynchronous exceptions to be thrown at a specific point of execution; unblock allows asynchronous exceptions to be thrown at any point while the inner IO action is executing. I don't see why you would allow the former without the latter. > I don't buy the claim that this breaks "modularity". The way to think > about mask is that it disables fully-asynchronous exceptions, only allowing > them to be thrown at certain well-defined points. This makes them > tractable, it means you can write code without worrying that an async > exception will pop up at any point. Inside a mask, the only way to get > back to the state of fully asynchronous exceptions is to use the unblock > action that mask gives you (provided you weren't already inside a mask). I suppose what I don't understand, then, is that if interruptible points are modular, I don't see why unblock isn't modular either; it's just a more convenient way of inserting allowInterrupt between every indivisible IO operation in user code. > > You could very well argue that interruptible actions are a design flaw. > > > > I disagree - it's impossible to define withMVar without interruptible mask. What about this version of withMVar using uninterruptible? (Assume no other producers.) withMVarUninterruptible :: MVar a -> (a -> IO b) -> IO b withMVarUninterruptible m io = uninterruptibleMask $ \restore -> do a <- restore (takeMVar m) b <- restore (io a) `onException` putMVar m a putMVar m a return b I don't think it is quite right, as there is race between when takeMVar unblocks, and when the uninterruptible mask is restored. But perhaps the primary utility of interruptible masks is to let you eliminate this race. > > Then you should use 'uninterruptibleMask' instead, which effectively > > removes the concept of interruptibility--and is thus modular. Indeed, > > Eyal Lotem proposed [2] that 'bracket' should instead use > > 'uninterruptibleMask', for precisely the reason that it is too easy to > > reenable asynchronous exceptions in 'mask'. > > > The problem he was talking about was to do with the interruptibility of the > cleanup action in bracket, not the acquire, which really needs > interruptible mask. 
The interruptibility of the cleanup is a complex issue > with arguments on both sides. Michael Snoyman recently brought it up again > in the context of his safe-exceptions library. We might yet change that - > perhaps at the very least we should implement a catchUninterruptible# that > behaves like catch# but applies uninterruptibleMask to the handler, and > appropriate user-level wrappers. Yes, it is complex, and I won't claim to know the right answer here. > > But assuming that > > interruptible masks are a good idea (Simon Marlow has defended them > > as "a way avoid reasoning about asynchronous exceptions except > > at specific points, i.e., where you might block"), there should > > be an 'unblock' for this type of mask. > > > > It should be said that the absence of 'unblock' for > > 'uninterruptibleMask' only implies that a passed in IO action (e.g., the > > cleanup action in bracket) does not have access to the exceptions thrown > > to the current thread; it doesn't actually guarantee uninterruptibility, > > since the passed in IO action could always raise a normal exception. > > Haskell's type system is not up to the task of enforcing such > > invariants. > > > > Cheers, > > Edward > > > > [1] > > https://mail.haskell.org/pipermail/libraries/2010-March/013310.html > > https://mail.haskell.org/pipermail/libraries/2010-April/013420.html > > > > [2] > > https://mail.haskell.org/pipermail/libraries/2014-September/0 > > > > Cheers, > > Simon > > > > 23675.html > > > > > > P.S. You were CC'ed to this mail because you participated in the original > > "Asynchronous exception wormholes kill modularity" discussion. > > > > P.P.S. I have some speculations about using uninterruptibleMask more > > frequently: it seems to me that there ought to be a variant of > > uninterruptibleMask that immediately raises an exception if > > the "uninterruptible" action blocks. This would probably of > > great assistance of noticing and eliminating blocking in > > uninterruptible code. > > > > Now that's an interesting idea! It is too bad that it is far too difficult to let Haskell-land iterate and try these things out: need RTS cooperation. Edward From ezyang at mit.edu Sat Jul 2 16:29:30 2016 From: ezyang at mit.edu (Edward Z. Yang) Date: Sat, 02 Jul 2016 12:29:30 -0400 Subject: Interruptible exception wormholes kill modularity In-Reply-To: <1467464819.2456.4.camel@gmail.com> References: <1467429537-sup-6217@sabre> <1467464819.2456.4.camel@gmail.com> Message-ID: <1467476738-sup-805@sabre> Excerpts from Yuras Shumovich's message of 2016-07-02 09:06:59 -0400: > On Sat, 2016-07-02 at 00:49 -0400, Edward Z. Yang wrote: > > > > P.P.S. I have some speculations about using uninterruptibleMask more > > frequently: it seems to me that there ought to be a variant of > > uninterruptibleMask that immediately raises an exception if > > the "uninterruptible" action blocks.  This would probably of > > great assistance of noticing and eliminating blocking in > > uninterruptible code. > > > Could you please elaborate where it is useful. Any particular example? You would use it in any situation you use an uninterruptibleMask. The point is that uninterruptible code is not supposed to take too long (the program is unresponsive in the meantime), so it's fairly bad news if inside uninterruptible code you block. The block = exception variant would help you find out when this occurred. Arguably, it would be more Haskelly if there was a static type discipline for distinguishing blocking and non-blocking IO operations. 
But some operations are only known to be (non-)blocking at runtime, e.g., takeMVar/putMVar, so a dynamic discipline is necessary. > I'm interested because few years ago I proposed similar function, but > in a bit different context. I needed it to make interruptible cleanup > actions safe to use. Could you elaborate more / post a link? Cheers, Edward From shumovichy at gmail.com Sat Jul 2 18:39:44 2016 From: shumovichy at gmail.com (Yuras Shumovich) Date: Sat, 02 Jul 2016 21:39:44 +0300 Subject: Interruptible exception wormholes kill modularity In-Reply-To: <1467476738-sup-805@sabre> References: <1467429537-sup-6217@sabre> <1467464819.2456.4.camel@gmail.com> <1467476738-sup-805@sabre> Message-ID: <1467484784.2456.22.camel@gmail.com> On Sat, 2016-07-02 at 12:29 -0400, Edward Z. Yang wrote: > Excerpts from Yuras Shumovich's message of 2016-07-02 09:06:59 -0400: > > On Sat, 2016-07-02 at 00:49 -0400, Edward Z. Yang wrote: > > > > > > P.P.S. I have some speculations about using uninterruptibleMask > > > more > > > frequently: it seems to me that there ought to be a variant of > > > uninterruptibleMask that immediately raises an exception if > > > the "uninterruptible" action blocks.  This would probably of > > > great assistance of noticing and eliminating blocking in > > > uninterruptible code. > > > > > > Could you please elaborate where it is useful. Any particular > > example? > > You would use it in any situation you use an uninterruptibleMask. > The point is that uninterruptible code is not supposed to take > too long (the program is unresponsive in the meantime), so it's > fairly bad news if inside uninterruptible code you block.  The > block = exception variant would help you find out when this occurred. Hmm, ununterruptibleMask is used when the code can block, but you don't want it to throw (async) exception. waitQSem is an example: https://hackage.haskell.org/package/base-4.9.0.0/docs/src/Control.Concurrent.QSem.html#waitQSem Basically, there are cases where code can block, yet you don't want it to be interrupted. Why to you need uninterruptibleMask when the code can't block anyway? It is a no-op in that case. > > Arguably, it would be more Haskelly if there was a static type > discipline for distinguishing blocking and non-blocking IO > operations. > But some operations are only known to be (non-)blocking at runtime, > e.g., takeMVar/putMVar, so a dynamic discipline is necessary. That is correct. In theory it would be useful to encode on type level whether IO operations can block, or can be iterrupted by async exception, or can fail with sync exception. Unfortunately it depends on runtime, so in practice it is less useful. > > > I'm interested because few years ago I proposed similar function, > > but > > in a bit different context. I needed it to make interruptible > > cleanup > > actions safe to use. > > Could you elaborate more / post a link? Sorry, I thought I added the link. I'm talking about this: http://blog.haskell-exists.com/yuras/posts/handling-async-exceptions-in-haskell-pushing-bracket-to-the-limits.html#example-6-pushing-to-the-limits The idea is to disable external async exceptions, but interrupt any interruptable operation on the way. The article describes the reason I need it. Thanks, Yuras. From tkn.akio at gmail.com Mon Jul 4 07:50:55 2016 From: tkn.akio at gmail.com (Akio Takano) Date: Mon, 4 Jul 2016 07:50:55 +0000 Subject: Moving ArgumentsDo forward In-Reply-To: References: Message-ID: Hi Simon, I'm sorry about the late reply. 
On 2 June 2016 at 07:19, Simon Peyton Jones wrote: > Akio > > Thanks for bringing back the ArgumentsDo question. > > My personal take on it is similar to Bardur: > >> AFAICT at best it's a *very* small improvement[1] and fractures >> Haskell syntax even more around extensions -- tooling etc. will need >> to understand even *more* syntax extensions[2]. > > The benefit to me seems slight. The cost is also modest, but it is not zero (see below), even given a complete implementation. ANY feature carries a cost that is borne by every subsequent implementor, in perpetuity. I understand your concern. I think this extension is worthwhile, but of course this should be ultimately decided on by people who actually maintain GHC. > > So I am a bit reluctant. > > These things are a judgement call, and we don't have a good process for making that decision. A few of us have been talking about putting forward a better process; it'll be a few weeks. > > Meanwhile, what to do about ArgumentDo? You say > > | I disagree that this is a small improvement, but I don't intend to > | debate this here. As you said, nothing has really changed since it was > | discussed before, and a lot of reasons for implementing this extension > | have been already pointed out. I don't have anything to add. > > Is there a wiki page that describes the proposal, and lists the "lot of reasons" why it would be a good thing? And lists any disadvantages? I'm not just erecting obstacles: the trouble with email is that it is long and discursive, so it's really hard to find all the relevant messages, and even if you do each message only makes sense if you read the long sequence. I made a wiki page: https://ghc.haskell.org/trac/ghc/wiki/ArgumentDo > > One question I have is this. Presumably > f do stmts > will be represented as > HsApp (HsVar f) (HsDo ...stmts...) > And should print without parens -- they are signalled by HsPar. So what about > (HsApp (HsVar f) (HsDo ...stmts1..)) (HsDo ..stmts2..) > How does that pretty-print. I suppose it should be > f do stmts1 > do stmts2 > That is, it must use layout. But at the moment the pretty printer doesn't do that. It looks like the pretty printer always prints curly braces around do statements (ppr_do_stmts in hsSyn/HsExpr.hs), so perhaps this is not an issue? - Akio From tkn.akio at gmail.com Mon Jul 4 08:03:08 2016 From: tkn.akio at gmail.com (Akio Takano) Date: Mon, 4 Jul 2016 08:03:08 +0000 Subject: Moving ArgumentsDo forward In-Reply-To: References: Message-ID: Hi Andrew, On 6 June 2016 at 16:37, Andrew Gibiansky wrote: > As the author of the proposal and extension, I'd like to clarify that the > change was abandoned per se because of how controversial the change was. [0] > [1] [2] Thank you for the clarification. I hope you don't mind that I pick up your proposal and use your code as a starting point. > > This is not to say that we should not continue to discuss this change, but > if we do so, make sure that you first read through the previous discussion > -- it was quite extensive! > > Specifically, I became unconvinced that it was worth the effort to make as > an extension, given the reasons against it (mainly, extra work for GHC, > hindent, haskell-src-exts, etc etc); I think this along with a few other > things (trailing commas!) could make a significant improvement to cosmetic > Haskell syntax, but perhaps one extension per character is a bit much for > that. 
That said I have no idea how else a mythical Haskell' could get a > cleaned up syntax if not through first being implemented as a GHC extension. I actually found the response from people at haskell-cafe rather encouraging. To me a 50% support seems high enough to justify an implementation. > > Finally, you may be interested in ghc-reskin [3], which was a (slightly > tongue-in-cheek) response to a lot of the discussion caused by this > extension last time, and could potentially be made into a production-ready > tool / Haskell' syntax if anyone cared strongly to do so. Thank you. Unfortunately for my uses a separate preprocessor probably would have too much overhead. > > [0] > https://www.reddit.com/r/haskell/comments/447bnw/does_argument_do_have_a_future/ > [1] > https://mail.haskell.org/pipermail/haskell-cafe/2015-September/121217.html > [2] https://ghc.haskell.org/trac/ghc/ticket/10843 > [3] https://github.com/gibiansky/ghc-reskin > > Best, > Andrew > > On Wed, Jun 1, 2016 at 3:26 PM Akio Takano wrote: >> >> Hi Bardur, >> >> On 2 June 2016 at 00:09, Bardur Arantsson wrote: >> > On 06/01/2016 01:48 PM, Akio Takano wrote: >> >> Hi, >> >> >> >> Ticket #10843 [0] proposes an extension, ArgumentsDo, which I would >> >> love to see in GHC. It's a small syntactic extension that allows do, >> >> case, if and lambda blocks as function arguments, without parentheses. >> >> However, its differential revision [1] has been abandoned, citing a >> >> mixed response from the community. A message [2] on the ticket >> >> summarizes a thread in haskell-cafe on this topic. >> >> >> >> I, for one, think adding this extension is worthwhile, because a >> >> significant number of people support it. Also, given how some people >> >> seem to feel ambivalent about this change, I believe actually allowing >> >> people to try it makes it clearer whether it is a good idea. >> >> >> >> Thus I'm wondering: is there any chance that this gets merged? If so, >> >> I'm willing to work on whatever is remaining to get the change merged. >> >> >> > >> > What's changed since it was last discussed? >> >> Nothing has really changed. I'm just trying to argue that the current >> level of community support is good enough to justify an >> implementation. >> >> Please note that the previous Differential revision was abandoned by >> the author. It was *not* rejected due to a lack of support. Hence my >> question: if properly implemented, does this feature have any chance >> of getting merged in, or is it regarded too controversial? >> >> > I don't think the objections >> > were centered in the implementation, so I don't see what "whatever is >> > remaining to get the change merged" would be. >> >> I'm referring the points mentioned in the review comments in the >> Differential revision. For example this change needs an update to the >> User's Guide. >> >> > >> > AFAICT at best it's a *very* small improvement[1] and fractures Haskell >> > syntax even more around extensions -- tooling etc. will need to >> > understand even *more* syntax extensions[2]. >> >> I disagree that this is a small improvement, but I don't intend to >> debate this here. As you said, nothing has really changed since it was >> discussed before, and a lot of reasons for implementing this extension >> have been already pointed out. I don't have anything to add. 
>> >> Regarding tooling, my understanding is that most tools that need to >> understand Haskell (this includes ghc-mod and hdevtools) use either >> the GHC API or haskell-src-exts, so I don't think this extension would >> need changes in many places. >> >> Regards, >> Takano Akio >> >> > >> > Regards, >> > >> > [1] If you grant that it is indeed an improvment, which I, personally, >> > don't think it is. >> > >> > [2] I think most people agree that this is something that should perhaps >> > be handled by something like >> > https://github.com/haskell/haskell-ide-engine so that it would only need >> > to be implemented once, but there's not even an alpha release yet, so >> > that particular objection stands, AFAICT. >> > >> > >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -- > > – Andrew > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From marlowsd at gmail.com Mon Jul 4 08:17:29 2016 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 4 Jul 2016 09:17:29 +0100 Subject: Interruptible exception wormholes kill modularity In-Reply-To: <1467469617-sup-4243@sabre> References: <1467429537-sup-6217@sabre> <1467469617-sup-4243@sabre> Message-ID: On 2 July 2016 at 17:25, Edward Z. Yang wrote: > Excerpts from Simon Marlow's message of 2016-07-02 05:58:14 -0400: > > > Claim 1: Here is some code which reimplements 'unblock': > > > > > > import Control.Exception > > > import Control.Concurrent > > > import Control.Concurrent.MVar > > > > > > unblock :: IO a -> IO a > > > unblock io = do > > > m <- newEmptyMVar > > > _ <- forkIO (io >>= putMVar m) > > > takeMVar m > > > > > > > > This isn't really an implementation of unblock, because it doesn't enable > > fully-asynchronous exceptions inside io. If a stack overflow occurs, it > > won't be thrown, for example. Also, io will not be interrupted by an > > asynchronous exception thrown to the current thread. > > Oh, that's true. I suppose you could work around this by passing > on an asynchronous exception to a child thread that is unmasked > using forkIOWithUnmask, although maybe you would consider that > cheating? > Yes, you can use forkIOWithUnmask as a way to break out of mask. Perhaps for that reason it should have "unsafe" in the name, but I think it's hard to use it by accident. I actually do agree with you that the "modularity" provided by mask isn't really useful. But my reasoning is a bit different. The caller of mask is saying "I want asynchronous exceptions to only occur at known places.". Those known places are interruptible operations, and library code (because we can't know whether library code performs an interruptible operation or not). From the point of view of the caller of mask, they cannot tell the difference between library code that invokes an interruptible operation, and library code that calls "unblock". So it would be perfectly fine to provide an "unblock" that re-enables fully asynchronous exceptions. 
(indeed I think this was kind of what I had in mind with the original block/unblock, but I didn't articulate the argument clearly enough when everyone was asking for "mask") However, things are a bit different with uninterruptibleMask. Here the caller is saying "I don't expect to see *any* asynchronous exceptions, either in my code or from library code". So clearly an unblock cannot undo an uninterruptibleMask. Having said all this, I don't think the current API is necessarily bad, it just provides more guarantees than we really need, and perhaps it's a bit less efficient than it could be, due to the need to pass the IO action to mask. But we would still need to do this for uninterruptibleMask, and having the API of uninterruptibleMask be the same as mask is good. > We already have a way to allow asynchronous exceptions to be thrown within > > a mask, it's called allowInterrupt: > > > http://hackage.haskell.org/package/base-4.9.0.0/docs/Control-Exception.html#v:allowInterrupt > > Well, it's different, right? allowInterrupt allows asynchronous > exceptions to > be thrown at a specific point of execution; unblock allows asynchronous > exceptions to be thrown at any point while the inner IO action is > executing. I don't see why you would allow the former without the > latter. > Ok, so the point I was trying to make was that the idea of blocking to allow asynchronous exceptions to be thrown inside a mask is fully sanctioned, and we made an API for it. But you're quite right that it's not exactly the same as unblock. > > > You could very well argue that interruptible actions are a design flaw. > > > > > > > I disagree - it's impossible to define withMVar without interruptible > mask. > > What about this version of withMVar using uninterruptible? (Assume > no other producers.) > > withMVarUninterruptible :: MVar a -> (a -> IO b) -> IO b > withMVarUninterruptible m io = > uninterruptibleMask $ \restore -> do > a <- restore (takeMVar m) > b <- restore (io a) `onException` putMVar m a > putMVar m a > return b > > I don't think it is quite right, as there is race between when > takeMVar unblocks, and when the uninterruptible mask is restored. > But perhaps the primary utility of interruptible masks is to > let you eliminate this race. > Exactly! This race condition is the reason for interruptible operations. [snip] > > Edward > Cheers Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Jul 4 08:28:37 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 4 Jul 2016 08:28:37 +0000 Subject: Linker.c broken In-Reply-To: <1467417700-sup-5082@sabre> References: <616614ea7cf743f6ba6abb532525c69a@DB4PR30MB030.064d.mgd.msft.net> <1467417700-sup-5082@sabre> Message-ID: That was it. Simon M: would you care to fix? Or should I push a revert? Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Edward Z. Yang | Sent: 02 July 2016 01:02 | To: ghc-devs | Subject: Re: Linker.c broken | | I'm guessing it's: | | commit 6377757918c1e7f63638d6f258cad8d5f02bb6a7 | Author: Simon Marlow | Date: Wed Jun 29 21:50:18 2016 +0100 | | Linker: some extra debugging / logging | | which added ghci_find. | | Edward | | Excerpts from Simon Peyton Jones via ghc-devs's message of 2016-07-01 | 18:51:20 -0400: | > Aargh! Windows is broken /again/. Some mess-up in Linker.c. | > I have not yet tried reverting recent patches. Might someone fix | please? 
| > It’s really helpful to validate on Windows when making RTS changes. | > Simon | > | > | > | > rts\Linker.c: In function 'ghci_find': | > | > | > | > rts\Linker.c:1482:52: error: | > | > error: pointer type mismatch in conditional expression [- | Werror] | > | > oc->archiveMemberName : oc- | >fileName); | > | > ^ | > | > | > | > rts\Linker.c:1480:28: error: | > | > error: format '%ls' expects argument of type 'wchar_t *', but | argument 3 has type 'void *' [-Werror=format=] | > | > debugBelch("%p is in %" PATH_FMT, addr, | > | > ^ | > | > "inplace/bin/ghc-stage1.exe" -optc-fno-stack-protector -optc-Wall - | optc-Werror -optc-Wall -optc-Wextra -optc-Wstrict-prototypes -optc- | Wmissing-prototypes -optc-Wmissing-declarations -optc-Winline -optc- | Waggregate-return -optc-Wpointer-arith -optc-Wmissing-noreturn -optc- | Wnested-externs -optc-Wredundant-decls -optc-Iincludes -optc- | Iincludes/dist -optc-Iincludes/dist-derivedconstants/header -optc- | Iincludes/dist-ghcconstants/header -optc-Irts -optc-Irts/dist/build - | optc-DCOMPILING_RTS -optc-fno-strict-aliasing -optc-fno-common -optc- | Irts/dist/build/./autogen -optc-Wno-error=inline -optc-O2 -optc-fomit- | frame-pointer -optc-g -optc-fno-omit-frame-pointer -optc-g -optc-O0 - | optc-DRtsWay=\"rts_debug\" -optc-DWINVER=0x06000100 -static -optc- | DDEBUG -ticky -DTICKY_TICKY -O0 -H64m -Wall -fllvm-fill-undef-with- | garbage -Werror -Iincludes -Iincludes/dist -Iincludes/dist- | derivedconstants/header -Iincludes/dist-ghcconstants/header -Irts - | Irts/dist/build -DCOMPILING_RTS -this-unit-id rts -dcmm-lint -i - | irts -irts/dist/build -Irts/dist/build -irts/dist/build/./autogen - | Irts/dist/build/./autogen -O2 -O0 -Wnoncanonical-monad- | instances -c rts/RaiseAsync.c -o rts/dist/build/RaiseAsync.debug_o | > | > | > | > rts\Linker.c:1483:28: error: | > | > error: format '%lx' expects argument of type 'long unsigned | int', but argument 3 has type 'long long unsigned int' [- | Werror=format=] | > | > debugBelch(", section %d, offset %lx\n", i, | > | > ^ | > | > | > | > In file included from rts\Linker.c:13:0: error: | > | > rts\Linker.c: In function 'ocTryLoad': | > | > | > | > rts\Linker.c:2563:55: error: | > | > error: pointer type mismatch in conditional expression [- | Werror] | > | > oc->archiveMemberName : oc- | >fileName)); | > | > ^ | > | > | > | > includes\Rts.h:300:53: error: | > | > note: in definition of macro 'IF_DEBUG' | > | > #define IF_DEBUG(c,s) if (RtsFlags.DebugFlags.c) { s; } | > | > ^ | > | > | > | > rts\Linker.c:2561:33: error: | > | > error: format '%ls' expects argument of type 'wchar_t *', but | argument 2 has type 'void *' [-Werror=format=] | > | > IF_DEBUG(linker, debugBelch("Resolving %" PATH_FMT "\n", | > | > ^ | > | > | > | > includes\Rts.h:300:53: error: | > | > note: in definition of macro 'IF_DEBUG' | > | > #define IF_DEBUG(c,s) if (RtsFlags.DebugFlags.c) { s; } | > | > ^ | > | > cc1.exe: all warnings being treated as errors | > | > `gcc.exe' failed in phase `C Compiler'. (Exit code: 1) | > | > rts/ghc.mk:255: recipe for target 'rts/dist/build/Linker.debug_o' | failed | > | > make[1]: *** [rts/dist/build/Linker.debug_o] Error 1 | > | > make[1]: *** Waiting for unfinished jobs.... 
| > | > Makefile:129: recipe for target 'all' failed | > | > make: *** [all] Error 2 | > | > /cygdrive/c/code/HEAD$ | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | https://na01.safelinks.protection.outlook.com/?url=http%3a%2f%2fmail.h | askell.org%2fcgi-bin%2fmailman%2flistinfo%2fghc- | devs&data=01%7c01%7csimonpj%40064d.mgd.microsoft.com%7c5be115baccfd40d | 0352a08d3a20c257d%7c72f988bf86f141af91ab2d7cd011db47%7c1&sdata=ZJFHPuP | zkPfac4Upu3w6r4bSawNwxbV4p%2fr%2f69vqh2o%3d From mle+hs at mega-nerd.com Mon Jul 4 08:34:47 2016 From: mle+hs at mega-nerd.com (Erik de Castro Lopo) Date: Mon, 4 Jul 2016 18:34:47 +1000 Subject: Linker.c broken In-Reply-To: <616614ea7cf743f6ba6abb532525c69a@DB4PR30MB030.064d.mgd.msft.net> References: <616614ea7cf743f6ba6abb532525c69a@DB4PR30MB030.064d.mgd.msft.net> Message-ID: <20160704183447.a59d46507032a6517050e22c@mega-nerd.com> Simon Peyton Jones via ghc-devs wrote: > rts\Linker.c:1480:28: error: > > error: format '%ls' expects argument of type 'wchar_t *', but argument 3 has type 'void *' [-Werror=format=] > > debugBelch("%p is in %" PATH_FMT, addr, I get an error on code from this commit on arm/linux. Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ From marlowsd at gmail.com Mon Jul 4 08:37:01 2016 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 4 Jul 2016 09:37:01 +0100 Subject: Linker.c broken In-Reply-To: References: <616614ea7cf743f6ba6abb532525c69a@DB4PR30MB030.064d.mgd.msft.net> <1467417700-sup-5082@sabre> Message-ID: I will fix it, sorry about this. Unfortunately I can't really add a Windows validate into my workflow because it would mean rebooting my laptop into Windows and not doing anything else for several hours. We need some CI support for Windows - Ben/Austin any thoughts on this? On 4 July 2016 at 09:28, Simon Peyton Jones wrote: > That was it. > > Simon M: would you care to fix? Or should I push a revert? > > Simon > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > | Edward Z. Yang > | Sent: 02 July 2016 01:02 > | To: ghc-devs > | Subject: Re: Linker.c broken > | > | I'm guessing it's: > | > | commit 6377757918c1e7f63638d6f258cad8d5f02bb6a7 > | Author: Simon Marlow > | Date: Wed Jun 29 21:50:18 2016 +0100 > | > | Linker: some extra debugging / logging > | > | which added ghci_find. > | > | Edward > | > | Excerpts from Simon Peyton Jones via ghc-devs's message of 2016-07-01 > | 18:51:20 -0400: > | > Aargh! Windows is broken /again/. Some mess-up in Linker.c. > | > I have not yet tried reverting recent patches. Might someone fix > | please? > | > It’s really helpful to validate on Windows when making RTS changes. 
> | > Simon > | > > | > > | > > | > rts\Linker.c: In function 'ghci_find': > | > > | > > | > > | > rts\Linker.c:1482:52: error: > | > > | > error: pointer type mismatch in conditional expression [- > | Werror] > | > > | > oc->archiveMemberName : oc- > | >fileName); > | > > | > ^ > | > > | > > | > > | > rts\Linker.c:1480:28: error: > | > > | > error: format '%ls' expects argument of type 'wchar_t *', but > | argument 3 has type 'void *' [-Werror=format=] > | > > | > debugBelch("%p is in %" PATH_FMT, addr, > | > > | > ^ > | > > | > "inplace/bin/ghc-stage1.exe" -optc-fno-stack-protector -optc-Wall - > | optc-Werror -optc-Wall -optc-Wextra -optc-Wstrict-prototypes -optc- > | Wmissing-prototypes -optc-Wmissing-declarations -optc-Winline -optc- > | Waggregate-return -optc-Wpointer-arith -optc-Wmissing-noreturn -optc- > | Wnested-externs -optc-Wredundant-decls -optc-Iincludes -optc- > | Iincludes/dist -optc-Iincludes/dist-derivedconstants/header -optc- > | Iincludes/dist-ghcconstants/header -optc-Irts -optc-Irts/dist/build - > | optc-DCOMPILING_RTS -optc-fno-strict-aliasing -optc-fno-common -optc- > | Irts/dist/build/./autogen -optc-Wno-error=inline -optc-O2 -optc-fomit- > | frame-pointer -optc-g -optc-fno-omit-frame-pointer -optc-g -optc-O0 - > | optc-DRtsWay=\"rts_debug\" -optc-DWINVER=0x06000100 -static -optc- > | DDEBUG -ticky -DTICKY_TICKY -O0 -H64m -Wall -fllvm-fill-undef-with- > | garbage -Werror -Iincludes -Iincludes/dist -Iincludes/dist- > | derivedconstants/header -Iincludes/dist-ghcconstants/header -Irts - > | Irts/dist/build -DCOMPILING_RTS -this-unit-id rts -dcmm-lint -i - > | irts -irts/dist/build -Irts/dist/build -irts/dist/build/./autogen - > | Irts/dist/build/./autogen -O2 -O0 -Wnoncanonical-monad- > | instances -c rts/RaiseAsync.c -o rts/dist/build/RaiseAsync.debug_o > | > > | > > | > > | > rts\Linker.c:1483:28: error: > | > > | > error: format '%lx' expects argument of type 'long unsigned > | int', but argument 3 has type 'long long unsigned int' [- > | Werror=format=] > | > > | > debugBelch(", section %d, offset %lx\n", i, > | > > | > ^ > | > > | > > | > > | > In file included from rts\Linker.c:13:0: error: > | > > | > rts\Linker.c: In function 'ocTryLoad': > | > > | > > | > > | > rts\Linker.c:2563:55: error: > | > > | > error: pointer type mismatch in conditional expression [- > | Werror] > | > > | > oc->archiveMemberName : oc- > | >fileName)); > | > > | > ^ > | > > | > > | > > | > includes\Rts.h:300:53: error: > | > > | > note: in definition of macro 'IF_DEBUG' > | > > | > #define IF_DEBUG(c,s) if (RtsFlags.DebugFlags.c) { s; } > | > > | > ^ > | > > | > > | > > | > rts\Linker.c:2561:33: error: > | > > | > error: format '%ls' expects argument of type 'wchar_t *', but > | argument 2 has type 'void *' [-Werror=format=] > | > > | > IF_DEBUG(linker, debugBelch("Resolving %" PATH_FMT "\n", > | > > | > ^ > | > > | > > | > > | > includes\Rts.h:300:53: error: > | > > | > note: in definition of macro 'IF_DEBUG' > | > > | > #define IF_DEBUG(c,s) if (RtsFlags.DebugFlags.c) { s; } > | > > | > ^ > | > > | > cc1.exe: all warnings being treated as errors > | > > | > `gcc.exe' failed in phase `C Compiler'. (Exit code: 1) > | > > | > rts/ghc.mk:255: recipe for target 'rts/dist/build/Linker.debug_o' > | failed > | > > | > make[1]: *** [rts/dist/build/Linker.debug_o] Error 1 > | > > | > make[1]: *** Waiting for unfinished jobs.... 
> | > > | > Makefile:129: recipe for target 'all' failed > | > > | > make: *** [all] Error 2 > | > > | > /cygdrive/c/code/HEAD$ > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | https://na01.safelinks.protection.outlook.com/?url=http%3a%2f%2fmail.h > | askell.org%2fcgi-bin%2fmailman%2flistinfo%2fghc- > | devs&data=01%7c01%7csimonpj%40064d.mgd.microsoft.com%7c5be115baccfd40d > | 0352a08d3a20c257d%7c72f988bf86f141af91ab2d7cd011db47%7c1&sdata=ZJFHPuP > | zkPfac4Upu3w6r4bSawNwxbV4p%2fr%2f69vqh2o%3d > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mle+hs at mega-nerd.com Mon Jul 4 08:45:11 2016 From: mle+hs at mega-nerd.com (Erik de Castro Lopo) Date: Mon, 4 Jul 2016 18:45:11 +1000 Subject: Linker.c broken In-Reply-To: References: <616614ea7cf743f6ba6abb532525c69a@DB4PR30MB030.064d.mgd.msft.net> <1467417700-sup-5082@sabre> Message-ID: <20160704184511.8ace8ea18af4fb8915307eb0@mega-nerd.com> Simon Marlow wrote: > I will fix it, sorry about this. Unfortunately I can't really add a > Windows validate into my workflow because it would mean rebooting my laptop > into Windows and not doing anything else for several hours. Even building as 32 bit would have shaken out a bug in the format specifiers to the debugBelch statement. Are you on Linux? I use a 32 bit debian chroot on my otherwise 64 bit Debian system. > We need some CI support for Windows - Ben/Austin any thoughts on this? That would be an improvement, but it doesn't help for other OSes like Aix and Solaris or other CPUs. I also noticed that patch bypassed Phabricator. I assume that was a mistake. I've done it myself. We need to be particularly careful with the RTS code because its so fragile. It needs to be build and tested on a wide variety of systems. Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ From marlowsd at gmail.com Mon Jul 4 09:20:11 2016 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 4 Jul 2016 10:20:11 +0100 Subject: Linker.c broken In-Reply-To: <20160704184511.8ace8ea18af4fb8915307eb0@mega-nerd.com> References: <616614ea7cf743f6ba6abb532525c69a@DB4PR30MB030.064d.mgd.msft.net> <1467417700-sup-5082@sabre> <20160704184511.8ace8ea18af4fb8915307eb0@mega-nerd.com> Message-ID: On 4 July 2016 at 09:45, Erik de Castro Lopo wrote: > Simon Marlow wrote: > > > I will fix it, sorry about this. Unfortunately I can't really add a > > Windows validate into my workflow because it would mean rebooting my > laptop > > into Windows and not doing anything else for several hours. > > Even building as 32 bit would have shaken out a bug in the format > specifiers > to the debugBelch statement. > > Are you on Linux? I use a 32 bit debian chroot on my otherwise 64 bit > Debian > system. > Building on 32-bit would flush out some bugs, but not others. Yes I could use a chroot, or a VM, and I could have Windows in a VM. But what about OS X? In fact validate on a single platform already ties up my machine for an hour, so the more platforms we have to validate the less practical it is to make small changes. I think more automation is the only good solution to this. So, I rely on CI for most of my commits, whether it's Phabricator (now fixed!) or Travis. > > We need some CI support for Windows - Ben/Austin any thoughts on this? > > That would be an improvement, but it doesn't help for other OSes like Aix > and Solaris or other CPUs. 
> > I also noticed that patch bypassed Phabricator. I assume that was a > mistake. I've > done it myself. We need to be particularly careful with the RTS code > because > its so fragile. It needs to be build and tested on a wide variety of > systems. > It was actually intentional. The patch validated on Travis: https://travis-ci.org/simonmar/ghc/builds/141572355 and I didn't think it was worth having it reviewed (but if you want to review all linker patches I'd be happy to put them on Phabricator in the future). Cheers Simon > Erik > -- > ---------------------------------------------------------------------- > Erik de Castro Lopo > http://www.mega-nerd.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mle+hs at mega-nerd.com Mon Jul 4 09:43:09 2016 From: mle+hs at mega-nerd.com (Erik de Castro Lopo) Date: Mon, 4 Jul 2016 19:43:09 +1000 Subject: Linker.c broken In-Reply-To: References: <616614ea7cf743f6ba6abb532525c69a@DB4PR30MB030.064d.mgd.msft.net> <1467417700-sup-5082@sabre> <20160704184511.8ace8ea18af4fb8915307eb0@mega-nerd.com> Message-ID: <20160704194309.43a286709fcdd73b885b0e83@mega-nerd.com> Simon Marlow wrote: > It was actually intentional. The patch validated on Travis: > https://travis-ci.org/simonmar/ghc/builds/141572355 and I didn't think it > was worth having it reviewed (but if you want to review all linker patches > I'd be happy to put them on Phabricator in the future). I *try* (time permitting) to review all linker patches. I've just started a new job (coding Haskell), which means I've got a bit less time to hack on GHC. I have a Phab rule to notify me on all patches that touch Linker.c. I try to look at all of them, but sometimes they have been accepted by others and committed before I even look at them. For the ones that are not accepted and committed before I get to them, I often test them on PowerPC or Arm, and I'm also willing to keep on doing this (time permitting). Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ From ben at well-typed.com Mon Jul 4 10:36:37 2016 From: ben at well-typed.com (Ben Gamari) Date: Mon, 04 Jul 2016 12:36:37 +0200 Subject: Linker.c broken In-Reply-To: References: <616614ea7cf743f6ba6abb532525c69a@DB4PR30MB030.064d.mgd.msft.net> <1467417700-sup-5082@sabre> Message-ID: <87a8hxk7uy.fsf@smart-cactus.org> Simon Marlow writes: > I will fix it, sorry about this. Unfortunately I can't really add a > Windows validate into my workflow because it would mean rebooting my laptop > into Windows and not doing anything else for several hours. We need some > CI support for Windows - Ben/Austin any thoughts on this? > I agree; this would be great. I have a Windows machine which I'd be happy to set up as a builder, although I'm afraid it's behind NAT, so integration with Harbormaster may require some tunneling. On that note, how is the Harbormaster effort going, Austin? It appears that Differentials are still not being built. Is there anything I can do to help here? Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From ben at smart-cactus.org Mon Jul 4 10:38:29 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 04 Jul 2016 12:38:29 +0200 Subject: Linker.c broken In-Reply-To: <20160704184511.8ace8ea18af4fb8915307eb0@mega-nerd.com> References: <616614ea7cf743f6ba6abb532525c69a@DB4PR30MB030.064d.mgd.msft.net> <1467417700-sup-5082@sabre> <20160704184511.8ace8ea18af4fb8915307eb0@mega-nerd.com> Message-ID: <877fd1k7ru.fsf@smart-cactus.org> Erik de Castro Lopo writes: > Simon Marlow wrote: > >> I will fix it, sorry about this. Unfortunately I can't really add a >> Windows validate into my workflow because it would mean rebooting my laptop >> into Windows and not doing anything else for several hours. > > Even building as 32 bit would have shaken out a bug in the format specifiers > to the debugBelch statement. > > Are you on Linux? I use a 32 bit debian chroot on my otherwise 64 bit Debian > system. > Indeed; and I think it would also be worthwhile setting up at least a nightly build validating 32-bit Linux. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From mle+hs at mega-nerd.com Mon Jul 4 11:04:49 2016 From: mle+hs at mega-nerd.com (Erik de Castro Lopo) Date: Mon, 4 Jul 2016 21:04:49 +1000 Subject: Linker.c broken In-Reply-To: <877fd1k7ru.fsf@smart-cactus.org> References: <616614ea7cf743f6ba6abb532525c69a@DB4PR30MB030.064d.mgd.msft.net> <1467417700-sup-5082@sabre> <20160704184511.8ace8ea18af4fb8915307eb0@mega-nerd.com> <877fd1k7ru.fsf@smart-cactus.org> Message-ID: <20160704210449.fed95571156ddd1cfe86ba4c@mega-nerd.com> Ben Gamari wrote: > Indeed; and I think it would also be worthwhile setting up at least > a nightly build validating 32-bit Linux. If it's not integrated with the CI, I'm not sure how useful that is. As you may remember I have a Jenkins instance. Once a day it polls git and if there are new commits since the last time it was polled, it builds the following:

Arch     OS      BuildFlavour
x86_64   linux   perf
x86_64   darwin  perf
x86_64   linux   perf-llvm
x86_64   linux   devel2/unregisterised
x86_64   linux   devel2 with Clang
x86_64   Linux   perf
powerpc  linux   devel2
powerpc  linux   devel2/unregisterised
armhf    linux   devel2
armhf    linux   devel2/unregisterised

as well as cross-compiling from x86_64/linux to armhf/linux and arm64/linux. This does catch the occasional problem, but when something does show up, fixing it is a bigger problem. For example, I simply have not had time to look at: https://ghc.haskell.org/trac/ghc/ticket/12238 Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ From m at tweag.io Mon Jul 4 11:15:29 2016 From: m at tweag.io (Boespflug, Mathieu) Date: Mon, 4 Jul 2016 13:15:29 +0200 Subject: Linker.c broken In-Reply-To: <87a8hxk7uy.fsf@smart-cactus.org> References: <616614ea7cf743f6ba6abb532525c69a@DB4PR30MB030.064d.mgd.msft.net> <1467417700-sup-5082@sabre> <87a8hxk7uy.fsf@smart-cactus.org> Message-ID: On 4 July 2016 at 12:36, Ben Gamari wrote: > Simon Marlow writes: > >> I will fix it, sorry about this. Unfortunately I can't really add a >> Windows validate into my workflow because it would mean rebooting my laptop >> into Windows and not doing anything else for several hours. We need some >> CI support for Windows - Ben/Austin any thoughts on this?
>> > I agree; this would be great. I have a Windows machine which I'd be > happy setup as a builder although I'm afraid it's behind NAT, so > integration with Harbormaster may require some tunneling. Just a suggestion - the easiest and most reliable would probably be to simply use Appveyor for this. They offer a hosted and fully managed CI service very similar to Travis CI - only difference being it runs tests on Windows boxes. And just like Travis CI, it's free! The advantage of a hosted CI service is that no one except Appveyor need to worry about keeping the build bot highly available. Only downside is their machines in the free tier can be a bit slow. But that's a problem that can be iterated on as the need arises. Best, -- Mathieu Boespflug Founder at http://tweag.io. From ben at well-typed.com Mon Jul 4 11:26:10 2016 From: ben at well-typed.com (Ben Gamari) Date: Mon, 04 Jul 2016 13:26:10 +0200 Subject: Linker.c broken In-Reply-To: References: <616614ea7cf743f6ba6abb532525c69a@DB4PR30MB030.064d.mgd.msft.net> <1467417700-sup-5082@sabre> <87a8hxk7uy.fsf@smart-cactus.org> Message-ID: <871t39k5kd.fsf@smart-cactus.org> "Boespflug, Mathieu" writes: > On 4 July 2016 at 12:36, Ben Gamari wrote: >> Simon Marlow writes: >> >>> I will fix it, sorry about this. Unfortunately I can't really add a >>> Windows validate into my workflow because it would mean rebooting my laptop >>> into Windows and not doing anything else for several hours. We need some >>> CI support for Windows - Ben/Austin any thoughts on this? >>> >> I agree; this would be great. I have a Windows machine which I'd be >> happy setup as a builder although I'm afraid it's behind NAT, so >> integration with Harbormaster may require some tunneling. > > Just a suggestion - the easiest and most reliable would probably be to > simply use Appveyor for this. They offer a hosted and fully managed CI > service very similar to Travis CI - only difference being it runs > tests on Windows boxes. And just like Travis CI, it's free! > > The advantage of a hosted CI service is that no one except Appveyor > need to worry about keeping the build bot highly available. > > Only downside is their machines in the free tier can be a bit slow. > But that's a problem that can be iterated on as the need arises. > I've noticed that several of the core libraries rely on Appveyor with good results. However, I had assumed that GHC would exceed the maximum build time of their free tier since the build takes a few hours on my Windows box. It seems that Appveyor has a one-hour build duration limit, similar to Travis. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From m at tweag.io Mon Jul 4 11:45:15 2016 From: m at tweag.io (Boespflug, Mathieu) Date: Mon, 4 Jul 2016 13:45:15 +0200 Subject: Linker.c broken In-Reply-To: <871t39k5kd.fsf@smart-cactus.org> References: <616614ea7cf743f6ba6abb532525c69a@DB4PR30MB030.064d.mgd.msft.net> <1467417700-sup-5082@sabre> <87a8hxk7uy.fsf@smart-cactus.org> <871t39k5kd.fsf@smart-cactus.org> Message-ID: True. Worth a try asking them what limits they're willing to lift for a high profile open source project like GHC, I think. -- Mathieu Boespflug Founder at http://tweag.io. On 4 July 2016 at 13:26, Ben Gamari wrote: > "Boespflug, Mathieu" writes: > >> On 4 July 2016 at 12:36, Ben Gamari wrote: >>> Simon Marlow writes: >>> >>>> I will fix it, sorry about this. 
Unfortunately I can't really add a >>>> Windows validate into my workflow because it would mean rebooting my laptop >>>> into Windows and not doing anything else for several hours. We need some >>>> CI support for Windows - Ben/Austin any thoughts on this? >>>> >>> I agree; this would be great. I have a Windows machine which I'd be >>> happy setup as a builder although I'm afraid it's behind NAT, so >>> integration with Harbormaster may require some tunneling. >> >> Just a suggestion - the easiest and most reliable would probably be to >> simply use Appveyor for this. They offer a hosted and fully managed CI >> service very similar to Travis CI - only difference being it runs >> tests on Windows boxes. And just like Travis CI, it's free! >> >> The advantage of a hosted CI service is that no one except Appveyor >> need to worry about keeping the build bot highly available. >> >> Only downside is their machines in the free tier can be a bit slow. >> But that's a problem that can be iterated on as the need arises. >> > I've noticed that several of the core libraries rely on Appveyor with > good results. However, I had assumed that GHC would exceed the maximum > build time of their free tier since the build takes a few hours on my > Windows box. It seems that Appveyor has a one-hour build duration limit, > similar to Travis. > > Cheers, > > - Ben From lonetiger at gmail.com Mon Jul 4 11:48:51 2016 From: lonetiger at gmail.com (Phyx) Date: Mon, 04 Jul 2016 11:48:51 +0000 Subject: Linker.c broken In-Reply-To: <871t39k5kd.fsf@smart-cactus.org> References: <616614ea7cf743f6ba6abb532525c69a@DB4PR30MB030.064d.mgd.msft.net> <1467417700-sup-5082@sabre> <87a8hxk7uy.fsf@smart-cactus.org> <871t39k5kd.fsf@smart-cactus.org> Message-ID: I can build and validate in about an hour myself using 9 jobs on a core i7. If I revert the change in the testsuite preventing parallel runs for Windows. Tamar On Mon, Jul 4, 2016, 12:26 Ben Gamari wrote: > "Boespflug, Mathieu" writes: > > > On 4 July 2016 at 12:36, Ben Gamari wrote: > >> Simon Marlow writes: > >> > >>> I will fix it, sorry about this. Unfortunately I can't really add a > >>> Windows validate into my workflow because it would mean rebooting my > laptop > >>> into Windows and not doing anything else for several hours. We need > some > >>> CI support for Windows - Ben/Austin any thoughts on this? > >>> > >> I agree; this would be great. I have a Windows machine which I'd be > >> happy setup as a builder although I'm afraid it's behind NAT, so > >> integration with Harbormaster may require some tunneling. > > > > Just a suggestion - the easiest and most reliable would probably be to > > simply use Appveyor for this. They offer a hosted and fully managed CI > > service very similar to Travis CI - only difference being it runs > > tests on Windows boxes. And just like Travis CI, it's free! > > > > The advantage of a hosted CI service is that no one except Appveyor > > need to worry about keeping the build bot highly available. > > > > Only downside is their machines in the free tier can be a bit slow. > > But that's a problem that can be iterated on as the need arises. > > > I've noticed that several of the core libraries rely on Appveyor with > good results. However, I had assumed that GHC would exceed the maximum > build time of their free tier since the build takes a few hours on my > Windows box. It seems that Appveyor has a one-hour build duration limit, > similar to Travis. 
> > Cheers, > > - Ben > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mle+hs at mega-nerd.com Mon Jul 4 11:57:53 2016 From: mle+hs at mega-nerd.com (Erik de Castro Lopo) Date: Mon, 4 Jul 2016 21:57:53 +1000 Subject: Linker.c broken In-Reply-To: References: <616614ea7cf743f6ba6abb532525c69a@DB4PR30MB030.064d.mgd.msft.net> <1467417700-sup-5082@sabre> <87a8hxk7uy.fsf@smart-cactus.org> <871t39k5kd.fsf@smart-cactus.org> Message-ID: <20160704215753.a5ab46085579bc34b05a010b@mega-nerd.com> Phyx wrote: > I can build and validate in about an hour myself using 9 jobs on a core i7. > If I revert the change in the testsuite preventing parallel runs for > Windows Oh dear, why is that? Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ From marlowsd at gmail.com Mon Jul 4 11:58:04 2016 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 4 Jul 2016 12:58:04 +0100 Subject: Linker.c broken In-Reply-To: References: <616614ea7cf743f6ba6abb532525c69a@DB4PR30MB030.064d.mgd.msft.net> <1467417700-sup-5082@sabre> <87a8hxk7uy.fsf@smart-cactus.org> <871t39k5kd.fsf@smart-cactus.org> Message-ID: If parallel tests now work on Windows, could it be enabled by default? On 4 July 2016 at 12:48, Phyx wrote: > I can build and validate in about an hour myself using 9 jobs on a core > i7. If I revert the change in the testsuite preventing parallel runs for > Windows. > > Tamar > > On Mon, Jul 4, 2016, 12:26 Ben Gamari wrote: > >> "Boespflug, Mathieu" writes: >> >> > On 4 July 2016 at 12:36, Ben Gamari wrote: >> >> Simon Marlow writes: >> >> >> >>> I will fix it, sorry about this. Unfortunately I can't really add a >> >>> Windows validate into my workflow because it would mean rebooting my >> laptop >> >>> into Windows and not doing anything else for several hours. We need >> some >> >>> CI support for Windows - Ben/Austin any thoughts on this? >> >>> >> >> I agree; this would be great. I have a Windows machine which I'd be >> >> happy setup as a builder although I'm afraid it's behind NAT, so >> >> integration with Harbormaster may require some tunneling. >> > >> > Just a suggestion - the easiest and most reliable would probably be to >> > simply use Appveyor for this. They offer a hosted and fully managed CI >> > service very similar to Travis CI - only difference being it runs >> > tests on Windows boxes. And just like Travis CI, it's free! >> > >> > The advantage of a hosted CI service is that no one except Appveyor >> > need to worry about keeping the build bot highly available. >> > >> > Only downside is their machines in the free tier can be a bit slow. >> > But that's a problem that can be iterated on as the need arises. >> > >> I've noticed that several of the core libraries rely on Appveyor with >> good results. However, I had assumed that GHC would exceed the maximum >> build time of their free tier since the build takes a few hours on my >> Windows box. It seems that Appveyor has a one-hour build duration limit, >> similar to Travis. >> >> Cheers, >> >> - Ben >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lonetiger at gmail.com Mon Jul 4 12:05:00 2016 From: lonetiger at gmail.com (Phyx) Date: Mon, 04 Jul 2016 12:05:00 +0000 Subject: Linker.c broken In-Reply-To: <20160704215753.a5ab46085579bc34b05a010b@mega-nerd.com> References: <616614ea7cf743f6ba6abb532525c69a@DB4PR30MB030.064d.mgd.msft.net> <1467417700-sup-5082@sabre> <87a8hxk7uy.fsf@smart-cactus.org> <871t39k5kd.fsf@smart-cactus.org> <20160704215753.a5ab46085579bc34b05a010b@mega-nerd.com> Message-ID: There used to be a bug in the msys2 runtime which made certain processes hang on exit in a non deterministic way. So the parallel runs was disabled. I was looking into it but haven't been able to reproduce it at all in months now since either upgrading msys2 or Windows (to Windows 10). We have a ticket with more information on it and what I found back then, but haven't been able to progress. It might be that they've just fixed it in the mean time. On Mon, Jul 4, 2016, 12:58 Erik de Castro Lopo wrote: > Phyx wrote: > > > I can build and validate in about an hour myself using 9 jobs on a core > i7. > > If I revert the change in the testsuite preventing parallel runs for > > Windows > > Oh dear, why is that? > > Erik > -- > ---------------------------------------------------------------------- > Erik de Castro Lopo > http://www.mega-nerd.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Mon Jul 4 12:12:29 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 04 Jul 2016 14:12:29 +0200 Subject: Linker.c broken In-Reply-To: References: <616614ea7cf743f6ba6abb532525c69a@DB4PR30MB030.064d.mgd.msft.net> <1467417700-sup-5082@sabre> <87a8hxk7uy.fsf@smart-cactus.org> <871t39k5kd.fsf@smart-cactus.org> Message-ID: <87twg5iouq.fsf@smart-cactus.org> Simon Marlow writes: > If parallel tests now work on Windows, could it be enabled by default? > I seem to remember trying this last summer and ran into trouble. That being said, it would be worth trying again. I've fired up another build; we'll see how it goes. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From simonpj at microsoft.com Mon Jul 4 12:56:18 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 4 Jul 2016 12:56:18 +0000 Subject: Linker.c broken In-Reply-To: References: <616614ea7cf743f6ba6abb532525c69a@DB4PR30MB030.064d.mgd.msft.net> <1467417700-sup-5082@sabre> <87a8hxk7uy.fsf@smart-cactus.org> <871t39k5kd.fsf@smart-cactus.org> <20160704215753.a5ab46085579bc34b05a010b@mega-nerd.com> Message-ID: <5cf490012c794f99a3a38af7a0157347@DB4PR30MB030.064d.mgd.msft.net> Let’s re-enable it. Or: how can I selectively re-enable it for in my validate.mk? From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Phyx Sent: 04 July 2016 13:05 To: Erik de Castro Lopo ; ghc-devs Subject: Re: Linker.c broken There used to be a bug in the msys2 runtime which made certain processes hang on exit in a non deterministic way. So the parallel runs was disabled. I was looking into it but haven't been able to reproduce it at all in months now since either upgrading msys2 or Windows (to Windows 10). We have a ticket with more information on it and what I found back then, but haven't been able to progress. 
It might be that they've just fixed it in the mean time. On Mon, Jul 4, 2016, 12:58 Erik de Castro Lopo > wrote: Phyx wrote: > I can build and validate in about an hour myself using 9 jobs on a core i7. > If I revert the change in the testsuite preventing parallel runs for > Windows Oh dear, why is that? Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Mon Jul 4 14:34:33 2016 From: lonetiger at gmail.com (Phyx) Date: Mon, 04 Jul 2016 14:34:33 +0000 Subject: Linker.c broken In-Reply-To: <5cf490012c794f99a3a38af7a0157347@DB4PR30MB030.064d.mgd.msft.net> References: <616614ea7cf743f6ba6abb532525c69a@DB4PR30MB030.064d.mgd.msft.net> <1467417700-sup-5082@sabre> <87a8hxk7uy.fsf@smart-cactus.org> <871t39k5kd.fsf@smart-cactus.org> <20160704215753.a5ab46085579bc34b05a010b@mega-nerd.com> <5cf490012c794f99a3a38af7a0157347@DB4PR30MB030.064d.mgd.msft.net> Message-ID: I don't think you can do it in validate.mk, Only https://github.com/ghc/ghc/blob/master/testsuite/driver/runtests.py#L144 here as far as I know. I also constantly have to watch out I don't commit it though On Mon, Jul 4, 2016, 13:56 Simon Peyton Jones wrote: > Let’s re-enable it. Or: how can I selectively re-enable it for in my > validate.mk? > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Phyx > *Sent:* 04 July 2016 13:05 > *To:* Erik de Castro Lopo ; ghc-devs < > ghc-devs at haskell.org> > *Subject:* Re: Linker.c broken > > > > There used to be a bug in the msys2 runtime which made certain processes > hang on exit in a non deterministic way. > > So the parallel runs was disabled. I was looking into it but haven't been > able to reproduce it at all in months now since either upgrading msys2 or > Windows (to Windows 10). > > We have a ticket with more information on it and what I found back then, > but haven't been able to progress. > > It might be that they've just fixed it in the mean time. > > > > On Mon, Jul 4, 2016, 12:58 Erik de Castro Lopo > wrote: > > Phyx wrote: > > > I can build and validate in about an hour myself using 9 jobs on a core > i7. > > If I revert the change in the testsuite preventing parallel runs for > > Windows > > Oh dear, why is that? > > Erik > -- > ---------------------------------------------------------------------- > Erik de Castro Lopo > http://www.mega-nerd.com/ > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Jul 4 22:44:41 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 4 Jul 2016 22:44:41 +0000 Subject: Windows is still broken Message-ID: Simon, Windows is still broken. Do you want me to revert both patches, or will you? 
Simon "inplace/bin/ghc-stage1.exe" -optc-fno-stack-protector -optc-Wall -optc-Werror -optc-Wall -optc-Wextra -optc-Wstrict-prototypes -optc-Wmissing-prototypes -optc-Wmissing-declarations -optc-Winline -optc-Waggregate-return -optc-Wpointer-arith -optc-Wmissing-noreturn -optc-Wnested-externs -optc-Wredundant-decls -optc-Iincludes -optc-Iincludes/dist -optc-Iincludes/dist-derivedconstants/header -optc-Iincludes/dist-ghcconstants/header -optc-Irts -optc-Irts/dist/build -optc-DCOMPILING_RTS -optc-fno-strict-aliasing -optc-fno-common -optc-Irts/dist/build/./autogen -optc-Wno-error=inline -optc-O2 -optc-fomit-frame-pointer -optc-g -optc-fno-omit-frame-pointer -optc-g -optc-O0 -optc-DRtsWay=\"rts_thr_debug\" -optc-DWINVER=0x06000100 -static -optc-DTHREADED_RTS -optc-DDEBUG -O0 -H64m -Wall -fllvm-fill-undef-with-garbage -Werror -Iincludes -Iincludes/dist -Iincludes/dist-derivedconstants/header -Iincludes/dist-ghcconstants/header -Irts -Irts/dist/build -DCOMPILING_RTS -this-unit-id rts -dcmm-lint -i -irts -irts/dist/build -Irts/dist/build -irts/dist/build/./autogen -Irts/dist/build/./autogen -O2 -O0 -Wnoncanonical-monad-instances -c rts/Linker.c -o rts/dist/build/Linker.thr_debug_o In file included from rts\Linker.c:13:0: error: rts\Linker.c: In function 'ocTryLoad': rts\Linker.c:2566:55: error: error: pointer type mismatch in conditional expression [-Werror] oc->archiveMemberName : oc->fileName)); ^ includes\Rts.h:300:53: error: note: in definition of macro 'IF_DEBUG' #define IF_DEBUG(c,s) if (RtsFlags.DebugFlags.c) { s; } ^ rts\Linker.c:2564:33: error: error: format '%ls' expects argument of type 'wchar_t *', but argument 2 has type 'void *' [-Werror=format=] IF_DEBUG(linker, debugBelch("Resolving %" PATH_FMT "\n", ^ includes\Rts.h:300:53: error: note: in definition of macro 'IF_DEBUG' #define IF_DEBUG(c,s) if (RtsFlags.DebugFlags.c) { s; } ^ cc1.exe: all warnings being treated as errors `gcc.exe' failed in phase `C Compiler'. (Exit code: 1) rts/ghc.mk:255: recipe for target 'rts/dist/build/Linker.debug_o' failed make[1]: *** [rts/dist/build/Linker.debug_o] Error 1 make[1]: *** Waiting for unfinished jobs.... In file included from rts\Linker.c:13:0: error: rts\Linker.c: In function 'ocTryLoad': rts\Linker.c:2566:55: error: error: pointer type mismatch in conditional expression [-Werror] oc->archiveMemberName : oc->fileName)); ^ includes\Rts.h:300:53: error: note: in definition of macro 'IF_DEBUG' #define IF_DEBUG(c,s) if (RtsFlags.DebugFlags.c) { s; } ^ rts\Linker.c:2564:33: error: error: format '%ls' expects argument of type 'wchar_t *', but argument 2 has type 'void *' [-Werror=format=] IF_DEBUG(linker, debugBelch("Resolving %" PATH_FMT "\n", ^ includes\Rts.h:300:53: error: note: in definition of macro 'IF_DEBUG' #define IF_DEBUG(c,s) if (RtsFlags.DebugFlags.c) { s; } ^ cc1.exe: all warnings being treated as errors `gcc.exe' failed in phase `C Compiler'. (Exit code: 1) rts/ghc.mk:255: recipe for target 'rts/dist/build/Linker.thr_debug_o' failed make[1]: *** [rts/dist/build/Linker.thr_debug_o] Error 1 Makefile:129: recipe for target 'all' failed make: *** [all] Error 2 /cygdrive/c/code/HEAD$ git log -2 rts/Linker.c commit 01f449f4ffd2c4f23bfe5698b9f1b98a86276900 Author: Simon Marlow Date: Mon Jul 4 10:56:04 2016 +0100 Fix 32-bit build failures commit 6377757918c1e7f63638d6f258cad8d5f02bb6a7 Author: Simon Marlow Date: Wed Jun 29 21:50:18 2016 +0100 Linker: some extra debugging / logging -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mle+hs at mega-nerd.com Tue Jul 5 08:06:59 2016 From: mle+hs at mega-nerd.com (Erik de Castro Lopo) Date: Tue, 5 Jul 2016 18:06:59 +1000 Subject: Heads up LLVM 3.8 Message-ID: <20160705180659.8d33174e5e124dccf9d81160@mega-nerd.com> HI all, This is just a heads up that git HEAD will soon be switch to LLVM 3.8. The Phab patch is here: https://phabricator.haskell.org/D2382 I suspect the ghc-8.0 branch will continue to use LLVM 3.7 for all the 8.0 releases. I also suspect that LLVM 3.9 will be released (and that we will switch to it) before the first RC release of 8.2. Cheers, Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ From marlowsd at gmail.com Tue Jul 5 11:47:04 2016 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 5 Jul 2016 12:47:04 +0100 Subject: Windows is still broken In-Reply-To: References: Message-ID: Ok, I've reverted those changes. Sorry for the breakage. On 4 July 2016 at 23:44, Simon Peyton Jones wrote: > Simon, Windows is still broken. > > Do you want me to revert both patches, or will you? > > Simon > > > > "inplace/bin/ghc-stage1.exe" -optc-fno-stack-protector -optc-Wall > -optc-Werror -optc-Wall -optc-Wextra -optc-Wstrict-prototypes > -optc-Wmissing-prototypes -optc-Wmissing-declarations -optc-Winline > -optc-Waggregate-return -optc-Wpointer-arith -optc-Wmissing-noreturn > -optc-Wnested-externs -optc-Wredundant-decls -optc-Iincludes > -optc-Iincludes/dist -optc-Iincludes/dist-derivedconstants/header > -optc-Iincludes/dist-ghcconstants/header -optc-Irts -optc-Irts/dist/build > -optc-DCOMPILING_RTS -optc-fno-strict-aliasing -optc-fno-common > -optc-Irts/dist/build/./autogen -optc-Wno-error=inline -optc-O2 > -optc-fomit-frame-pointer -optc-g -optc-fno-omit-frame-pointer -optc-g > -optc-O0 -optc-DRtsWay=\"rts_thr_debug\" -optc-DWINVER=0x06000100 -static > -optc-DTHREADED_RTS -optc-DDEBUG -O0 -H64m -Wall > -fllvm-fill-undef-with-garbage -Werror -Iincludes -Iincludes/dist > -Iincludes/dist-derivedconstants/header -Iincludes/dist-ghcconstants/header > -Irts -Irts/dist/build -DCOMPILING_RTS -this-unit-id rts -dcmm-lint -i > -irts -irts/dist/build -Irts/dist/build -irts/dist/build/./autogen > -Irts/dist/build/./autogen -O2 -O0 > -Wnoncanonical-monad-instances -c rts/Linker.c -o > rts/dist/build/Linker.thr_debug_o > > > > In file included from rts\Linker.c:13:0: error: > > rts\Linker.c: In function 'ocTryLoad': > > > > rts\Linker.c:2566:55: error: > > error: pointer type mismatch in conditional expression [-Werror] > > oc->archiveMemberName : > oc->fileName)); > > ^ > > > > includes\Rts.h:300:53: error: > > note: in definition of macro 'IF_DEBUG' > > #define IF_DEBUG(c,s) if (RtsFlags.DebugFlags.c) { s; } > > ^ > > > > rts\Linker.c:2564:33: error: > > error: format '%ls' expects argument of type 'wchar_t *', but > argument 2 has type 'void *' [-Werror=format=] > > IF_DEBUG(linker, debugBelch("Resolving %" PATH_FMT "\n", > > ^ > > > > includes\Rts.h:300:53: error: > > note: in definition of macro 'IF_DEBUG' > > #define IF_DEBUG(c,s) if (RtsFlags.DebugFlags.c) { s; } > > ^ > > cc1.exe: all warnings being treated as errors > > `gcc.exe' failed in phase `C Compiler'. (Exit code: 1) > > rts/ghc.mk:255: recipe for target 'rts/dist/build/Linker.debug_o' failed > > make[1]: *** [rts/dist/build/Linker.debug_o] Error 1 > > make[1]: *** Waiting for unfinished jobs.... 
> > > > In file included from rts\Linker.c:13:0: error: > > rts\Linker.c: In function 'ocTryLoad': > > > > rts\Linker.c:2566:55: error: > > error: pointer type mismatch in conditional expression [-Werror] > > oc->archiveMemberName : > oc->fileName)); > > ^ > > > > includes\Rts.h:300:53: error: > > note: in definition of macro 'IF_DEBUG' > > #define IF_DEBUG(c,s) if (RtsFlags.DebugFlags.c) { s; } > > ^ > > > > rts\Linker.c:2564:33: error: > > error: format '%ls' expects argument of type 'wchar_t *', but > argument 2 has type 'void *' [-Werror=format=] > > IF_DEBUG(linker, debugBelch("Resolving %" PATH_FMT "\n", > > ^ > > > > includes\Rts.h:300:53: error: > > note: in definition of macro 'IF_DEBUG' > > #define IF_DEBUG(c,s) if (RtsFlags.DebugFlags.c) { s; } > > ^ > > cc1.exe: all warnings being treated as errors > > `gcc.exe' failed in phase `C Compiler'. (Exit code: 1) > > rts/ghc.mk:255: recipe for target 'rts/dist/build/Linker.thr_debug_o' > failed > > make[1]: *** [rts/dist/build/Linker.thr_debug_o] Error 1 > > Makefile:129: recipe for target 'all' failed > > make: *** [all] Error 2 > > /cygdrive/c/code/HEAD$ git log -2 rts/Linker.c > > commit 01f449f4ffd2c4f23bfe5698b9f1b98a86276900 > > Author: Simon Marlow > > Date: Mon Jul 4 10:56:04 2016 +0100 > > > > Fix 32-bit build failures > > > > commit 6377757918c1e7f63638d6f258cad8d5f02bb6a7 > > Author: Simon Marlow > > Date: Wed Jun 29 21:50:18 2016 +0100 > > > > Linker: some extra debugging / logging > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alberto at toscat.net Wed Jul 6 13:00:22 2016 From: alberto at toscat.net (Alberto Valverde) Date: Wed, 6 Jul 2016 15:00:22 +0200 Subject: Cross-compiling Template Haskell via -fexternal-interpreter and IPC Message-ID: Hello, I'm trying to put together a GHC 8.0.1 cross-compiler with Template Haskell support. Initially to target Windows (32bits) from a Linux host but a similar procedure should enable to target other platforms too. I'd like to contribute the patches back so I'm asking for advice on how to implement it in order to increase the chances of them being accepted. I've managed to get a working stage 1 cross-compiler with some patches which correctly builds all stage1 libs and GHC + stage 2 compiler and ghc-iserv.exe. However, compiling TH by using wine and ghc-iserver.exe fails because the file descriptor ids that GHC passes as arguments to "wine ghc-iserv.exe" don't make sense in the emulated windows world. I've hacked around this to test the feasibility of the approach by using stdin/stdout instead of creating new pipes and, surprisingly, managed to cross-compile a simple Template Haskell program. I'm considering using a socket for communicating between both processes as a more permanent solution but this would incur in a dependency on the "network" package. Would this be acceptable? Named pipes have also crossed my mind but I'm not sure how well they're supported by wine. Thanks for your attention. Alberto P.S: Code is in the "cross-mingw-hacks" branches of these repositories and is based on the ghc-8.0.1-release tag: https://github.com/albertov/ghc https://github.com/albertov/hsc2hs A Docker image script here that builds it is here: https://github.com/albertov/ghc-cross-compiler-windows-x86 -------------- next part -------------- An HTML attachment was scrubbed... 
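To make the fd-passing scheme described above concrete, the sketch below shows the ordinary POSIX pattern it follows: the parent creates a pipe pair and tells the child which file-descriptor numbers to use on its command line. This is an illustration of the pattern only, not GHC's actual -fexternal-interpreter code, and the child program name "iserv-stub" is invented. It also makes clear why the scheme breaks under wine: the numbers passed name host-side POSIX descriptors, which mean nothing to the emulated Windows process.

    -- Minimal sketch of the pipe/fd handshake discussed in this thread.
    -- Not GHC's implementation; "iserv-stub" is a made-up program name.
    module Main (main) where

    import System.IO (hFlush, hGetLine, hPutStrLn)
    import System.Posix.IO (createPipe, fdToHandle)
    import System.Posix.Types (Fd)
    import System.Process (createProcess, proc)

    main :: IO ()
    main = do
      (childRead,  parentWrite) <- createPipe   -- parent -> child
      (parentRead, childWrite)  <- createPipe   -- child  -> parent
      -- The child is told which fd numbers to use via its argv, much as
      -- GHC tells the external interpreter which fds to talk on.  Under
      -- "wine iserv.exe" these host-side numbers are meaningless to it.
      -- (A real implementation would also close the child-side ends here.)
      _ <- createProcess (proc "iserv-stub"
                               (map show [fdInt childRead, fdInt childWrite]))
      toChild   <- fdToHandle parentWrite
      fromChild <- fdToHandle parentRead
      hPutStrLn toChild "PING"
      hFlush toChild
      putStrLn =<< hGetLine fromChild
      where
        fdInt :: Fd -> Int
        fdInt = fromIntegral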
URL: From ben at smart-cactus.org Wed Jul 6 17:14:32 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Wed, 06 Jul 2016 19:14:32 +0200 Subject: Cross-compiling Template Haskell via -fexternal-interpreter and IPC In-Reply-To: References: Message-ID: <87furmit8n.fsf@smart-cactus.org> Explicitly CCing Simon Marlow to ensure he sees this. Alberto Valverde writes: > Hello, > > I'm trying to put together a GHC 8.0.1 cross-compiler with Template Haskell > support. Initially to target Windows (32bits) from a Linux host but a > similar procedure should enable to target other platforms too. I'd like to > contribute the patches back so I'm asking for advice on how to implement it > in order to increase the chances of them being accepted. > > I've managed to get a working stage 1 cross-compiler with some patches > which correctly builds all stage1 libs and GHC + stage 2 compiler and > ghc-iserv.exe. However, compiling TH by using wine and ghc-iserver.exe > fails because the file descriptor ids that GHC passes as arguments to "wine > ghc-iserv.exe" don't make sense in the emulated windows world. > Ahh, right. Out of curiosity what toolchain are you using to build your stage 1 cross compiler? > I've hacked around this to test the feasibility of the approach by using > stdin/stdout instead of creating new pipes and, surprisingly, managed to > cross-compile a simple Template Haskell program. > Well done! > I'm considering using a socket for communicating between both processes as > a more permanent solution but this would incur in a dependency on the > "network" package. Would this be acceptable? > It would be nice if we could avoid it; GHC depending upon a library has the very unfortunate effect that any user code also needing to link against GHC or one of its libraries now has no choice in the version of that library. We go to some lengths to try to keep the dependency footprint of GHC small for this reason. > Named pipes have also crossed my mind but I'm not sure how well they're > supported by wine. > It would be great if there were some way we could make this work with named pipes. Not only does it side-step the dependency issue, but it feels like the right way forward. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From lonetiger at gmail.com Wed Jul 6 18:05:53 2016 From: lonetiger at gmail.com (Phyx) Date: Wed, 6 Jul 2016 19:05:53 +0100 Subject: Cross-compiling Template Haskell via -fexternal-interpreter and IPC In-Reply-To: References: Message-ID: > However, compiling TH by using wine and ghc-iserver.exe fails because the file descriptor ids that GHC passes as arguments to "wine ghc-iserv.exe" don't make sense in the emulated windows world. Just as a note, On Windows we don't pass FDs (As in the posix file descriptor) to the child process. It passes down Windows File Handles. This requires the Windows process inheritance/security model to be implemented in wine and I know nothing about Wine. But just wanted to clarify. Named pipes would solve your issue, Just don't forget to set the proper ACL on the pipes. On Wed, Jul 6, 2016 at 2:00 PM, Alberto Valverde wrote: > Hello, > > I'm trying to put together a GHC 8.0.1 cross-compiler with Template > Haskell support. Initially to target Windows (32bits) from a Linux host but > a similar procedure should enable to target other platforms too. 
I'd like > to contribute the patches back so I'm asking for advice on how to implement > it in order to increase the chances of them being accepted. > > I've managed to get a working stage 1 cross-compiler with some patches > which correctly builds all stage1 libs and GHC + stage 2 compiler and > ghc-iserv.exe. However, compiling TH by using wine and ghc-iserver.exe > fails because the file descriptor ids that GHC passes as arguments to "wine > ghc-iserv.exe" don't make sense in the emulated windows world. > > I've hacked around this to test the feasibility of the approach by using > stdin/stdout instead of creating new pipes and, surprisingly, managed to > cross-compile a simple Template Haskell program. > > I'm considering using a socket for communicating between both processes as > a more permanent solution but this would incur in a dependency on the > "network" package. Would this be acceptable? > > Named pipes have also crossed my mind but I'm not sure how well they're > supported by wine. > > Thanks for your attention. > Alberto > > > P.S: > Code is in the "cross-mingw-hacks" branches of these repositories and is > based on the ghc-8.0.1-release tag: > > https://github.com/albertov/ghc > https://github.com/albertov/hsc2hs > > A Docker image script here that builds it is here: > > https://github.com/albertov/ghc-cross-compiler-windows-x86 > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alberto at toscat.net Thu Jul 7 00:01:26 2016 From: alberto at toscat.net (Alberto Valverde) Date: Thu, 7 Jul 2016 02:01:26 +0200 Subject: Cross-compiling Template Haskell via -fexternal-interpreter and IPC In-Reply-To: <87furmit8n.fsf@smart-cactus.org> References: <87furmit8n.fsf@smart-cactus.org> Message-ID: Ben, Phyx, On Wed, Jul 6, 2016 at 7:14 PM, Ben Gamari wrote: > (...) > Ahh, right. Out of curiosity what toolchain are you using to build your > stage 1 cross compiler? > I'm using an MXE (http://mxe.cc) environment in a Debian Jessie Docker image which has gcc-4.9.3. The build env is here: https://github.com/albertov/ghc-cross-compiler-windows-x86 On Wed, Jul 6, 2016 at 7:14 PM, Ben Gamari wrote: > It would be nice if we could avoid it; GHC depending upon a library has > the very unfortunate effect that any user code also needing to link > against GHC or one of its libraries now has no choice in the version of > that library. We go to some lengths to try to keep the dependency > footprint of GHC small for this reason. (...) > > Named pipes have also crossed my mind but I'm not sure how well they're > > supported by wine. > > > It would be great if there were some way we could make this work with > named pipes. Not only does it side-step the dependency issue, but it > feels like the right way forward. > On Wed, Jul 6, 2016 at 8:05 PM, Phyx wrote: > Named pipes would solve your issue, Just don't forget to set the proper > ACL on the pipes. > > It does indeed work! However, I've broken GHC on a Windows host until I implement the equivalent to Posix.createNamedPipe since it seems there's no binding for CreateNamedPipe that I can find in the tree. I'll look into it tomorrow. I've pushed the changes so far to https://github.com/albertov/ghc/tree/cross-external-interpreter. Thanks Alberto > Cheers, > > - Ben > -------------- next part -------------- An HTML attachment was scrubbed... 
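For the named-pipe route mentioned above (an equivalent of Posix.createNamedPipe), the Win32 entry point involved is CreateNamedPipeW, and a binding to it is small. The sketch below only indicates what such a binding might look like: the module name, wrapper name and hard-coded parameters are invented for illustration, and this is not the code from the branch linked above. It passes a NULL security descriptor, so, per Phyx's earlier advice, a real version would want to attach an explicit ACL.

    {-# LANGUAGE ForeignFunctionInterface #-}
    -- Illustrative binding to the real Win32 call CreateNamedPipeW.
    -- Module and wrapper names are invented; parameters are hard-coded.
    module WinNamedPipe (createWinNamedPipe) where

    import Data.Bits ((.|.))
    import Data.Word (Word32)
    import Foreign.C.String (CWString, withCWString)
    import Foreign.Ptr (Ptr, nullPtr)

    type HANDLE = Ptr ()
    type DWORD  = Word32

    -- stdcall is the calling convention for the 32-bit Windows API being
    -- targeted here; on x86_64 it would be ccall.
    foreign import stdcall unsafe "windows.h CreateNamedPipeW"
      c_CreateNamedPipeW :: CWString -> DWORD -> DWORD -> DWORD
                         -> DWORD -> DWORD -> DWORD -> Ptr () -> IO HANDLE

    pIPE_ACCESS_DUPLEX, pIPE_TYPE_BYTE, pIPE_READMODE_BYTE, pIPE_WAIT :: DWORD
    pIPE_ACCESS_DUPLEX = 0x00000003
    pIPE_TYPE_BYTE     = 0x00000000
    pIPE_READMODE_BYTE = 0x00000000
    pIPE_WAIT          = 0x00000000

    -- Create the server end of \\.\pipe\<name>.  Callers should check the
    -- result against INVALID_HANDLE_VALUE ((-1) viewed as a handle).
    createWinNamedPipe :: String -> IO HANDLE
    createWinNamedPipe name =
      withCWString ("\\\\.\\pipe\\" ++ name) $ \cName ->
        c_CreateNamedPipeW cName
                           pIPE_ACCESS_DUPLEX
                           (pIPE_TYPE_BYTE .|. pIPE_READMODE_BYTE .|. pIPE_WAIT)
                           1        -- nMaxInstances
                           4096     -- nOutBufferSize
                           4096     -- nInBufferSize
                           0        -- nDefaultTimeOut (0 = 50 ms default)
                           nullPtr  -- lpSecurityAttributes: default, no explicit ACL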
URL: From moritz at lichtzwerge.de Thu Jul 7 00:57:18 2016 From: moritz at lichtzwerge.de (Moritz Angermann) Date: Thu, 7 Jul 2016 08:57:18 +0800 Subject: Cross-compiling Template Haskell via -fexternal-interpreter and IPC In-Reply-To: References: <87furmit8n.fsf@smart-cactus.org> Message-ID: <94DD3CCF-7435-4254-A37D-12E6957331F8@lichtzwerge.de> Great work Alberto, I’m in favor of adding some form of network layer, as there are scenarios where you have to run the th compilation process on a different machine. This would be the case for iOS for example. When I toyed with this ~2years ago, trying to port the out of process th solution from ghcjs, I tried to use GHC’s plugin interface, to load the module that would allow ghc to communicate with the runner on the device. This in principle allows to have more dependencies on the plugin and not force them into ghc. At the same time it requires the installation of some additional hooks into the plugin system. I guess one could also come up with a tiny proxy that pretends to be the iserv endpoint and would forward anything over a network layer; this again could probably work outside of ghc. Cheers, Moritz > On Jul 7, 2016, at 8:01 AM, Alberto Valverde wrote: > > Ben, Phyx, > > On Wed, Jul 6, 2016 at 7:14 PM, Ben Gamari wrote: > (...) > Ahh, right. Out of curiosity what toolchain are you using to build your > stage 1 cross compiler? > > I'm using an MXE (http://mxe.cc) environment in a Debian Jessie Docker image which has gcc-4.9.3. The build env is here: https://github.com/albertov/ghc-cross-compiler-windows-x86 > > On Wed, Jul 6, 2016 at 7:14 PM, Ben Gamari wrote: > It would be nice if we could avoid it; GHC depending upon a library has > the very unfortunate effect that any user code also needing to link > against GHC or one of its libraries now has no choice in the version of > that library. We go to some lengths to try to keep the dependency > footprint of GHC small for this reason. > (...) > > Named pipes have also crossed my mind but I'm not sure how well they're > > supported by wine. > > > It would be great if there were some way we could make this work with > named pipes. Not only does it side-step the dependency issue, but it > feels like the right way forward. > > On Wed, Jul 6, 2016 at 8:05 PM, Phyx wrote: > Named pipes would solve your issue, Just don't forget to set the proper ACL on the pipes. > > > It does indeed work! However, I've broken GHC on a Windows host until I implement the equivalent to Posix.createNamedPipe since it seems there's no binding for CreateNamedPipe that I can find in the tree. I'll look into it tomorrow. > > I've pushed the changes so far to https://github.com/albertov/ghc/tree/cross-external-interpreter. > > Thanks > Alberto > > > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From marlowsd at gmail.com Thu Jul 7 07:30:40 2016 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 7 Jul 2016 08:30:40 +0100 Subject: Cross-compiling Template Haskell via -fexternal-interpreter and IPC In-Reply-To: <87furmit8n.fsf@smart-cactus.org> References: <87furmit8n.fsf@smart-cactus.org> Message-ID: On 6 July 2016 at 18:14, Ben Gamari wrote: > > Alberto Valverde writes: > > > I've hacked around this to test the feasibility of the approach by using > > stdin/stdout instead of creating new pipes and, surprisingly, managed to > > cross-compile a simple Template Haskell program. > > > Well done! 
> Indeed, nice hacking :) > > I'm considering using a socket for communicating between both processes > as > > a more permanent solution but this would incur in a dependency on the > > "network" package. Would this be acceptable? > > > It would be nice if we could avoid it; GHC depending upon a library has > the very unfortunate effect that any user code also needing to link > against GHC or one of its libraries now has no choice in the version of > that library. We go to some lengths to try to keep the dependency > footprint of GHC small for this reason. > > > Named pipes have also crossed my mind but I'm not sure how well they're > > supported by wine. > > > It would be great if there were some way we could make this work with > named pipes. Not only does it side-step the dependency issue, but it > feels like the right way forward. > I agree, named pipes are probably a better plan, perhaps a better solution overall than the way we currently pass FD numbers on the command line. Do named pipes work work as expected through wine? We would have to be careful to clean them up again afterwards. Cheers, Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From alberto at toscat.net Thu Jul 7 10:36:29 2016 From: alberto at toscat.net (Alberto Valverde) Date: Thu, 7 Jul 2016 12:36:29 +0200 Subject: Cross-compiling Template Haskell via -fexternal-interpreter and IPC In-Reply-To: <94DD3CCF-7435-4254-A37D-12E6957331F8@lichtzwerge.de> References: <87furmit8n.fsf@smart-cactus.org> <94DD3CCF-7435-4254-A37D-12E6957331F8@lichtzwerge.de> Message-ID: On Thu, Jul 7, 2016 at 2:57 AM, Moritz Angermann wrote: > Great work Alberto, > > Thanks :) > I’m in favor of adding some form of network layer, as there are scenarios > where you have to run the th compilation process on a different machine. > This would be the case for iOS for example. > > When I toyed with this ~2years ago, trying to port the out of process th > solution from ghcjs, I tried to use GHC’s plugin interface, to load the > module that would allow ghc to communicate with the runner on the device. > > This in principle allows to have more dependencies on the plugin and not > force them into ghc. At the same time it requires the installation of some > additional hooks into the plugin system. > > I guess one could also come up with a tiny proxy that pretends to be the > iserv endpoint and would forward anything over a network layer; this again > could probably work outside of ghc. I'm pretty sure this can be done by adding networking capabilities to just iserv without adding networking capabilities to GHC itself. However, I'm not sure if this could cause a linking problem if GHC requests iserv to load a different version of the "network" library than the one it was compiled against. Cheers Alberto -------------- next part -------------- An HTML attachment was scrubbed... URL: From alberto at toscat.net Thu Jul 7 10:37:52 2016 From: alberto at toscat.net (Alberto Valverde) Date: Thu, 7 Jul 2016 12:37:52 +0200 Subject: Cross-compiling Template Haskell via -fexternal-interpreter and IPC In-Reply-To: References: <87furmit8n.fsf@smart-cactus.org> Message-ID: On Thu, Jul 7, 2016 at 9:30 AM, Simon Marlow wrote: > On 6 July 2016 at 18:14, Ben Gamari wrote: > >> It would be great if there were some way we could make this work with >> named pipes. Not only does it side-step the dependency issue, but it >> feels like the right way forward. 
>> > > I agree, named pipes are probably a better plan, perhaps a better solution > overall than the way we currently pass FD numbers on the command line. Do > named pipes work work as expected through wine? We would have to be > careful to clean them up again afterwards. > I've implemented IPC with named pipes and it appeared to work through wine but now I'm investigating an issue which causes GHC to "freeze" when talking to the external interpreter when it is not running on a TTY (ie: in a build process) Cheers, Alberto -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Jul 7 14:53:47 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 7 Jul 2016 14:53:47 +0000 Subject: [commit: ghc] master: Use deterministic maps for FamInstEnv (9858552) In-Reply-To: <20160707144636.B5F4E3A300@ghc.haskell.org> References: <20160707144636.B5F4E3A300@ghc.haskell.org> Message-ID: <45b7ad0d763e4386b745837fa8c15eb2@DB4PR30MB030.064d.mgd.msft.net> | | -type FamInstEnv = UniqFM FamilyInstEnv -- Maps a family to its | instances | +type FamInstEnv = UniqDFM FamilyInstEnv -- Maps a family to its | +instances | -- See Note [FamInstEnv] Bartosz, could you add your comment from the commit (or variant thereof) as a comment to this type definition, so we know WHY you are using UniqDFM here? Thanks Simon We turn FamInstEnvs into lists in some places which don't directly affect the ABI. That happens in family consistency checks and when producing output for `:info`. Unfortunately that nondeterminism is nonlocal and it's hard to tell locally what it affects. Furthermore the envs should be relatively small, so it should be free to use deterministic maps here. Testing with nofib and ./validate detected no difference between UniqFM and UniqDFM. GHC Trac: #4012 From simonpj at microsoft.com Fri Jul 8 07:41:59 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 8 Jul 2016 07:41:59 +0000 Subject: Msys2 64: progress In-Reply-To: References: <211623c64b4d4f7cb1d80e87ae9a84ec@DB4PR30MB030.064d.mgd.msft.net> <5772dbbf.2523c20a.55c3f.08d1@mx.google.com> <5772f9b6.e152c20a.f2c74.32bb@mx.google.com> <5773a464.4ccf1c0a.bc01c.ffffa332@mx.google.com> <9b2d2254b26a4db48e228bc9e8ae67f3@DB4PR30MB030.064d.mgd.msft.net> <5774f07e.83261c0a.f4ff4.14ef@mx.google.com> <7a93e22bd2c84281a56e196ad40a4094@DB4PR30MB030.064d.mgd.msft.net> <15190ff21748433f870dc0c4769b177c@DB4PR30MB030.064d.mgd.msft.net> Message-ID: <1721e234d99848c2aa73e1c86401deaf@DB4PR30MB030.064d.mgd.msft.net> David, Yes, that cleared all those warnings. I have no idea why it works for you but not for me. But I'm rolling, thank you. Simon | -----Original Message----- | From: David Macek [mailto:david.macek.0 at gmail.com] | Sent: 30 June 2016 13:54 | To: Simon Peyton Jones ; lonetiger at gmail.com; | ghc-devs at haskell.org | Subject: Re: Msys2 64: progress | | On 30. 6. 2016 14:38, Simon Peyton Jones via ghc-devs wrote: | > BTW, during ./boot, I get a lot of errors like this. Should I | worry? | | > perl: warning: Setting locale failed. | > perl: warning: Please check that your locale settings: | > LC_ALL = (unset), | > LANG = "ENG" | > are supported and installed on your system. | > perl: warning: Falling back to the standard locale ("C"). | | Weird. My MSYS2 autodetects and sets `LANG=en_US.UTF-8`. Can you try | setting that in the terminal before running `./boot` and or the | testsuite? 
| | -- | David Macek From simonpj at microsoft.com Fri Jul 8 19:46:33 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 8 Jul 2016 19:46:33 +0000 Subject: [Diffusion] [Committed] rCABAL2863a628f857: Add two local type signatures In-Reply-To: <20160708194243.15219.38308.9C120B47@phabricator.haskell.org> References: <20160708194243.15219.38308.9C120B47@phabricator.haskell.org> Message-ID: <04f725118fe1481ea5188b679d17c44f@AM3PR30MB019.064d.mgd.msft.net> I don't believe I made this commit, or the ones around it. Mysterious; I hope I'm not being impersonated! Simon From: noreply at phabricator.haskell.org [mailto:noreply at phabricator.haskell.org] Sent: 08 July 2016 20:43 To: Simon Peyton Jones Subject: [Diffusion] [Committed] rCABAL2863a628f857: Add two local type signatures simonpj committed rCABAL2863a628f857: Add two local type signatures (authored by simonpj). Add two local type signatures I'm adding these type signatures to satisfy the "do not generalise local let/where" rule that GHC is taking on. The signatures are clearly correct, but I was surprised at the polymorphism needed. For example parseOptVersion :: ReadP r Version parseOptVersion = parseQuoted ver <++ ver where ver :: ReadP r Version ver = parse <++ return noVersion noVersion = Version{ versionBranch=[], versionTags=[] } Note that 'ver' really is called at two different types! That in turn is because of the type of (<++) (<++) :: ReadP a a -> ReadP r a -> ReadP r a (+++) :: ReadP r a -> ReadP r a -> ReadP r a Note the "a a" in the first arg, which is very unusual. For example, compare the type of (+++). Changing it to match the type of (+++) makes ReadP fail to compile, though, so I assume it's right as it stands. But surely this deserves a comment?! AFFECTED FILES /Distribution/ParseUtils.hs USERS simonpj (Author) COMMIT https://phabricator.haskell.org/rCABAL2863a628f857 EMAIL PREFERENCES https://phabricator.haskell.org/settings/panel/emailpreferences/ To: simonpj -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Fri Jul 8 19:48:56 2016 From: austin at well-typed.com (Austin Seipp) Date: Fri, 8 Jul 2016 14:48:56 -0500 Subject: [Diffusion] [Committed] rCABAL2863a628f857: Add two local type signatures In-Reply-To: <04f725118fe1481ea5188b679d17c44f@AM3PR30MB019.064d.mgd.msft.net> References: <20160708194243.15219.38308.9C120B47@phabricator.haskell.org> <04f725118fe1481ea5188b679d17c44f@AM3PR30MB019.064d.mgd.msft.net> Message-ID: No, this is a Phabricator bug due to some changes in how commit imports are handled. I just upgraded earlier; I'm tracking it down now. At minimum, I'll find a way to turn off the email spam. On Fri, Jul 8, 2016 at 2:46 PM, Simon Peyton Jones via ghc-devs < ghc-devs at haskell.org> wrote: > I don’t believe I made this commit, or the ones around it. Mysterious; I > hope I’m not being impersonated! > > > > Simon > > > > *From:* noreply at phabricator.haskell.org [mailto: > noreply at phabricator.haskell.org] > *Sent:* 08 July 2016 20:43 > *To:* Simon Peyton Jones > *Subject:* [Diffusion] [Committed] rCABAL2863a628f857: Add two local type > signatures > > > > simonpj committed rCABAL2863a628f857: Add two local type signatures > (authored by simonpj). > > > > Add two local type signatures > > I'm adding these type signatures to satisfy the "do not generalise > local let/where" rule that GHC is taking on. > > The signatures are clearly correct, but I was surprised at the > polymorphism needed. 
For example > > parseOptVersion :: ReadP r Version > parseOptVersion = parseQuoted ver <++ ver > > where ver :: ReadP r Version > > ver = parse <++ return noVersion > > noVersion = Version{ versionBranch=[], versionTags=[] } > > Note that 'ver' really is called at two different types! That > in turn is because of the type of (<++) > > (<++) :: ReadP a a -> ReadP r a -> ReadP r a > > (+++) :: ReadP r a -> ReadP r a -> ReadP r a > > Note the "a a" in the first arg, which is very unusual. > For example, compare the type of (+++). > > Changing it to match the type of (+++) makes ReadP fail to compile, > though, so I assume it's right as it stands. But surely this deserves > a comment?! > > > > *AFFECTED FILES* > > /Distribution/ParseUtils.hs > > > > *USERS* > > simonpj (Author) > > > > *COMMIT* > > https://phabricator.haskell.org/rCABAL2863a628f857 > > > > *EMAIL PREFERENCES* > > https://phabricator.haskell.org/settings/panel/emailpreferences/ > > > > *To: *simonpj > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Jul 8 21:26:57 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 8 Jul 2016 21:26:57 +0000 Subject: Windows build Message-ID: <0307d62076ae4d5695520241c7532767@AM3PR30MB019.064d.mgd.msft.net> I've completed a successful build on my Surface Book! Thank you. One last glitch. I'm getting the validate failure bellow. No other test requires gcc in my path. GHC itself carefully navigates to its own private gcc. Do we really want this family of tests (half a dozen variants of T11223) to rely on some random gcc, which might or might not be the same as GHC is using? Shouldn't we use 'ghc foo.c'? Simon =====> T11223_simple_unused_duplicate_lib(normal) 3371 of 5211 [0, 6, 1] cd "/c/Users/simonpj/AppData/Local/Temp/ghctest-hFCtmi/test spaces/./rts/T11223/T11223_simple_unused_duplicate_lib.run" && $MAKE -s --no-print-directory t_11223_simple_unused_duplicate_lib Wrong exit code (expected 0 , actual 2 ) Stdout: Stderr: /bin/sh: gcc: command not found make[2]: *** [Makefile:42: t_11223_simple_unused_duplicate_lib] Error 127 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Fri Jul 8 22:02:22 2016 From: lonetiger at gmail.com (lonetiger at gmail.com) Date: Fri, 8 Jul 2016 23:02:22 +0100 Subject: Windows build In-Reply-To: <0307d62076ae4d5695520241c7532767@AM3PR30MB019.064d.mgd.msft.net> References: <0307d62076ae4d5695520241c7532767@AM3PR30MB019.064d.mgd.msft.net> Message-ID: <578022ee.48371c0a.49d2e.ffffb45d@mx.google.com> Hi Simon, For these tests it shouldn’t matter much so I guess I can change them. The Windows Build guide does ask to put /mingw64/bin/ on your path. The reason I tend not to want to use GHC to compile my c files for the tests is that GHC doesn’t just pass the commands along to gcc. It adds to them. While for these single object files it doesn’t matter (as it just adds a few defines and include paths) It has bitten me multiple times in the past. Particularly when compiling shared libraries for tests since GHC wants to link in the RTS, which makes the files considerably larger and harder to debug so I tend to avoid using GHC to compile just pure C code. 
It means (with shared libraries) that I’ll end up with multiple RTSs loaded (at least in memory, which means stepping in gdb I have to keep track of the memory locations so I know which one I’m in). So I really wish there was a way to tell GHC not to do this. Also you’ll likely be linking against libraries compiled against other versions of GCC or MSVC so that shouldn’t be an issue. I’ll change the tests tomorrow, but I would much prefer if the testsuite can tell me where the GCC to use is, so I don’t have to use GHC. Regards, Tamar From: Simon Peyton Jones via ghc-devs Sent: Friday, July 8, 2016 22:27 To: ghc-devs at haskell.org Subject: Windows build I’ve completed a successful build on my Surface Book!  Thank you. One last glitch. I’m getting the validate failure bellow. No other test requires gcc in my path.  GHC itself carefully navigates to its own private gcc.   Do we really want this family of tests (half a dozen variants of T11223) to rely on some random gcc, which might or might not be the same as GHC is using?  Shouldn’t we use ‘ghc foo.c’? Simon =====> T11223_simple_unused_duplicate_lib(normal) 3371 of 5211 [0, 6, 1] cd "/c/Users/simonpj/AppData/Local/Temp/ghctest-hFCtmi/test   spaces/./rts/T11223/T11223_simple_unused_duplicate_lib.run" && $MAKE -s --no-print-directory t_11223_simple_unused_duplicate_lib  Wrong exit code (expected 0 , actual 2 ) Stdout: Stderr: /bin/sh: gcc: command not found make[2]: *** [Makefile:42: t_11223_simple_unused_duplicate_lib] Error 127 -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.dzyoba at gmail.com Sat Jul 9 12:25:39 2016 From: alex.dzyoba at gmail.com (Alex Dzyoba) Date: Sat, 9 Jul 2016 15:25:39 +0300 Subject: T11758 testcase help needed Message-ID: Hi, all! I was working on #11758, which is about dropping binutils<2.17 hack, and while it was relatively easy to remove the hack itself, I'm not sure how to add a test case for it. As I understand, after removing the aforementioned hack, native codegen now shouldn't generate sign extension. So my question is how to test it? Should it be Cmm file that will be tested with `compile_cmp_asm` like memcpy in "codeGen/should_gen_asm/memcpy.cmm"? Or should I stick to the Haskell test? Thanks, Alex Dzyoba From omeragacan at gmail.com Sat Jul 9 12:55:06 2016 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Sat, 9 Jul 2016 12:55:06 +0000 Subject: first unboxed sums patch is ready for reviews Message-ID: Hi all, I'm almost done with the unboxed sums patch and I'd like to get some reviews at this point. https://phabricator.haskell.org/D2259 Two key files in the patch are UnariseStg.hs and RepType.hs. For the example programs see files in testsuite/tests/unboxedsums/ In addition to any comments about the code and documentation, it'd be appreciated if you tell me about some potential uses of unboxed sums, example programs, edge cases etc. so that I can test it a bit more and make sure the generated code is good. Thanks, Omer From ezyang at mit.edu Sat Jul 9 13:22:40 2016 From: ezyang at mit.edu (Edward Z. Yang) Date: Sat, 09 Jul 2016 09:22:40 -0400 Subject: T11758 testcase help needed In-Reply-To: References: Message-ID: <1468070520-sup-6794@sabre> I am not sure if this will work, but how about dumping the assembly and looking for sign extension? C-- might be easier! Excerpts from Alex Dzyoba's message of 2016-07-09 08:25:39 -0400: > Hi, all! 
> > I was working on #11758, which is about dropping binutils<2.17 hack, and while > it was relatively easy to remove the hack itself, I'm not sure how to add a > test case for it. > > As I understand, after removing the aforementioned hack, native codegen now > shouldn't generate sign extension. So my question is how to test it? Should it > be Cmm file that will be tested with `compile_cmp_asm` like memcpy in > "codeGen/should_gen_asm/memcpy.cmm"? Or should I stick to > the Haskell test? > > Thanks, > Alex Dzyoba From lonetiger at gmail.com Sat Jul 9 17:00:21 2016 From: lonetiger at gmail.com (Phyx) Date: Sat, 9 Jul 2016 18:00:21 +0100 Subject: Windows build In-Reply-To: <578022ee.48371c0a.49d2e.ffffb45d@mx.google.com> References: <0307d62076ae4d5695520241c7532767@AM3PR30MB019.064d.mgd.msft.net> <578022ee.48371c0a.49d2e.ffffb45d@mx.google.com> Message-ID: Hi Simon, Thomie changed it so the in place gcc can be called from the testsuite. The tests should pass now. Kind regards, Tamar Sent from my Mobile On Jul 8, 2016 23:02, wrote: > Hi Simon, > > > > For these tests it shouldn’t matter much so I guess I can change them. > > > > The Windows Build guide does ask to put /mingw64/bin/ on your path. > > The reason I tend not to want to use GHC to compile my c files for the > tests is that GHC doesn’t just pass the commands along to gcc. > > It adds to them. While for these single object files it doesn’t matter (as > it just adds a few defines and include paths) It has bitten me multiple > times in the past. Particularly when compiling shared libraries for tests > since GHC wants to link in the RTS, which makes the files considerably > larger and harder to debug so > > I tend to avoid using GHC to compile just pure C code. > > > > It means (with shared libraries) that I’ll end up with multiple RTSs > loaded (at least in memory, which means stepping in gdb I have to keep > track of the memory locations so I know which one I’m in). So I really wish > there was a way to tell GHC not to do this. > > > > Also you’ll likely be linking against libraries compiled against other > versions of GCC or MSVC so that shouldn’t be an issue. > > > > I’ll change the tests tomorrow, but I would much prefer if the testsuite > can tell me where the GCC to use is, so I don’t have to use GHC. > > > > Regards, > > Tamar > > > > *From: *Simon Peyton Jones via ghc-devs > *Sent: *Friday, July 8, 2016 22:27 > *To: *ghc-devs at haskell.org > *Subject: *Windows build > > > > I’ve completed a successful build on my Surface Book! Thank you. > > > > One last glitch. I’m getting the validate failure bellow. > > > > No other test requires gcc in my path. GHC itself carefully navigates to > its own private gcc. Do we really want this family of tests (half a dozen > variants of T11223) to rely on some random gcc, which might or might not be > the same as GHC is using? Shouldn’t we use ‘ghc foo.c’? > > > > Simon > > > > =====> T11223_simple_unused_duplicate_lib(normal) 3371 of 5211 [0, 6, 1] > > cd "/c/Users/simonpj/AppData/Local/Temp/ghctest-hFCtmi/test > spaces/./rts/T11223/T11223_simple_unused_duplicate_lib.run" && $MAKE -s > --no-print-directory t_11223_simple_unused_duplicate_lib > > Wrong exit code (expected 0 , actual 2 ) > > Stdout: > > > > Stderr: > > /bin/sh: gcc: command not found > > make[2]: *** [Makefile:42: t_11223_simple_unused_duplicate_lib] Error 127 > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ben at well-typed.com Sat Jul 9 20:45:35 2016 From: ben at well-typed.com (Ben Gamari) Date: Sat, 09 Jul 2016 22:45:35 +0200 Subject: Rethinking GHC's approach to managing proposals Message-ID: <87shvilevk.fsf@smart-cactus.org> Hello everyone, Recently there has been a fair bit of discussion[1,2] around the mechanisms by which proposed changes to GHC are evaluated. While we have something of a formal proposal protocol [3], it is not clearly documented, inconsistently applied, and may be failing to serve a significant fraction of GHC's potential contributor pool. Over the last few weeks, I have been doing a fair amount of reading, thinking, and discussing to try to piece together a proposal scheme which better serves our community. The resulting proposal [4] is strongly inspired by the RFC process in place in the Rust community [5], the leaders of which have thought quite hard about fostering community growth and participation. While no process is perfect, I feel like the Rust process is a good starting point for discussion, offering enough structure to guide new contributors through the process while requiring only a modest investment of developer time. To get a sense for how well this will work in our community, I propose that we attempt to self-host the proposed process. To this end I have setup a ghc-proposals repository [6] and opened a pull request for discussion of the process proposal [4]. Let's see how this goes. Cheers, - Ben [1] https://www.reddit.com/r/haskell/comments/4oyxo2/blog_contributing_to_ghc/ [2] https://www.reddit.com/r/haskell/comments/4isua9/ghc_development_outsidein/ [3] https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions/AddingFeatures [4] https://github.com/ghc-proposals/ghc-proposals/pull/1/files?short_path=14d66cd#diff-14d66cda32248456a5f223b6333c6132 [5] https://github.com/rust-lang/rfcs [6] https://github.com/ghc-proposals/ghc-proposals -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From omeragacan at gmail.com Sun Jul 10 09:16:05 2016 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Sun, 10 Jul 2016 09:16:05 +0000 Subject: testsuite: Test named X but want to use Y.hs as source Message-ID: I have some number of test programs that I compile and run as usual. I also want to run them using GHCi, with -fobject-code. So I tried this: def just_ghci( name, opts ): opts.only_ways = ['ghci'] test('unboxedsums1.ghci', just_ghci, compile_and_run, ['-fobject-code']) Now, I don't have a file named `unboxedsums1.ghci.hs`, I want to use `unboxedsums1.hs` and I already have a test named `unboxedsums1`. Any ideas how to do this? Thanks From ben at well-typed.com Mon Jul 11 17:03:33 2016 From: ben at well-typed.com (Ben Gamari) Date: Mon, 11 Jul 2016 19:03:33 +0200 Subject: What fixed this? Message-ID: <87mvlo5cpm.fsf@smart-cactus.org> Does anyone know which commit fixed the testcase in #12381 on master? This testcase currently fails on 8.0.1 and it would be nice if we could merge the fix for 8.0.2. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From thomasmiedema at gmail.com Mon Jul 11 21:08:05 2016 From: thomasmiedema at gmail.com (Thomas Miedema) Date: Mon, 11 Jul 2016 23:08:05 +0200 Subject: T11758 testcase help needed In-Reply-To: <1468070520-sup-6794@sabre> References: <1468070520-sup-6794@sabre> Message-ID: Hi Alex, You're deleting hacks that were added for ancient version of binutils (added in 14a5aadb84c34dbe2bee129ed80fdfa1fb12e3e0 in 2005 and b8a64b8ec9cd3d8f6e3f23e44312c4903eccac45 in 2007). I think that if you submit your patch without a test, there's a good chance it will get accepted. Thomas On Sat, Jul 9, 2016 at 3:22 PM, Edward Z. Yang wrote: > I am not sure if this will work, but how about dumping the assembly and > looking for sign extension? C-- might be easier! > > Excerpts from Alex Dzyoba's message of 2016-07-09 08:25:39 -0400: > > Hi, all! > > > > I was working on #11758, which is about dropping binutils<2.17 hack, and > while > > it was relatively easy to remove the hack itself, I'm not sure how to > add a > > test case for it. > > > > As I understand, after removing the aforementioned hack, native codegen > now > > shouldn't generate sign extension. So my question is how to test it? > Should it > > be Cmm file that will be tested with `compile_cmp_asm` like memcpy in > > "codeGen/should_gen_asm/memcpy.cmm"? Or should I stick to > > the Haskell test? > > > > Thanks, > > Alex Dzyoba > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Jul 11 21:36:27 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 11 Jul 2016 21:36:27 +0000 Subject: Rethinking GHC's approach to managing proposals In-Reply-To: <87shvilevk.fsf@smart-cactus.org> References: <87shvilevk.fsf@smart-cactus.org> Message-ID: Just to be clear: * We are actively seeking feedback about the proposal [4] below. It's not a fait-accompli. * You can join the dialogue by (a) replying to this email, (b) via the "Conversations" tab of [4], namely https://github.com/ghc-proposals/ghc-proposals/pull/1 Doubtless via reddit too! If you don't like something, the more specific and concrete you can be about a better alternative, the better. E.g. Richard's comments on the "conversations" tab both ask questions and propose answers. Bravo! Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Ben | Gamari | Sent: 09 July 2016 21:46 | To: GHC developers ; ghc-users | Subject: Rethinking GHC's approach to managing proposals | | Hello everyone, | | Recently there has been a fair bit of discussion[1,2] around the | mechanisms by which proposed changes to GHC are evaluated. While we have | something of a formal proposal protocol [3], it is not clearly | documented, inconsistently applied, and may be failing to serve a | significant fraction of GHC's potential contributor pool. | | Over the last few weeks, I have been doing a fair amount of reading, | thinking, and discussing to try to piece together a proposal scheme | which better serves our community. | | The resulting proposal [4] is strongly inspired by the RFC process in | place in the Rust community [5], the leaders of which have thought quite | hard about fostering community growth and participation. 
While no | process is perfect, I feel like the Rust process is a good starting | point for discussion, offering enough structure to guide new | contributors through the process while requiring only a modest | investment of developer time. | | To get a sense for how well this will work in our community, I propose | that we attempt to self-host the proposed process. To this end I have | setup a ghc-proposals repository [6] and opened a pull request for | discussion of the process proposal [4]. | | Let's see how this goes. | | Cheers, | | - Ben | | | [1] | https://na01.safelinks.protection.outlook.com/?url=https%3a%2f%2fwww.red | dit.com%2fr%2fhaskell%2fcomments%2f4oyxo2%2fblog_contributing_to_ghc%2f& | data=01%7c01%7csimonpj%40064d.mgd.microsoft.com%7c99735311c5f64cac6a6608 | d3a83a032a%7c72f988bf86f141af91ab2d7cd011db47%7c1&sdata=Hl6GqRWfu7IOQtpE | jpfsNAkv3mmLgNKm2ciQDoMe6HA%3d | [2] | https://na01.safelinks.protection.outlook.com/?url=https%3a%2f%2fwww.red | dit.com%2fr%2fhaskell%2fcomments%2f4isua9%2fghc_development_outsidein%2f | &data=01%7c01%7csimonpj%40064d.mgd.microsoft.com%7c99735311c5f64cac6a660 | 8d3a83a032a%7c72f988bf86f141af91ab2d7cd011db47%7c1&sdata=bj2AQqQirX3X%2f | 4%2fFr05eXFuD4yW0r9Nmrmdg7IGEF%2f8%3d | [3] | https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions/AddingFeatures | [4] https://github.com/ghc-proposals/ghc- | proposals/pull/1/files?short_path=14d66cd#diff- | 14d66cda32248456a5f223b6333c6132 | [5] https://github.com/rust-lang/rfcs | [6] https://github.com/ghc-proposals/ghc-proposals From iavor.diatchki at gmail.com Mon Jul 11 22:00:46 2016 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Mon, 11 Jul 2016 15:00:46 -0700 Subject: Rethinking GHC's approach to managing proposals In-Reply-To: References: <87shvilevk.fsf@smart-cactus.org> Message-ID: Hello, I think this sounds fairly reasonable, but it is hard to say how well it will work in practice until we try it. Some clarifying questions on the intended process: 1. After submitting the initial merge request, is the person making the proposal to wait for any kind of acknowledgment, or just move on to step 2? 2. Is the discussion going to happen on one of the mailing lists, if so which? Is it the job of the proposing person to involve/notify the committee about the discussion? If so, how are they to find out who is on the committee? 3. How does one actually perform step 3, another pull request or simply an e-mail to someone? Typo: two separate bullets in the proposal are labelled as 4. Cheers, -Iavor On Mon, Jul 11, 2016 at 2:36 PM, Simon Peyton Jones via Glasgow-haskell-users wrote: > Just to be clear: > > * We are actively seeking feedback about the proposal [4] below. > It's not a fait-accompli. > > * You can join the dialogue by (a) replying to this email, > (b) via the "Conversations" tab of [4], namely > https://github.com/ghc-proposals/ghc-proposals/pull/1 > Doubtless via reddit too! > > If you don't like something, the more specific and concrete you > can be about a better alternative, the better. E.g. Richard's > comments on the "conversations" tab both ask questions and propose > answers. Bravo! 
> > Simon > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Ben > | Gamari > | Sent: 09 July 2016 21:46 > | To: GHC developers ; ghc-users | users at haskell.org> > | Subject: Rethinking GHC's approach to managing proposals > | > | Hello everyone, > | > | Recently there has been a fair bit of discussion[1,2] around the > | mechanisms by which proposed changes to GHC are evaluated. While we have > | something of a formal proposal protocol [3], it is not clearly > | documented, inconsistently applied, and may be failing to serve a > | significant fraction of GHC's potential contributor pool. > | > | Over the last few weeks, I have been doing a fair amount of reading, > | thinking, and discussing to try to piece together a proposal scheme > | which better serves our community. > | > | The resulting proposal [4] is strongly inspired by the RFC process in > | place in the Rust community [5], the leaders of which have thought quite > | hard about fostering community growth and participation. While no > | process is perfect, I feel like the Rust process is a good starting > | point for discussion, offering enough structure to guide new > | contributors through the process while requiring only a modest > | investment of developer time. > | > | To get a sense for how well this will work in our community, I propose > | that we attempt to self-host the proposed process. To this end I have > | setup a ghc-proposals repository [6] and opened a pull request for > | discussion of the process proposal [4]. > | > | Let's see how this goes. > | > | Cheers, > | > | - Ben > | > | > | [1] > | https://na01.safelinks.protection.outlook.com/?url=https%3a%2f%2fwww.red > | dit.com%2fr%2fhaskell%2fcomments%2f4oyxo2%2fblog_contributing_to_ghc%2f& > | data=01%7c01%7csimonpj%40064d.mgd.microsoft.com%7c99735311c5f64cac6a6608 > | d3a83a032a%7c72f988bf86f141af91ab2d7cd011db47%7c1&sdata=Hl6GqRWfu7IOQtpE > | jpfsNAkv3mmLgNKm2ciQDoMe6HA%3d > | [2] > | https://na01.safelinks.protection.outlook.com/?url=https%3a%2f%2fwww.red > | dit.com%2fr%2fhaskell%2fcomments%2f4isua9%2fghc_development_outsidein%2f > | &data=01%7c01%7csimonpj%40064d.mgd.microsoft.com%7c99735311c5f64cac6a660 > | 8d3a83a032a%7c72f988bf86f141af91ab2d7cd011db47%7c1&sdata=bj2AQqQirX3X%2f > | 4%2fFr05eXFuD4yW0r9Nmrmdg7IGEF%2f8%3d > | [3] > | https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions/AddingFeatures > | [4] https://github.com/ghc-proposals/ghc- > | proposals/pull/1/files?short_path=14d66cd#diff- > | 14d66cda32248456a5f223b6333c6132 > | [5] https://github.com/rust-lang/rfcs > | [6] https://github.com/ghc-proposals/ghc-proposals > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Jul 12 10:58:56 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 12 Jul 2016 10:58:56 +0000 Subject: Windows build In-Reply-To: References: <0307d62076ae4d5695520241c7532767@AM3PR30MB019.064d.mgd.msft.net> <578022ee.48371c0a.49d2e.ffffb45d@mx.google.com> Message-ID: <0e7737a8c7c848d0b5dd8323e9c92ca8@DB4PR30MB030.064d.mgd.msft.net> Indeed – it works great now. Thank you! I have a working Windows build on my Surface Book. Hooray. Many thanks to everyone who helped. 
Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Phyx Sent: 09 July 2016 18:00 To: Simon Peyton Jones via ghc-devs Subject: RE: Windows build Hi Simon, Thomie changed it so the in place gcc can be called from the testsuite. The tests should pass now. Kind regards, Tamar Sent from my Mobile On Jul 8, 2016 23:02, > wrote: Hi Simon, For these tests it shouldn’t matter much so I guess I can change them. The Windows Build guide does ask to put /mingw64/bin/ on your path. The reason I tend not to want to use GHC to compile my c files for the tests is that GHC doesn’t just pass the commands along to gcc. It adds to them. While for these single object files it doesn’t matter (as it just adds a few defines and include paths) It has bitten me multiple times in the past. Particularly when compiling shared libraries for tests since GHC wants to link in the RTS, which makes the files considerably larger and harder to debug so I tend to avoid using GHC to compile just pure C code. It means (with shared libraries) that I’ll end up with multiple RTSs loaded (at least in memory, which means stepping in gdb I have to keep track of the memory locations so I know which one I’m in). So I really wish there was a way to tell GHC not to do this. Also you’ll likely be linking against libraries compiled against other versions of GCC or MSVC so that shouldn’t be an issue. I’ll change the tests tomorrow, but I would much prefer if the testsuite can tell me where the GCC to use is, so I don’t have to use GHC. Regards, Tamar From: Simon Peyton Jones via ghc-devs Sent: Friday, July 8, 2016 22:27 To: ghc-devs at haskell.org Subject: Windows build I’ve completed a successful build on my Surface Book! Thank you. One last glitch. I’m getting the validate failure bellow. No other test requires gcc in my path. GHC itself carefully navigates to its own private gcc. Do we really want this family of tests (half a dozen variants of T11223) to rely on some random gcc, which might or might not be the same as GHC is using? Shouldn’t we use ‘ghc foo.c’? Simon =====> T11223_simple_unused_duplicate_lib(normal) 3371 of 5211 [0, 6, 1] cd "/c/Users/simonpj/AppData/Local/Temp/ghctest-hFCtmi/test spaces/./rts/T11223/T11223_simple_unused_duplicate_lib.run" && $MAKE -s --no-print-directory t_11223_simple_unused_duplicate_lib Wrong exit code (expected 0 , actual 2 ) Stdout: Stderr: /bin/sh: gcc: command not found make[2]: *** [Makefile:42: t_11223_simple_unused_duplicate_lib] Error 127 -------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Fri Jul 15 12:11:56 2016 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Fri, 15 Jul 2016 12:11:56 +0000 Subject: Slow validate currently fails Message-ID: Hi all, Just wanted to say that HEAD (37aeff6) doesn't currently pass validate (in slow mode). See details below. 
==== STAGE 2 TESTS ====

Unexpected results from:
TEST="haddock.Cabal hpc_fork T4114c T4114d dynamic-paper"

SUMMARY for test run started at Fri Jul 15 11:19:15 2016 UTC
 0:43:36 spent to go through
    5264 total tests, which gave rise to
   21166 test cases, of which
    3817 were skipped

     197 had missing libraries
   16928 expected passes
     219 expected failures

       0 caused framework failures
       0 unexpected passes
       4 unexpected failures
       1 unexpected stat failures

Unexpected failures:
   /tmp/ghctest-7VEJbb/test spaces/./dependent/should_compile/dynamic-paper.run  dynamic-paper [exit code non-0] (profasm)
   /tmp/ghctest-7VEJbb/test spaces/./driver/T4114c.run  T4114c [bad exit code] (ghci)
   /tmp/ghctest-7VEJbb/test spaces/./driver/T4114d.run  T4114d [bad exit code] (ghci)
   /tmp/ghctest-7VEJbb/test spaces/../../libraries/hpc/tests/fork/hpc_fork.run  hpc_fork [bad heap profile] (profasm)

Unexpected stat failures:
   /tmp/ghctest-7VEJbb/test spaces/./perf/haddock/haddock.Cabal.run  haddock.Cabal [stat not good enough] (normal)

It validates in fast mode though (haven't tried the default mode).

From ryan.gl.scott at gmail.com  Sun Jul 17 02:02:54 2016
From: ryan.gl.scott at gmail.com (Ryan Scott)
Date: Sat, 16 Jul 2016 22:02:54 -0400
Subject: Request for feedback: deriving strategies syntax
Message-ID:

I'm pursuing a fix to Trac #10598 [1], an issue in which GHC users do not have fine-grained control over which strategy to use when deriving an instance, especially when multiple extensions like -XGeneralizedNewtypeDeriving and -XDeriveAnyClass are enabled simultaneously. I have a working patch up at [2] which would fix the issue, but there's still a lingering question of what the right syntax is to use here. I want to make sure I get this right, so I'm requesting input from the community.

To condense the conversation in [1], there are three means by which you can derive an instance in GHC today:

1. -XGeneralizedNewtypeDeriving
2. -XDeriveAnyClass
3. GHC's builtin algorithms (which are used for deriving Eq, Show, Functor, Generic, Data, etc.)

The problem is that it's sometimes hard to know which of the three will kick in when you say `deriving C`. To resolve this ambiguity, I want to introduce the -XDerivingStrategies extension, where a user can explicitly request which of the above ways to derive an instance.
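To make the ambiguity concrete, here is a minimal sketch (not taken from the thread; the class Describe, its method, and the newtype Age are invented purely for illustration) of a declaration that both -XGeneralizedNewtypeDeriving and -XDeriveAnyClass are able to handle, with different results:

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}
    {-# LANGUAGE DeriveAnyClass             #-}

    -- A class whose only method has a default, so -XDeriveAnyClass can derive
    -- an empty instance for it, and which also has an Int instance, so
    -- -XGeneralizedNewtypeDeriving can derive it for a newtype over Int too.
    class Describe a where
      describe :: a -> String
      describe _ = "<no description>"

    instance Describe Int where
      describe n = "the Int " ++ show n

    newtype Age = Age Int
      deriving (Show, Describe)

    -- With only -XGeneralizedNewtypeDeriving, `describe (Age 42)` reuses the
    -- Int instance and yields "the Int 42"; with only -XDeriveAnyClass it uses
    -- the default method and yields "<no description>". With both extensions
    -- enabled, the source alone does not say which strategy applies.

    main :: IO ()
    main = putStrLn (describe (Age 42))

As described later in this thread, when both extensions are enabled GHC currently resolves the ambiguity in favour of -XDeriveAnyClass and emits a warning, so a program along these lines would print "<no description>" rather than "the Int 42".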
Here are some of the previously proposed syntaxes for this feature, with their perceived pros and cons:

----- Pragmas
  * Examples:
    - newtype T a = T a deriving ({-# BUILTIN #-} Eq, {-# GND #-} Ord, {-# DAC #-} Read, Show)
    - deriving {-# BUILTIN #-} instance Functor T
  * Pros:
    - Backwards compatible
    - Requires no changes to Template Haskell
  * Cons:
    - Unlike other pragmas, these ones can affect the semantics of a program

----- Type synonyms
  * Examples:
    - newtype T a = T a deriving (Builtin Eq, GND Ord, DAC Read, Show)
    - deriving instance Builtin (Functor T)
  * Pros:
    - Requires no Template Haskell or parser changes, just some magic in the typechecker
    - Backwards compatible (back to GHC 7.6)
  * Cons:
    - Some developers objected to the idea of imbuing type synonyms with magical properties

----- Multiple deriving clauses, plus new keywords
  * Examples:
    - newtype T a = T a
        deriving Show
        deriving builtin instance (Eq, Foldable)
        deriving newtype instance Ord
        deriving anyclass instance Read
    - deriving builtin instance Functor T
  * Pros:
    - Doesn't suffer from the same semantic issues as the other suggestions
    - (Arguably) the most straightforward-looking syntax
  * Cons:
    - Requires breaking changes to Template Haskell
    - Changes the parser and syntax significantly

Several GHC devs objected to the first two of the above suggestions in [1], so I chose to implement the "Multiple deriving clauses, plus new keywords" option in [2]. However, I'd appreciate further discussion on the above options, which one you prefer, and if you have other suggestions for syntax to use.

Ryan S.
-----
[1] https://ghc.haskell.org/trac/ghc/ticket/10598
[2] https://phabricator.haskell.org/D2280

From oleg.grenrus at iki.fi  Sun Jul 17 09:10:41 2016
From: oleg.grenrus at iki.fi (Oleg Grenrus)
Date: Sun, 17 Jul 2016 12:10:41 +0300
Subject: Request for feedback: deriving strategies syntax
In-Reply-To:
References:
Message-ID: <3F02D2BC-CC3B-4670-9CD8-E118994574A5@iki.fi>

Should we test drive https://github.com/ghc-proposals/ghc-proposals on this proposal?

- Oleg

> On 17 Jul 2016, at 05:02, Ryan Scott wrote:
>
> I'm pursuing a fix to Trac #10598 [1], an issue in which GHC users do not have fine-grained control over which strategy to use when deriving an instance, especially when multiple extensions like -XGeneralizedNewtypeDeriving and -XDeriveAnyClass are enabled simultaneously. I have a working patch up at [2] which would fix the issue, but there's still a lingering question of what the right syntax is to use here. I want to make sure I get this right, so I'm requesting input from the community.
> > Here are some of the previously proposed syntaxes for this feature, > with their perceived pros and cons: > > ----- Pragmas > * Examples: > - newtype T a = T a deriving ({-# BUILTIN #-} Eq, {-# GND #-} > Ord, {-# DAC #-} Read, Show) > - deriving {-# BUILTIN #-} instance Functor T > * Pros: > - Backwards compatible > - Requires no changes to Template Haskell > * Cons: > - Unlike other pragmas, these ones can affect the semantics of a program > ----- Type synonyms > * Examples: > - newtype T a = T a deriving (Builtin Eq, GND Ord, DAC Read, Show) > - deriving instance Builtin (Functor T) > * Pros: > - Requires no Template Haskell or parser changes, just some > magic in the typechecker > - Backwards compatible (back to GHC 7.6) > * Cons: > - Some developers objected to the idea of imbuing type synonyms > with magical properties > ----- Multiple deriving clauses, plus new keywords > * Examples: > - newtype T a = T a > deriving Show > deriving builtin instance (Eq, Foldable) > deriving newtype instance Ord > deriving anyclass instance Read > - deriving builtin instance Functor T > * Pros: > - Doesn't suffer from the same semantic issues as the other suggestions > - (Arguably) the most straightforward-looking syntax > * Cons: > - Requires breaking changes to Template Haskell > - Changes the parser and syntax significantly > > Several GHC devs objected to the first two of the above suggestions in > [1], so I chose to implement the "Multiple deriving clauses, plus new > keywords" option in [2]. However, I'd appreciate further discussion on > the above options, which one you prefer, and if you have other > suggestions for syntax to use. > > Ryan S. > ----- > [1] https://ghc.haskell.org/trac/ghc/ticket/10598 > [2] https://phabricator.haskell.org/D2280 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From moritz at lichtzwerge.de Sun Jul 17 09:15:51 2016 From: moritz at lichtzwerge.de (Moritz Angermann) Date: Sun, 17 Jul 2016 17:15:51 +0800 Subject: Request for feedback: deriving strategies syntax In-Reply-To: <3F02D2BC-CC3B-4670-9CD8-E118994574A5@iki.fi> References: <3F02D2BC-CC3B-4670-9CD8-E118994574A5@iki.fi> Message-ID: <288FCCD5-30F9-45EE-944B-2BB9168210C7@lichtzwerge.de> I was going to propose this as well! Would probably provide valuable practicability feedback to the proposed proposal process. - Moritz > On Jul 17, 2016, at 5:10 PM, Oleg Grenrus wrote: > > Should we test drive https://github.com/ghc-proposals/ghc-proposals > on this proposal? > > - Oleg > >> On 17 Jul 2016, at 05:02, Ryan Scott wrote: >> >> I'm pursuing a fix to Trac #10598 [1], an issue in which GHC users do >> not have fine-grained control over which strategy to use when deriving >> an instance, especially when multiple extensions like >> -XGeneralizedNewtypeDeriving and -XDeriveAnyClass are enabled >> simultaneously. I have a working patch up at [2] which would fix the >> issue, but there's still a lingering question of what the right syntax >> is to use here. I want to make sure I get this right, so I'm >> requesting input from the community. 
>> >> To condense the conversation in [1], there are three means by which >> you can derive an instance in GHC today: >> >> 1. -XGeneralizedNewtypeDeriving >> 2. -XDeriveAnyClass >> 3. GHC's builtin algorithms (which are used for deriving Eq, Show, >> Functor, Generic, Data, etc.) >> >> The problem is that it's sometimes hard to know which of the three >> will kick in when you say `deriving C`. To resolve this ambiguity, I >> want to introduce the -XDerivingStrategies extension, where a user can >> explicitly request which of the above ways to derive an instance. >> >> Here are some of the previously proposed syntaxes for this feature, >> with their perceived pros and cons: >> >> ----- Pragmas >> * Examples: >> - newtype T a = T a deriving ({-# BUILTIN #-} Eq, {-# GND #-} >> Ord, {-# DAC #-} Read, Show) >> - deriving {-# BUILTIN #-} instance Functor T >> * Pros: >> - Backwards compatible >> - Requires no changes to Template Haskell >> * Cons: >> - Unlike other pragmas, these ones can affect the semantics of a program >> ----- Type synonyms >> * Examples: >> - newtype T a = T a deriving (Builtin Eq, GND Ord, DAC Read, Show) >> - deriving instance Builtin (Functor T) >> * Pros: >> - Requires no Template Haskell or parser changes, just some >> magic in the typechecker >> - Backwards compatible (back to GHC 7.6) >> * Cons: >> - Some developers objected to the idea of imbuing type synonyms >> with magical properties >> ----- Multiple deriving clauses, plus new keywords >> * Examples: >> - newtype T a = T a >> deriving Show >> deriving builtin instance (Eq, Foldable) >> deriving newtype instance Ord >> deriving anyclass instance Read >> - deriving builtin instance Functor T >> * Pros: >> - Doesn't suffer from the same semantic issues as the other suggestions >> - (Arguably) the most straightforward-looking syntax >> * Cons: >> - Requires breaking changes to Template Haskell >> - Changes the parser and syntax significantly >> >> Several GHC devs objected to the first two of the above suggestions in >> [1], so I chose to implement the "Multiple deriving clauses, plus new >> keywords" option in [2]. However, I'd appreciate further discussion on >> the above options, which one you prefer, and if you have other >> suggestions for syntax to use. >> >> Ryan S. >> ----- >> [1] https://ghc.haskell.org/trac/ghc/ticket/10598 >> [2] https://phabricator.haskell.org/D2280 >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs ————————————————— Moritz Angermann +49 170 54 33 0 74 moritz at lichtzwerge.de lichtzwerge GmbH Raiffeisenstr. 8 93185 Michelsneukirchen Amtsgericht Regensburg HRB 14723 Geschäftsführung: Moritz Angermann, Ralf Sangl USt-Id: DE291948767 Diese E-Mail enthält vertrauliche und/oder rechtlich geschützte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrtümlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Mail. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Mail ist nicht gestattet. This e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and destroy this e-mail. 
Any unauthorized copying, disclosure or distribution of the material in this e-mail is strictly forbidden. From vagarenko at gmail.com Sun Jul 17 10:19:17 2016 From: vagarenko at gmail.com (Alexey Vagarenko) Date: Sun, 17 Jul 2016 15:19:17 +0500 Subject: Request for feedback: deriving strategies syntax In-Reply-To: References: Message-ID: > > ----- Pragmas > * Examples: > - newtype T a = T a deriving ({-# BUILTIN #-} Eq, {-# GND #-} > Ord, {-# DAC #-} Read, Show) > - deriving {-# BUILTIN #-} instance Functor T > * Pros: > - Backwards compatible > - Requires no changes to Template Haskell I can't see how this doesn't require changes to Template Haskell. In order to generate a newtype declaration via TH one must use `NewtypeD Cxt Name [TyVarBndr ] (Maybe Kind ) Con Cxt ` where last `Cxt` param is a list of instanses to be derived, but `Cxt` is just `[Type]` and `Type` doesn't take any Pragmas. 2016-07-17 7:02 GMT+05:00 Ryan Scott : > I'm pursuing a fix to Trac #10598 [1], an issue in which GHC users do > not have fine-grained control over which strategy to use when deriving > an instance, especially when multiple extensions like > -XGeneralizedNewtypeDeriving and -XDeriveAnyClass are enabled > simultaneously. I have a working patch up at [2] which would fix the > issue, but there's still a lingering question of what the right syntax > is to use here. I want to make sure I get this right, so I'm > requesting input from the community. > > To condense the conversation in [1], there are three means by which > you can derive an instance in GHC today: > > 1. -XGeneralizedNewtypeDeriving > 2. -XDeriveAnyClass > 3. GHC's builtin algorithms (which are used for deriving Eq, Show, > Functor, Generic, Data, etc.) > > The problem is that it's sometimes hard to know which of the three > will kick in when you say `deriving C`. To resolve this ambiguity, I > want to introduce the -XDerivingStrategies extension, where a user can > explicitly request which of the above ways to derive an instance. > > Here are some of the previously proposed syntaxes for this feature, > with their perceived pros and cons: > > ----- Pragmas > * Examples: > - newtype T a = T a deriving ({-# BUILTIN #-} Eq, {-# GND #-} > Ord, {-# DAC #-} Read, Show) > - deriving {-# BUILTIN #-} instance Functor T > * Pros: > - Backwards compatible > - Requires no changes to Template Haskell > * Cons: > - Unlike other pragmas, these ones can affect the semantics of a > program > ----- Type synonyms > * Examples: > - newtype T a = T a deriving (Builtin Eq, GND Ord, DAC Read, Show) > - deriving instance Builtin (Functor T) > * Pros: > - Requires no Template Haskell or parser changes, just some > magic in the typechecker > - Backwards compatible (back to GHC 7.6) > * Cons: > - Some developers objected to the idea of imbuing type synonyms > with magical properties > ----- Multiple deriving clauses, plus new keywords > * Examples: > - newtype T a = T a > deriving Show > deriving builtin instance (Eq, Foldable) > deriving newtype instance Ord > deriving anyclass instance Read > - deriving builtin instance Functor T > * Pros: > - Doesn't suffer from the same semantic issues as the other > suggestions > - (Arguably) the most straightforward-looking syntax > * Cons: > - Requires breaking changes to Template Haskell > - Changes the parser and syntax significantly > > Several GHC devs objected to the first two of the above suggestions in > [1], so I chose to implement the "Multiple deriving clauses, plus new > keywords" option in [2]. 
However, I'd appreciate further discussion on > the above options, which one you prefer, and if you have other > suggestions for syntax to use. > > Ryan S. > ----- > [1] https://ghc.haskell.org/trac/ghc/ticket/10598 > [2] https://phabricator.haskell.org/D2280 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Sun Jul 17 10:36:28 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Sun, 17 Jul 2016 12:36:28 +0200 Subject: Request for feedback: deriving strategies syntax In-Reply-To: <3F02D2BC-CC3B-4670-9CD8-E118994574A5@iki.fi> References: <3F02D2BC-CC3B-4670-9CD8-E118994574A5@iki.fi> Message-ID: <8760s44klv.fsf@smart-cactus.org> Oleg Grenrus writes: > Should we test drive https://github.com/ghc-proposals/ghc-proposals > on this proposal? > I think it would be a great idea. That being said, given that it's not be approved yet, I'm in no position to require it. Ryan, I'll leave this call up to you. If you would like to write up a proposal using the template in the repository then by all means let's give it a try. If not, then no worries; we can continue here. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From ryan.gl.scott at gmail.com Sun Jul 17 12:56:44 2016 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Sun, 17 Jul 2016 08:56:44 -0400 Subject: Request for feedback: deriving strategies syntax In-Reply-To: <8760s44klv.fsf@smart-cactus.org> References: <3F02D2BC-CC3B-4670-9CD8-E118994574A5@iki.fi> <8760s44klv.fsf@smart-cactus.org> Message-ID: Ben, > I think it would be a great idea. That being said, given that it's not > be approved yet, I'm in no position to require it. Ryan, I'll leave this > call up to you. If you would like to write up a proposal using the > template in the repository then by all means let's give it a try. > If not, then no worries; we can continue here. I hadn't thought of using ghc-proposals for this, and since it's still in a nascent state, I'll opt to continue using the GHC devs mailing list for this dicussion. Alexey, > I can't see how this doesn't require changes to Template Haskell. You are correct, I got my wires crossed when trying to recall the details. I think what I (sloppily) remembered was that in an earlier revision of https://phabricator.haskell.org/D2280, I had implemented a pragma-based approach that didn't require a language extension. But I now consider that a mistake, so I've introduced the -XDerivingStrategies extension, which should be required regardless of what syntax we decide to adopt. Ryan S. On Sun, Jul 17, 2016 at 6:36 AM, Ben Gamari wrote: > Oleg Grenrus writes: > >> Should we test drive https://github.com/ghc-proposals/ghc-proposals >> on this proposal? >> > I think it would be a great idea. That being said, given that it's not > be approved yet, I'm in no position to require it. Ryan, I'll leave this > call up to you. If you would like to write up a proposal using the > template in the repository then by all means let's give it a try. > If not, then no worries; we can continue here. 
> > Cheers, > > - Ben > From eacameron at gmail.com Sun Jul 17 12:59:44 2016 From: eacameron at gmail.com (Elliot Cameron) Date: Sun, 17 Jul 2016 08:59:44 -0400 Subject: Request for feedback: deriving strategies syntax In-Reply-To: References: <3F02D2BC-CC3B-4670-9CD8-E118994574A5@iki.fi> <8760s44klv.fsf@smart-cactus.org> Message-ID: Just a quick thought: The term "built-in" seems a bit myopic IMO since all these extensions are in a sense built-in, and especially if any of them make it into Haskell 2020. I wonder if "standard" would be better or something similar. On Jul 17, 2016 08:57, "Ryan Scott" wrote: > Ben, > > > I think it would be a great idea. That being said, given that it's not > > be approved yet, I'm in no position to require it. Ryan, I'll leave this > > call up to you. If you would like to write up a proposal using the > > template in the repository then by all means let's give it a try. > > If not, then no worries; we can continue here. > > I hadn't thought of using ghc-proposals for this, and since it's still > in a nascent state, I'll opt to continue using the GHC devs mailing > list for this dicussion. > > > Alexey, > > > I can't see how this doesn't require changes to Template Haskell. > > You are correct, I got my wires crossed when trying to recall the > details. I think what I (sloppily) remembered was that in an earlier > revision of https://phabricator.haskell.org/D2280, I had implemented a > pragma-based approach that didn't require a language extension. But I > now consider that a mistake, so I've introduced the > -XDerivingStrategies extension, which should be required regardless of > what syntax we decide to adopt. > > Ryan S. > > On Sun, Jul 17, 2016 at 6:36 AM, Ben Gamari wrote: > > Oleg Grenrus writes: > > > >> Should we test drive https://github.com/ghc-proposals/ghc-proposals > >> on this proposal? > >> > > I think it would be a great idea. That being said, given that it's not > > be approved yet, I'm in no position to require it. Ryan, I'll leave this > > call up to you. If you would like to write up a proposal using the > > template in the repository then by all means let's give it a try. > > If not, then no worries; we can continue here. > > > > Cheers, > > > > - Ben > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryan.gl.scott at gmail.com Sun Jul 17 13:21:21 2016 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Sun, 17 Jul 2016 09:21:21 -0400 Subject: Request for feedback: deriving strategies syntax In-Reply-To: References: <3F02D2BC-CC3B-4670-9CD8-E118994574A5@iki.fi> <8760s44klv.fsf@smart-cactus.org> Message-ID: That's an interesting thought. I only chose "builtin" since it has a history of being used for this purpose within GHC's internals [1]. That being said, "standard" does have its own problems, since several of the typeclasses covered by it (Data, Generic(1), Lift, etc.) are not part of any Haskell standard. (I don't know if that's the connotation you aimed for, but that's what I glean from reading it.) I want something that conveys the fact that when deriving this instance, GHC is using some domain-specific knowledge to derive the instance. If not "builtin" or "standard", some other possibilities I can think of are "native", "original", or "specialized". I don't know if I have a strong preference for one in particular. 
Another suggestion previously tossed around was "default", but I decided against that since that keyword is also used in -XDefaultSignatures, which very much has a generic programming connotation, and I didn't want users to confuse it with the -XDeriveAnyClass strategy. Ryan S. ----- [1] http://git.haskell.org/ghc.git/blob/5df92f6776b31b375a80865e7db1f330d929c18f:/compiler/typecheck/TcGenDeriv.hs#l116 On Sun, Jul 17, 2016 at 8:59 AM, Elliot Cameron wrote: > Just a quick thought: The term "built-in" seems a bit myopic IMO since all > these extensions are in a sense built-in, and especially if any of them make > it into Haskell 2020. I wonder if "standard" would be better or something > similar. > > > On Jul 17, 2016 08:57, "Ryan Scott" wrote: >> >> Ben, >> >> > I think it would be a great idea. That being said, given that it's not >> > be approved yet, I'm in no position to require it. Ryan, I'll leave this >> > call up to you. If you would like to write up a proposal using the >> > template in the repository then by all means let's give it a try. >> > If not, then no worries; we can continue here. >> >> I hadn't thought of using ghc-proposals for this, and since it's still >> in a nascent state, I'll opt to continue using the GHC devs mailing >> list for this dicussion. >> >> >> Alexey, >> >> > I can't see how this doesn't require changes to Template Haskell. >> >> You are correct, I got my wires crossed when trying to recall the >> details. I think what I (sloppily) remembered was that in an earlier >> revision of https://phabricator.haskell.org/D2280, I had implemented a >> pragma-based approach that didn't require a language extension. But I >> now consider that a mistake, so I've introduced the >> -XDerivingStrategies extension, which should be required regardless of >> what syntax we decide to adopt. >> >> Ryan S. >> >> On Sun, Jul 17, 2016 at 6:36 AM, Ben Gamari wrote: >> > Oleg Grenrus writes: >> > >> >> Should we test drive https://github.com/ghc-proposals/ghc-proposals >> >> on this proposal? >> >> >> > I think it would be a great idea. That being said, given that it's not >> > be approved yet, I'm in no position to require it. Ryan, I'll leave this >> > call up to you. If you would like to write up a proposal using the >> > template in the repository then by all means let's give it a try. >> > If not, then no worries; we can continue here. >> > >> > Cheers, >> > >> > - Ben >> > >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From carter.schonwald at gmail.com Sun Jul 17 19:59:27 2016 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sun, 17 Jul 2016 15:59:27 -0400 Subject: Request for feedback: deriving strategies syntax In-Reply-To: References: <3F02D2BC-CC3B-4670-9CD8-E118994574A5@iki.fi> <8760s44klv.fsf@smart-cactus.org> Message-ID: Builtin sounds fine to me personally. WiredIn would also be valid , though that would overlap with some other ghc internals terminology. When the deriving strategies extension isn't enabled , what will the new semantics be when more than one strategy applies? What's our new answer there ? On Sunday, July 17, 2016, Ryan Scott wrote: > That's an interesting thought. I only chose "builtin" since it has a > history of being used for this purpose within GHC's internals [1]. > > That being said, "standard" does have its own problems, since several > of the typeclasses covered by it (Data, Generic(1), Lift, etc.) 
are > not part of any Haskell standard. (I don't know if that's the > connotation you aimed for, but that's what I glean from reading it.) I > want something that conveys the fact that when deriving this instance, > GHC is using some domain-specific knowledge to derive the instance. > > If not "builtin" or "standard", some other possibilities I can think > of are "native", "original", or "specialized". I don't know if I have > a strong preference for one in particular. > > Another suggestion previously tossed around was "default", but I > decided against that since that keyword is also used in > -XDefaultSignatures, which very much has a generic programming > connotation, and I didn't want users to confuse it with the > -XDeriveAnyClass strategy. > > Ryan S. > ----- > [1] > http://git.haskell.org/ghc.git/blob/5df92f6776b31b375a80865e7db1f330d929c18f:/compiler/typecheck/TcGenDeriv.hs#l116 > > On Sun, Jul 17, 2016 at 8:59 AM, Elliot Cameron > wrote: > > Just a quick thought: The term "built-in" seems a bit myopic IMO since > all > > these extensions are in a sense built-in, and especially if any of them > make > > it into Haskell 2020. I wonder if "standard" would be better or something > > similar. > > > > > > On Jul 17, 2016 08:57, "Ryan Scott" > wrote: > >> > >> Ben, > >> > >> > I think it would be a great idea. That being said, given that it's not > >> > be approved yet, I'm in no position to require it. Ryan, I'll leave > this > >> > call up to you. If you would like to write up a proposal using the > >> > template in the repository then by all means let's give it a try. > >> > If not, then no worries; we can continue here. > >> > >> I hadn't thought of using ghc-proposals for this, and since it's still > >> in a nascent state, I'll opt to continue using the GHC devs mailing > >> list for this dicussion. > >> > >> > >> Alexey, > >> > >> > I can't see how this doesn't require changes to Template Haskell. > >> > >> You are correct, I got my wires crossed when trying to recall the > >> details. I think what I (sloppily) remembered was that in an earlier > >> revision of https://phabricator.haskell.org/D2280, I had implemented a > >> pragma-based approach that didn't require a language extension. But I > >> now consider that a mistake, so I've introduced the > >> -XDerivingStrategies extension, which should be required regardless of > >> what syntax we decide to adopt. > >> > >> Ryan S. > >> > >> On Sun, Jul 17, 2016 at 6:36 AM, Ben Gamari > wrote: > >> > Oleg Grenrus > writes: > >> > > >> >> Should we test drive https://github.com/ghc-proposals/ghc-proposals > >> >> on this proposal? > >> >> > >> > I think it would be a great idea. That being said, given that it's not > >> > be approved yet, I'm in no position to require it. Ryan, I'll leave > this > >> > call up to you. If you would like to write up a proposal using the > >> > template in the repository then by all means let's give it a try. > >> > If not, then no worries; we can continue here. > >> > > >> > Cheers, > >> > > >> > - Ben > >> > > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
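To make the ambiguity behind that question concrete, here is a minimal sketch (hypothetical class and type names, assuming both -XGeneralizedNewtypeDeriving and -XDeriveAnyClass are switched on) where the two strategies would produce observably different instances:

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}
    {-# LANGUAGE DeriveAnyClass #-}
    module Example where

    -- A class with a default method, so -XDeriveAnyClass can derive an
    -- empty instance for it.
    class Describe a where
      describe :: a -> String
      describe _ = "no description"

    instance Describe Int where
      describe n = "an Int: " ++ show n

    -- Both strategies apply here: GeneralizedNewtypeDeriving would reuse
    -- the Describe Int instance by coercion, while DeriveAnyClass would
    -- generate an empty instance that falls back on the default method.
    -- The two differ observably at, e.g., describe (Age 3).
    newtype Age = Age Int
      deriving Describe

As the reply below spells out, today's GHC settles this clash in favour of -XDeriveAnyClass (with a warning); -XDerivingStrategies is about letting the programmer state the intended choice explicitly.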
URL: From ryan.gl.scott at gmail.com Sun Jul 17 20:17:33 2016 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Sun, 17 Jul 2016 16:17:33 -0400 Subject: Request for feedback: deriving strategies syntax In-Reply-To: References: <3F02D2BC-CC3B-4670-9CD8-E118994574A5@iki.fi> <8760s44klv.fsf@smart-cactus.org> Message-ID: > When the deriving strategies extension isn't enabled , what will the new semantics be when more than one strategy applies? What's our new answer there ? GHC already has a process for resolving which strategy to pick in a plain old deriving statement, but it isn't well documented. I've added a section to the users' guide in [1], but I'll summarize it here. When processing a deriving statement without an explicit strategy: 1. If the typeclass is built-in (or wired-in, as you suggested), and any necessary language extension is enabled (e.g., -XDeriveFunctor for Functor), then GHC will use the corresponding built-in algorithm to derive the instance. 2. Otherwise, GHC checks if either -XGeneralizedNewtypeDeriving or -XDeriveAnyClass are enabled, and if the class is able to be derived with one of those approaches, GHC does so. If BOTH extensions are enabled and the class can be derived with either approach, GHC defaults to -XDeriveAnyClass (but also emitting a warning about the choice it made to resolve the ambiguity). 3. Otherwise, GHC errors. Again, this is current GHC behavior, not a new or proposed feature. Ryan S. ----- [1] https://phabricator.haskell.org/D2280#02e78429 From eir at cis.upenn.edu Mon Jul 18 02:24:28 2016 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Sun, 17 Jul 2016 22:24:28 -0400 Subject: Request for feedback: deriving strategies syntax In-Reply-To: References: Message-ID: <4E7A309C-87E2-4947-A191-CDCA391C8E83@cis.upenn.edu> Of the three options from Ryan's first email in this thread, only the third is palatable to me (with the separate `deriving` clauses). I would like to mention that I don't see any real obstacles to something like > newtype ... > deriving (Eq, default ToJSON, builtin Ord, newtype Monoid) That is, one `deriving` clause where each element is optionally prefixed with a keyword. On the ticket (#10598), it is suggested that parsing these would be hard. I agree that parsing these would be annoying, but I do not think that they are actually ambiguous. Avoiding a few hours of pain in the parser should not be our motivation for choosing a syntax we will all live with for years. For `default` and `newtype`, parsing is actually easy. If we want to keep the `builtin` pseudo-keyword, we could always parse as a type and then have some non-parser code examine the resulting AST and sort it out. (This is done in several other dark corners of the parser already.) Separately, I'm not enamored of the `builtin` keyword. The one idea I can suggest in this space (somewhat tongue-in-cheek, but feel free to take it seriously) is `bespoke` -- after all, each "builtin" instance must be generated by code written specifically for that class, which fits the English definition of bespoke nicely. "Which deriving mechanism do want?" "The bespoke one, please." And then GHC can boast that it has the classiest keyword of any programming language. 
:) Richard On Jul 16, 2016, at 10:02 PM, Ryan Scott wrote: > I'm pursuing a fix to Trac #10598 [1], an issue in which GHC users do > not have fine-grained control over which strategy to use when deriving > an instance, especially when multiple extensions like > -XGeneralizedNewtypeDeriving and -XDeriveAnyClass are enabled > simultaneously. I have a working patch up at [2] which would fix the > issue, but there's still a lingering question of what the right syntax > is to use here. I want to make sure I get this right, so I'm > requesting input from the community. > > To condense the conversation in [1], there are three means by which > you can derive an instance in GHC today: > > 1. -XGeneralizedNewtypeDeriving > 2. -XDeriveAnyClass > 3. GHC's builtin algorithms (which are used for deriving Eq, Show, > Functor, Generic, Data, etc.) > > The problem is that it's sometimes hard to know which of the three > will kick in when you say `deriving C`. To resolve this ambiguity, I > want to introduce the -XDerivingStrategies extension, where a user can > explicitly request which of the above ways to derive an instance. > > Here are some of the previously proposed syntaxes for this feature, > with their perceived pros and cons: > > ----- Pragmas > * Examples: > - newtype T a = T a deriving ({-# BUILTIN #-} Eq, {-# GND #-} > Ord, {-# DAC #-} Read, Show) > - deriving {-# BUILTIN #-} instance Functor T > * Pros: > - Backwards compatible > - Requires no changes to Template Haskell > * Cons: > - Unlike other pragmas, these ones can affect the semantics of a program > ----- Type synonyms > * Examples: > - newtype T a = T a deriving (Builtin Eq, GND Ord, DAC Read, Show) > - deriving instance Builtin (Functor T) > * Pros: > - Requires no Template Haskell or parser changes, just some > magic in the typechecker > - Backwards compatible (back to GHC 7.6) > * Cons: > - Some developers objected to the idea of imbuing type synonyms > with magical properties > ----- Multiple deriving clauses, plus new keywords > * Examples: > - newtype T a = T a > deriving Show > deriving builtin instance (Eq, Foldable) > deriving newtype instance Ord > deriving anyclass instance Read > - deriving builtin instance Functor T > * Pros: > - Doesn't suffer from the same semantic issues as the other suggestions > - (Arguably) the most straightforward-looking syntax > * Cons: > - Requires breaking changes to Template Haskell > - Changes the parser and syntax significantly > > Several GHC devs objected to the first two of the above suggestions in > [1], so I chose to implement the "Multiple deriving clauses, plus new > keywords" option in [2]. However, I'd appreciate further discussion on > the above options, which one you prefer, and if you have other > suggestions for syntax to use. > > Ryan S. > ----- > [1] https://ghc.haskell.org/trac/ghc/ticket/10598 > [2] https://phabricator.haskell.org/D2280 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From mail at andres-loeh.de Mon Jul 18 06:54:25 2016 From: mail at andres-loeh.de (Andres Loeh) Date: Mon, 18 Jul 2016 08:54:25 +0200 Subject: Request for feedback: deriving strategies syntax In-Reply-To: References: Message-ID: Hi Ryan and everyone else. Thanks for summarizing the options. As I've said before, I don't like design option 1 because I think influencing program semantics in such a drastic way isn't what pragmas should be used for. 
Re design option 2, what I dislike about it is indeed that the idea is that Builtin/GND/DAC are type synonyms, which aren't supposed to have any meaning in this context. I do like, however, that they're "first-class" objects in this proposal, and that it'd be easy to use something like the kind system to make more of them, and that it'd open up a road to a future where we perhaps could programmatically add more options. The objects probably shouldn't be type synonyms, but they could be special datatypes or type families, perhaps. I need to think about this some more. It ties in into some other things I'd consider nice-to-have, but I need more time to put that into a coherent story. There's nothing obviously wrong with option 3, but it seems relatively verbose (I'd prefer Richard's syntax), and feels more ad-hoc. I don't mind "builtin" to refer to the deriving mechanism, but again, I also don't mind Richard's suggestion of using "bespoke". Another suggestion would be to use "magic". Cheers, Andres On Sun, Jul 17, 2016 at 4:02 AM, Ryan Scott wrote: > I'm pursuing a fix to Trac #10598 [1], an issue in which GHC users do > not have fine-grained control over which strategy to use when deriving > an instance, especially when multiple extensions like > -XGeneralizedNewtypeDeriving and -XDeriveAnyClass are enabled > simultaneously. I have a working patch up at [2] which would fix the > issue, but there's still a lingering question of what the right syntax > is to use here. I want to make sure I get this right, so I'm > requesting input from the community. > > To condense the conversation in [1], there are three means by which > you can derive an instance in GHC today: > > 1. -XGeneralizedNewtypeDeriving > 2. -XDeriveAnyClass > 3. GHC's builtin algorithms (which are used for deriving Eq, Show, > Functor, Generic, Data, etc.) > > The problem is that it's sometimes hard to know which of the three > will kick in when you say `deriving C`. To resolve this ambiguity, I > want to introduce the -XDerivingStrategies extension, where a user can > explicitly request which of the above ways to derive an instance. 
> > Here are some of the previously proposed syntaxes for this feature, > with their perceived pros and cons: > > ----- Pragmas > * Examples: > - newtype T a = T a deriving ({-# BUILTIN #-} Eq, {-# GND #-} > Ord, {-# DAC #-} Read, Show) > - deriving {-# BUILTIN #-} instance Functor T > * Pros: > - Backwards compatible > - Requires no changes to Template Haskell > * Cons: > - Unlike other pragmas, these ones can affect the semantics of a program > ----- Type synonyms > * Examples: > - newtype T a = T a deriving (Builtin Eq, GND Ord, DAC Read, Show) > - deriving instance Builtin (Functor T) > * Pros: > - Requires no Template Haskell or parser changes, just some > magic in the typechecker > - Backwards compatible (back to GHC 7.6) > * Cons: > - Some developers objected to the idea of imbuing type synonyms > with magical properties > ----- Multiple deriving clauses, plus new keywords > * Examples: > - newtype T a = T a > deriving Show > deriving builtin instance (Eq, Foldable) > deriving newtype instance Ord > deriving anyclass instance Read > - deriving builtin instance Functor T > * Pros: > - Doesn't suffer from the same semantic issues as the other suggestions > - (Arguably) the most straightforward-looking syntax > * Cons: > - Requires breaking changes to Template Haskell > - Changes the parser and syntax significantly > > Several GHC devs objected to the first two of the above suggestions in > [1], so I chose to implement the "Multiple deriving clauses, plus new > keywords" option in [2]. However, I'd appreciate further discussion on > the above options, which one you prefer, and if you have other > suggestions for syntax to use. > > Ryan S. > ----- > [1] https://ghc.haskell.org/trac/ghc/ticket/10598 > [2] https://phabricator.haskell.org/D2280 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From eir at cis.upenn.edu Mon Jul 18 13:17:44 2016 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Mon, 18 Jul 2016 09:17:44 -0400 Subject: Request for feedback: deriving strategies syntax In-Reply-To: References: Message-ID: > On Jul 18, 2016, at 2:54 AM, Andres Loeh wrote: > > There's nothing obviously wrong with option 3, but it seems relatively > verbose (I'd prefer Richard's syntax), and feels more ad-hoc. I don't > mind "builtin" to refer to the deriving mechanism, but again, I also > don't mind Richard's suggestion of using "bespoke". Another suggestion > would be to use "magic". I thought about verbosity here, and it's not clear which one is more verbose. For example, I frequently define a new newtype and then wish to use GND to derive a whole host of instances. In this case (is it common?), `deriving (X, Y) deriving newtype (A,B,C,D,E,F)` is shorter than putting newtype on each class name. I suppose we could provide both options, but that may be one bridge too far. Richard From ryan.gl.scott at gmail.com Mon Jul 18 13:59:32 2016 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Mon, 18 Jul 2016 09:59:32 -0400 Subject: Request for feedback: deriving strategies syntax In-Reply-To: References: Message-ID: Andres, > The objects probably shouldn't be type synonyms, but > they could be special datatypes or type families, perhaps. I considered that - we already have some special datatypes, type families, and type classes currently. 
However, neither datatypes nor type families are allowed to appear as the outermost type in an instance declaration (unless we bake in a very prominent exception to this rule), and if we imbued type classes with this magic, one might think that "deriving (GND Eq)" means we're deriving an instance for the magical GND class, not Eq. So those approaches don't sound satisfying to me on a cursory examination. Richard, > The one idea I can suggest in this space (somewhat tongue-in-cheek, but feel free to take it seriously) is `bespoke` It might be a tongue-in-cheek suggestion, but I _really_ like it. It captures the intended semantics better than any other previous suggestion, I think. And we're already going to be appropriating a new keyword with "anyclass", so why not take "bespoke" as well? :) Please stop me if I've slipped into madness here. > I thought about verbosity here, and it's not clear which one is more verbose. For example, I frequently define a new newtype and then wish to use GND to derive a whole host of instances. In this case (is it common?), `deriving (X, Y) deriving newtype (A,B,C,D,E,F)` is shorter than putting newtype on each class name. That's a good point. Another thing to consider is that I suspect in 90% of the time, users are only going to be reaching for -XDerivingStrategies in the scenario when they enable both -XGeneralizedNewtypeDeriving and -XDeriveAnyClass. That will happen when they want to derive instances for newtypes, and as you said, you typically derive several instances at a time when defining newtypes. Therefore, it seems less noisy to factor out the deriving strategy names so that readers can tell at a glance which batch of instances are newtype-derived and which are anyclass-derived, instead of having to read a keyword before every single type. Plus, on a superficial level, I like keeping the deriving strategy name outside of the parentheses. I think it makes clear that these keywords aren't modifying the type we're deriving, only the means by which we're deriving it. Of course, you may feel differently than I do, so please speak up if you disagree! Ryan S. From rae at cs.brynmawr.edu Mon Jul 18 14:28:24 2016 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Mon, 18 Jul 2016 10:28:24 -0400 Subject: Request for feedback: deriving strategies syntax In-Reply-To: References: Message-ID: <92D1BB4D-80BC-4EAC-B6E0-E3A6D5CA12F3@cs.brynmawr.edu> > On Jul 18, 2016, at 9:59 AM, Ryan Scott wrote: > >> The one idea I can suggest in this space (somewhat tongue-in-cheek, but feel free to take it seriously) is `bespoke` > > It might be a tongue-in-cheek suggestion, but I _really_ like it. It > captures the intended semantics better than any other previous > suggestion, I think. And we're already going to be appropriating a new > keyword with "anyclass", so why not take "bespoke" as well? :) > > Please stop me if I've slipped into madness here. We don't actually have to worry about keyword snatching, as there is no other lowercase word that can appear directly after `deriving`. We're happily in a new part of the grammar. On "bespoke": Pros: * It has precisely the right meaning * It's a fun word. (Fellow Americans, please update if this disagrees with your experience:) At least for Americans, for whom the word is instantly associated with spiffy haberdashery sold on Savile Row, London. It was actually quite a surprise to me when living in the UK that bespoke is used for other things as well. 
Example from my time there: I once went on a bespoke snowshoeing trip to France. Cons: * It's a fun word. People from those other languages with their boring OOP terms already think we're strange for having monoids and monads. Not to mention profunctors. What will they think of a bespoke Functor? I'm clearly enjoying this way too much. > > Plus, on a superficial level, I like keeping the deriving strategy > name outside of the parentheses. I think it makes clear that these > keywords aren't modifying the type we're deriving, only the means by > which we're deriving it. Of course, you may feel differently than I > do, so please speak up if you disagree! I actually agree here. This keeps all the keywords clustered together instead of interspersed with the more interesting bits. Richard From mail at andres-loeh.de Mon Jul 18 14:39:23 2016 From: mail at andres-loeh.de (Andres Loeh) Date: Mon, 18 Jul 2016 16:39:23 +0200 Subject: Request for feedback: deriving strategies syntax In-Reply-To: References: Message-ID: > It might be a tongue-in-cheek suggestion, but I _really_ like it. It > captures the intended semantics better than any other previous > suggestion, I think. And we're already going to be appropriating a new > keyword with "anyclass", so why not take "bespoke" as well? :) It doesn't have to be a "keyword" in the sense of reservedid, right? >> I thought about verbosity here, and it's not clear which one is more verbose. For example, I frequently define a new newtype and then wish to use GND to derive a whole host of instances. In this case (is it common?), `deriving (X, Y) deriving newtype (A,B,C,D,E,F)` is shorter than putting newtype on each class name. > > That's a good point. Another thing to consider is that I suspect in > 90% of the time, users are only going to be reaching for > -XDerivingStrategies in the scenario when they enable both > -XGeneralizedNewtypeDeriving and -XDeriveAnyClass. That will happen > when they want to derive instances for newtypes, and as you said, you > typically derive several instances at a time when defining newtypes. > Therefore, it seems less noisy to factor out the deriving strategy > names so that readers can tell at a glance which batch of instances > are newtype-derived and which are anyclass-derived, instead of having > to read a keyword before every single type. Yes, you've convinced me that putting the strategy once in front is at least not worse. > Plus, on a superficial level, I like keeping the deriving strategy > name outside of the parentheses. I think it makes clear that these > keywords aren't modifying the type we're deriving, only the means by > which we're deriving it. Of course, you may feel differently than I > do, so please speak up if you disagree! The very first times when I've talked to others about this feature, I think I've always used "deriving (Eq via bespoke, Monad via gnd)" as syntax, but yes, in general I agree that keeping it completely out of the parentheses may be a mild advantage. Cheers, Andres From ryan.gl.scott at gmail.com Mon Jul 18 14:42:00 2016 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Mon, 18 Jul 2016 10:42:00 -0400 Subject: Request for feedback: deriving strategies syntax In-Reply-To: References: Message-ID: > It doesn't have to be a "keyword" in the sense of reservedid, right? Correct, it's only a keyword when used in the context of deriving, similar to how "role" is only a keyword in the context of "type role". You could still define, say, `id role = role` if you so wished. Ryan S. 
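As a small illustration of that point, here is a sketch in the spirit of the multi-clause proposal (the exact strategy spellings, and whether an `instance` keyword appears, were still being debated at this point, so treat the keywords below as placeholders): the strategy names are only special immediately after `deriving`, and remain usable as ordinary identifiers elsewhere.

    {-# LANGUAGE DerivingStrategies #-}
    {-# LANGUAGE GeneralizedNewtypeDeriving #-}
    {-# LANGUAGE DeriveAnyClass #-}
    module ContextualKeywords where

    class Pretty a where
      pretty :: a -> String
      pretty _ = "<no pretty-printer>"

    newtype Meters = Meters Double
      deriving Show               -- no strategy: GHC's usual resolution
      deriving newtype  Num       -- reuse the underlying Double instance
      deriving anyclass Pretty    -- empty instance, filled in by the default

    -- `anyclass` is only special directly after `deriving`; as an
    -- ordinary identifier it still parses fine, much as `role` does
    -- outside of `type role`.
    anyclass :: Int -> Int
    anyclass x = x + 1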
On Mon, Jul 18, 2016 at 10:39 AM, Andres Loeh wrote: >> It might be a tongue-in-cheek suggestion, but I _really_ like it. It >> captures the intended semantics better than any other previous >> suggestion, I think. And we're already going to be appropriating a new >> keyword with "anyclass", so why not take "bespoke" as well? :) > > It doesn't have to be a "keyword" in the sense of reservedid, right? > >>> I thought about verbosity here, and it's not clear which one is more verbose. For example, I frequently define a new newtype and then wish to use GND to derive a whole host of instances. In this case (is it common?), `deriving (X, Y) deriving newtype (A,B,C,D,E,F)` is shorter than putting newtype on each class name. >> >> That's a good point. Another thing to consider is that I suspect in >> 90% of the time, users are only going to be reaching for >> -XDerivingStrategies in the scenario when they enable both >> -XGeneralizedNewtypeDeriving and -XDeriveAnyClass. That will happen >> when they want to derive instances for newtypes, and as you said, you >> typically derive several instances at a time when defining newtypes. >> Therefore, it seems less noisy to factor out the deriving strategy >> names so that readers can tell at a glance which batch of instances >> are newtype-derived and which are anyclass-derived, instead of having >> to read a keyword before every single type. > > Yes, you've convinced me that putting the strategy once in front is at > least not worse. > >> Plus, on a superficial level, I like keeping the deriving strategy >> name outside of the parentheses. I think it makes clear that these >> keywords aren't modifying the type we're deriving, only the means by >> which we're deriving it. Of course, you may feel differently than I >> do, so please speak up if you disagree! > > The very first times when I've talked to others about this feature, I > think I've always used "deriving (Eq via bespoke, Monad via gnd)" as > syntax, but yes, in general I agree that keeping it completely out of > the parentheses may be a mild advantage. > > Cheers, > Andres From simonpj at microsoft.com Mon Jul 18 21:52:37 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 18 Jul 2016 21:52:37 +0000 Subject: [commit: ghc] wip/generics-flip: Flip around imports of GHC.Generics (cb12bdf) In-Reply-To: <20160718135716.0D1363A300@ghc.haskell.org> References: <20160718135716.0D1363A300@ghc.haskell.org> Message-ID: <8f9770eb85fc4a4a87a00bcbf64bc512@DB4PR30MB030.064d.mgd.msft.net> I don’t object to this, but I suspect that it's just covering up some other bug. 
Simon -----Original Message----- From: ghc-commits [mailto:ghc-commits-bounces at haskell.org] On Behalf Of git at git.haskell.org Sent: 18 July 2016 14:57 To: ghc-commits at haskell.org Subject: [commit: ghc] wip/generics-flip: Flip around imports of GHC.Generics (cb12bdf) Repository : ssh://git at git.haskell.org/ghc On branch : wip/generics-flip Link : https://na01.safelinks.protection.outlook.com/?url=http%3a%2f%2fghc.haskell.org%2ftrac%2fghc%2fchangeset%2fcb12bdf942df5e61771d69bbb6049f3b23ed580c%2fghc&data=01%7c01%7csimonpj%40064d.mgd.microsoft.com%7c00c068de6b96437c02c008d3af136ead%7c72f988bf86f141af91ab2d7cd011db47%7c1&sdata=20bLDBfKU23EKDZOmV1qGcs4x8k1JwDbFUwEFXEOYsw%3d >--------------------------------------------------------------- commit cb12bdf942df5e61771d69bbb6049f3b23ed580c Author: Ben Gamari Date: Mon Jul 18 15:54:16 2016 +0200 Flip around imports of GHC.Generics Previously we had, GHC.Generics imports GHC.Ptr Data.Monoid imports GHC.Generics Data.Foldable imports GHC.Generics Data.Foldable imports Data.Monoid Prelude imports Data.Foldable Unfortunately this meant that any program importing Prelude (essentially all programs) would end up pulling in GHC.Generics and GHC.Ptr unnecessarily. Hopefully helps #12367. >--------------------------------------------------------------- cb12bdf942df5e61771d69bbb6049f3b23ed580c libraries/base/Data/Foldable.hs | 36 ---------------------- libraries/base/Data/Monoid.hs | 19 +++++------- libraries/base/GHC/Generics.hs | 67 +++++++++++++++++++++++++++++++++++++++-- 3 files changed, 72 insertions(+), 50 deletions(-) Diff suppressed because of size. To see it, use: git diff-tree --root --patch-with-stat --no-color --find-copies-harder --ignore-space-at-eol --cc cb12bdf942df5e61771d69bbb6049f3b23ed580c _______________________________________________ ghc-commits mailing list ghc-commits at haskell.org https://na01.safelinks.protection.outlook.com/?url=http%3a%2f%2fmail.haskell.org%2fcgi-bin%2fmailman%2flistinfo%2fghc-commits&data=01%7c01%7csimonpj%40064d.mgd.microsoft.com%7c00c068de6b96437c02c008d3af136ead%7c72f988bf86f141af91ab2d7cd011db47%7c1&sdata=6pgOX2cCExS7BYtk%2bNb9f4tTJQ7uuEWy5KfsmDig91Q%3d From simonpj at microsoft.com Mon Jul 18 21:54:33 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 18 Jul 2016 21:54:33 +0000 Subject: Request for feedback: deriving strategies syntax In-Reply-To: References: Message-ID: <9965ce39e1bf4d529bfbffe9f188b631@DB4PR30MB030.064d.mgd.msft.net> I'm not following all the details here, and I do not feel strongly about syntax; but I do hope that you'll update the wiki page to reflect the discussion. Thanks Simon -----Original Message----- From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Ryan Scott Sent: 18 July 2016 15:00 To: Richard Eisenberg Cc: Andres Loeh ; GHC developers Subject: Re: Request for feedback: deriving strategies syntax Andres, > The objects probably shouldn't be type synonyms, but they could be > special datatypes or type families, perhaps. I considered that - we already have some special datatypes, type families, and type classes currently. However, neither datatypes nor type families are allowed to appear as the outermost type in an instance declaration (unless we bake in a very prominent exception to this rule), and if we imbued type classes with this magic, one might think that "deriving (GND Eq)" means we're deriving an instance for the magical GND class, not Eq. So those approaches don't sound satisfying to me on a cursory examination. 
Richard, > The one idea I can suggest in this space (somewhat tongue-in-cheek, > but feel free to take it seriously) is `bespoke` It might be a tongue-in-cheek suggestion, but I _really_ like it. It captures the intended semantics better than any other previous suggestion, I think. And we're already going to be appropriating a new keyword with "anyclass", so why not take "bespoke" as well? :) Please stop me if I've slipped into madness here. > I thought about verbosity here, and it's not clear which one is more verbose. For example, I frequently define a new newtype and then wish to use GND to derive a whole host of instances. In this case (is it common?), `deriving (X, Y) deriving newtype (A,B,C,D,E,F)` is shorter than putting newtype on each class name. That's a good point. Another thing to consider is that I suspect in 90% of the time, users are only going to be reaching for -XDerivingStrategies in the scenario when they enable both -XGeneralizedNewtypeDeriving and -XDeriveAnyClass. That will happen when they want to derive instances for newtypes, and as you said, you typically derive several instances at a time when defining newtypes. Therefore, it seems less noisy to factor out the deriving strategy names so that readers can tell at a glance which batch of instances are newtype-derived and which are anyclass-derived, instead of having to read a keyword before every single type. Plus, on a superficial level, I like keeping the deriving strategy name outside of the parentheses. I think it makes clear that these keywords aren't modifying the type we're deriving, only the means by which we're deriving it. Of course, you may feel differently than I do, so please speak up if you disagree! Ryan S. _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org https://na01.safelinks.protection.outlook.com/?url=http%3a%2f%2fmail.haskell.org%2fcgi-bin%2fmailman%2flistinfo%2fghc-devs&data=01%7c01%7csimonpj%40064d.mgd.microsoft.com%7cea562a7e9e494f07cede08d3af13cbc7%7c72f988bf86f141af91ab2d7cd011db47%7c1&sdata=rKTWOkEZsKUdDOTnk7WL2BNx1lf36uelef4JDg0pX44%3d From ben at smart-cactus.org Mon Jul 18 23:18:56 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 19 Jul 2016 01:18:56 +0200 Subject: [commit: ghc] wip/generics-flip: Flip around imports of GHC.Generics (cb12bdf) In-Reply-To: <8f9770eb85fc4a4a87a00bcbf64bc512@DB4PR30MB030.064d.mgd.msft.net> References: <20160718135716.0D1363A300@ghc.haskell.org> <8f9770eb85fc4a4a87a00bcbf64bc512@DB4PR30MB030.064d.mgd.msft.net> Message-ID: <8760s2357j.fsf@smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > I don’t object to this, but I suspect that it's just covering up some other bug. > Indeed, I concluded the same after our chat today. I've made a bit of progress tracking down the cause of GHC.Generics being loaded when looking for a `Foldable []` instance. It looks like the cause is the evaluation of the `mod` binding in IfaceEnv.instIsVisible. The RHS of this binding pulls on is_dfun to get its `idName` (which it only need the Module from). I'm afraid I'll need to pick this up tomorrow though; it's getting rather late. I'll paste the above in the ticket to ensure it isn't lost. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From omeragacan at gmail.com Tue Jul 19 09:04:09 2016 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Tue, 19 Jul 2016 09:04:09 +0000 Subject: Parser changes for supporting top-level SCC annotations In-Reply-To: References: <82C83F49-4D45-4118-B0AC-5C687F931206@cis.upenn.edu> Message-ID: I managed to do this without introducing any new pragmas. I added a new production that doesn't look for SCC annotations, for top-level expressions. I then used it in decl_no_th and topdecl. I'm not sure if I broke anything though. I'll validate in slow mode now. Patch is here: http://phabricator.haskell.org/D2407 2016-06-01 12:55 GMT+00:00 Ömer Sinan Ağacan : > I was actually trying to avoid that, thinking that it'd be best if SCC uniformly > worked for top-levels and expressions. But then this new form: > > {-# SCC f "f_scc" #-} > > Would only work for toplevel SCCs.. So maybe it's OK to introduce a new pragma > here. > > 2016-06-01 8:13 GMT-04:00 Richard Eisenberg : >> What about just using a new pragma? >> >>> {-# SCC_FUNCTION f "f_scc" #-} >>> f True = ... >>> f False = ... >> >> The pragma takes the name of the function (a single identifier) and the name of the SCC. If you wish both to have the same name, you can leave off the SCC name. >> >> It seems worth it to me to introduce a new pragma here. >> >> Richard >> >> On May 30, 2016, at 3:14 PM, Ömer Sinan Ağacan wrote: >> >>> I'm trying to support SCCs at the top-level. The implementation should be >>> trivial except the parsing part turned out to be tricky. Since expressions can >>> appear at the top-level, after a {-# SCC ... #-} parser can't decide whether to >>> reduce the token in `sigdecl` to generate a `(LHsDecl (Sig (SCCSig ...)))` or to >>> keep shifting to parse an expression. As shifting is the default behavior when a >>> shift/reduce conflict happens, it's always trying to parse an expression, which >>> is always the wrong thing to do. >>> >>> Does anyone have any ideas on how to handle this? >>> >>> Motivation: Not having SCCs at the top level is becoming annoying real quick. >>> For simplest cases, it's possible to do this transformation: >>> >>> f x y = ... >>> => >>> f = {-# SCC f #-} \x y -> ... >>> >>> However, it doesn't work when there's a `where` clause: >>> >>> f x y = >>> where t = ... >>> => >>> f = {-# SCC f #-} \x y -> >>> where t = ... >>> >>> Or when we have a "equation style" definition: >>> >>> f (C1 ...) = ... >>> f (C2 ...) = ... >>> f (C3 ...) = ... >>> ... >>> >>> (usual solution is to rename `f` to `f'` and define a new `f` with a `SCC`) >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> From ben at smart-cactus.org Tue Jul 19 09:39:06 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 19 Jul 2016 11:39:06 +0200 Subject: Parser changes for supporting top-level SCC annotations In-Reply-To: References: <82C83F49-4D45-4118-B0AC-5C687F931206@cis.upenn.edu> Message-ID: <871t2q2chx.fsf@smart-cactus.org> Ömer Sinan Ağacan writes: > I managed to do this without introducing any new pragmas. I added a new > production that doesn't look for SCC annotations, for top-level expressions. I > then used it in decl_no_th and topdecl. > Yay! I'll admit I wasn't a fan of the SCC_FUNCTION proposal. SCC pragmas are already rather wordy with the {-# #-} delimiters and quotes. 
SCC_FUNCTION is even worse. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From simonpj at microsoft.com Tue Jul 19 09:57:10 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 19 Jul 2016 09:57:10 +0000 Subject: Parser changes for supporting top-level SCC annotations In-Reply-To: References: <82C83F49-4D45-4118-B0AC-5C687F931206@cis.upenn.edu> Message-ID: <040e70d2b6c849b2ae1058361d35056a@DB4PR30MB030.064d.mgd.msft.net> Terrific! Is there a ticket? A wiki page with a specification? Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Ömer | Sinan Agacan | Sent: 19 July 2016 10:04 | To: ghc-devs | Subject: Re: Parser changes for supporting top-level SCC annotations | | I managed to do this without introducing any new pragmas. I added a | new production that doesn't look for SCC annotations, for top-level | expressions. I then used it in decl_no_th and topdecl. | | I'm not sure if I broke anything though. I'll validate in slow mode | now. | | Patch is here: | https://na01.safelinks.protection.outlook.com/?url=http%3a%2f%2fphabri | cator.haskell.org%2fD2407&data=01%7c01%7csimonpj%40064d.mgd.microsoft. | com%7c86268f4443a94ed46b1808d3afb3bf09%7c72f988bf86f141af91ab2d7cd011d | b47%7c1&sdata=jSC3m%2fCa3PJwhfmWbMzAawaN2hOE3oKnRbNdEb%2bUObo%3d | | 2016-06-01 12:55 GMT+00:00 Ömer Sinan Ağacan : | > I was actually trying to avoid that, thinking that it'd be best if | SCC | > uniformly worked for top-levels and expressions. But then this new | form: | > | > {-# SCC f "f_scc" #-} | > | > Would only work for toplevel SCCs.. So maybe it's OK to introduce a | > new pragma here. | > | > 2016-06-01 8:13 GMT-04:00 Richard Eisenberg : | >> What about just using a new pragma? | >> | >>> {-# SCC_FUNCTION f "f_scc" #-} | >>> f True = ... | >>> f False = ... | >> | >> The pragma takes the name of the function (a single identifier) and | the name of the SCC. If you wish both to have the same name, you can | leave off the SCC name. | >> | >> It seems worth it to me to introduce a new pragma here. | >> | >> Richard | >> | >> On May 30, 2016, at 3:14 PM, Ömer Sinan Ağacan | wrote: | >> | >>> I'm trying to support SCCs at the top-level. The implementation | >>> should be trivial except the parsing part turned out to be tricky. | >>> Since expressions can appear at the top-level, after a {-# SCC ... | >>> #-} parser can't decide whether to reduce the token in `sigdecl` | to | >>> generate a `(LHsDecl (Sig (SCCSig ...)))` or to keep shifting to | >>> parse an expression. As shifting is the default behavior when a | >>> shift/reduce conflict happens, it's always trying to parse an | expression, which is always the wrong thing to do. | >>> | >>> Does anyone have any ideas on how to handle this? | >>> | >>> Motivation: Not having SCCs at the top level is becoming annoying | real quick. | >>> For simplest cases, it's possible to do this transformation: | >>> | >>> f x y = ... | >>> => | >>> f = {-# SCC f #-} \x y -> ... | >>> | >>> However, it doesn't work when there's a `where` clause: | >>> | >>> f x y = | >>> where t = ... | >>> => | >>> f = {-# SCC f #-} \x y -> | >>> where t = ... | >>> | >>> Or when we have a "equation style" definition: | >>> | >>> f (C1 ...) = ... | >>> f (C2 ...) = ... | >>> f (C3 ...) = ... | >>> ... 
| >>> | >>> (usual solution is to rename `f` to `f'` and define a new `f` with | a | >>> `SCC`) _______________________________________________ | >>> ghc-devs mailing list | >>> ghc-devs at haskell.org | >>> | https://na01.safelinks.protection.outlook.com/?url=http%3a%2f%2fmail | >>> .haskell.org%2fcgi-bin%2fmailman%2flistinfo%2fghc- | devs&data=01%7c01% | >>> | 7csimonpj%40064d.mgd.microsoft.com%7c86268f4443a94ed46b1808d3afb3bf0 | >>> | 9%7c72f988bf86f141af91ab2d7cd011db47%7c1&sdata=wd7%2fbeNxlmYHdZ8sfR% | >>> 2fXMl54PR0Pcg70huCmbow8M7Y%3d | >> | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | https://na01.safelinks.protection.outlook.com/?url=http%3a%2f%2fmail.h | askell.org%2fcgi-bin%2fmailman%2flistinfo%2fghc- | devs&data=01%7c01%7csimonpj%40064d.mgd.microsoft.com%7c86268f4443a94ed | 46b1808d3afb3bf09%7c72f988bf86f141af91ab2d7cd011db47%7c1&sdata=wd7%2fb | eNxlmYHdZ8sfR%2fXMl54PR0Pcg70huCmbow8M7Y%3d From omeragacan at gmail.com Tue Jul 19 16:08:02 2016 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Tue, 19 Jul 2016 16:08:02 +0000 Subject: Parser changes for supporting top-level SCC annotations In-Reply-To: <040e70d2b6c849b2ae1058361d35056a@DB4PR30MB030.064d.mgd.msft.net> References: <82C83F49-4D45-4118-B0AC-5C687F931206@cis.upenn.edu> <040e70d2b6c849b2ae1058361d35056a@DB4PR30MB030.064d.mgd.msft.net> Message-ID: 2016-07-19 9:57 GMT+00:00 Simon Peyton Jones : > Is there a ticket? A wiki page with a specification? I updated the user manual. There's no wiki, it just adds supports for SCC annotations at the top-level. From omeragacan at gmail.com Tue Jul 19 16:09:56 2016 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Tue, 19 Jul 2016 16:09:56 +0000 Subject: first unboxed sums patch is ready for reviews In-Reply-To: References: Message-ID: I just got Simon's approval and I'm going to push it tomorrow (need to add some more documentation) if no one asks for more things. 2016-07-09 12:55 GMT+00:00 Ömer Sinan Ağacan : > Hi all, > > I'm almost done with the unboxed sums patch and I'd like to get some reviews at > this point. > > https://phabricator.haskell.org/D2259 > > Two key files in the patch are UnariseStg.hs and RepType.hs. > > For the example programs see files in testsuite/tests/unboxedsums/ > > In addition to any comments about the code and documentation, it'd be > appreciated if you tell me about some potential uses of unboxed sums, example > programs, edge cases etc. so that I can test it a bit more and make sure the > generated code is good. > > Thanks, > > Omer From omeragacan at gmail.com Wed Jul 20 08:24:50 2016 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Wed, 20 Jul 2016 08:24:50 +0000 Subject: Note [Api annotations] Message-ID: I see some weird comments like -- - 'ApiAnnotation.AnnKeywordId' : 'ApiAnnotation.AnnOpen', -- 'ApiAnnotation.AnnVbar','ApiAnnotation.AnnComma', -- 'ApiAnnotation.AnnClose' -- For details on above see note [Api annotations] in ApiAnnotation in some files, but Note [Api annotations] in compiler/parser/ApiAnnotation.hs doesn't say anything about those comments. Can someone update the note to explain what are those comments for? 
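Picking the top-level SCC thread back up for a moment: with the new production, the pragma forms already quoted there ({-# SCC f #-} and {-# SCC f "f_scc" #-}) can sit directly in front of a top-level binding, so equation-style definitions and `where` clauses no longer need the lambda rewrite. A rough sketch of how that reads (function names are made up for illustration):

    module SccExample where

    -- Equation-style definition, annotated without renaming tricks:
    {-# SCC safeDiv #-}
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    -- With an explicit cost-centre name, as in the quoted
    -- {-# SCC f "f_scc" #-} form; the `where` clause stays untouched:
    {-# SCC mean "mean_scc" #-}
    mean xs = total / count
      where
        total = sum xs
        count = fromIntegral (length xs)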
From ben at well-typed.com Wed Jul 20 09:36:55 2016 From: ben at well-typed.com (Ben Gamari) Date: Wed, 20 Jul 2016 11:36:55 +0200 Subject: Proposal process status Message-ID: <87mvlc1wi0.fsf@smart-cactus.org> Hello everyone, As you hopefully know, a few weeks ago we proposed a new process [1] for collecting, discussing, and deciding upon changes to GHC and its Haskell superset. While we have been happy to see a small contingent of contributors join the discussion, the number is significantly smaller than the set who took part in the earlier Reddit discussions. In light of this, we are left a bit uncertain of how to proceed. So, we would like to ask you to let us know your feelings regarding the proposed process: * Do you feel the proposed process is an improvement over the status quo? * Why? (this needn't be long, just a sentence hitting the major points) * What would you like to see changed in the proposed process, if anything? That's all. Again, feel free to reply either on the GitHub pull request [1] or this thread if you would prefer. Your response needn't be long; we just want to get a sense of how much of the community feels that 1) this effort is worth undertaking, and 2) that the proposal before us is in fact an improvement over the current state of affairs. Thanks for your help! Cheers, - Ben [1] https://github.com/ghc-proposals/ghc-proposals/pull/1 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From matthewtpickering at gmail.com Wed Jul 20 10:36:01 2016 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Wed, 20 Jul 2016 11:36:01 +0100 Subject: Note [Api annotations] In-Reply-To: References: Message-ID: These comments are meant to indicate which annotations each AST fragment has. However, they are rarely kept up to date and ultimately not that useful. If someone wants to know then it is easier to look at the `Annotate` module in `ghc-exactprint`where this information also exists programatically. On Wed, Jul 20, 2016 at 9:24 AM, Ömer Sinan Ağacan wrote: > I see some weird comments like > > -- - 'ApiAnnotation.AnnKeywordId' : 'ApiAnnotation.AnnOpen', > -- 'ApiAnnotation.AnnVbar','ApiAnnotation.AnnComma', > -- 'ApiAnnotation.AnnClose' > > -- For details on above see note [Api annotations] in ApiAnnotation > > in some files, but Note [Api annotations] in compiler/parser/ApiAnnotation.hs > doesn't say anything about those comments. Can someone update the note to > explain what are those comments for? > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From thomasmiedema at gmail.com Wed Jul 20 10:45:28 2016 From: thomasmiedema at gmail.com (Thomas Miedema) Date: Wed, 20 Jul 2016 12:45:28 +0200 Subject: Proposal process status In-Reply-To: <87mvlc1wi0.fsf@smart-cactus.org> References: <87mvlc1wi0.fsf@smart-cactus.org> Message-ID: > > * What would you like to see changed in the proposed process, if > anything? > *Simon Peyton Jones as Benevolent Dictator For Life (BDFL)* If the BDFL had made a simple YES/NO decision on ShortImports [1] and ArgumentDo [2], we wouldn't be here talking about process proposals, Anthony wouldn't be mad, everything would be fine. We don't need another Haskell committee. * Keep using Trac for proposals, but use the description field of a ticket for the specification, instead of separate wiki page. 
* Add better filtering possibilities to Trac (say someone wants to only subscribe to tickets where syntax extensions are discussed). Adding better filtering possibilities will also benefit bug fixers (say someone wants to only subscribe to bugs on Windows or with keyword=PatternSynonyms). * Don't let hotly debated feature requests go without a resolution. [0] https://en.wikipedia.org/wiki/Benevolent_dictator_for_life [1] https://ghc.haskell.org/trac/ghc/ticket/10478 [2] https://ghc.haskell.org/trac/ghc/ticket/10843 -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuncer.ayaz at gmail.com Wed Jul 20 11:02:47 2016 From: tuncer.ayaz at gmail.com (Tuncer Ayaz) Date: Wed, 20 Jul 2016 13:02:47 +0200 Subject: Proposal process status In-Reply-To: <87mvlc1wi0.fsf@smart-cactus.org> References: <87mvlc1wi0.fsf@smart-cactus.org> Message-ID: On 20 July 2016 at 11:36, Ben Gamari wrote: > * Do you feel the proposed process is an improvement over the status > quo? > > * Why? (this needn't be long, just a sentence hitting the major points) For a considerable part of the community it seems to be an improvement, so yes. > * What would you like to see changed in the proposed process, if > anything? (Require?) bi-directional sync of tickets (patch, issue, rfc). From ben at smart-cactus.org Wed Jul 20 11:22:59 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Wed, 20 Jul 2016 13:22:59 +0200 Subject: Note [Api annotations] In-Reply-To: References: Message-ID: <87eg6o1rl8.fsf@smart-cactus.org> Matthew Pickering writes: > These comments are meant to indicate which annotations each AST > fragment has. However, they are rarely kept up to date and ultimately > not that useful. If someone wants to > know then it is easier to look at the `Annotate` module in > `ghc-exactprint`where this information also exists programatically. > Hmm, the fact that this module is outside of GHC is a bit unfortunate. If I'm looking at a random GHC commit from a year ago how am I to know which annotations I can expect? I guess the testsuite? Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From shumovichy at gmail.com Wed Jul 20 11:42:31 2016 From: shumovichy at gmail.com (Yuras Shumovich) Date: Wed, 20 Jul 2016 14:42:31 +0300 Subject: Proposal process status In-Reply-To: <87mvlc1wi0.fsf@smart-cactus.org> References: <87mvlc1wi0.fsf@smart-cactus.org> Message-ID: <1469014951.4633.2.camel@gmail.com> Looks like reddit is a wrong place, so I'm replicating my comment here: On Wed, 2016-07-20 at 11:36 +0200, Ben Gamari wrote: > Hello everyone, > > As you hopefully know, a few weeks ago we proposed a new process [1] > for > collecting, discussing, and deciding upon changes to GHC and its > Haskell > superset. While we have been happy to see a small contingent of > contributors join the discussion, the number is significantly smaller > than the set who took part in the earlier Reddit discussions. > > In light of this, we are left a bit uncertain of how to proceed. So, > we would like to ask you to let us know your feelings regarding the > proposed process: > >   * Do you feel the proposed process is an improvement over the > status >     quo? Yes, definitely. The existing process is too vague, so formalizing it is a win in any case. > >   * Why? 
(this needn't be long, just a sentence hitting the major > points) > >   * What would you like to see changed in the proposed process, if >     anything? The proposed process overlaps with the Language Committee powers. In theory the Committee works on language standard, but de facto Haskell is GHC/Haskell and GHC/Haskell is Haskell. Adding new extension to GHC adds new extension to Haskell. So I'd like the process to enforce separation between experimental extensions (not recommended in production code) and language improvements. I'd like the process to specify how the GHC Committee is going to communicate and share powers with the Language Committee. Thanks, Yuras. > > That's all. Again, feel free to reply either on the GitHub pull > request > [1] or this thread if you would prefer. Your response needn't be > long; > we just want to get a sense of how much of the community feels that > 1) > this effort is worth undertaking, and 2) that the proposal before us > is > in fact an improvement over the current state of affairs. > > Thanks for your help! > > Cheers, > > - Ben > > > [1] https://github.com/ghc-proposals/ghc-proposals/pull/1 > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-user > s From matthewtpickering at gmail.com Wed Jul 20 12:01:48 2016 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Wed, 20 Jul 2016 13:01:48 +0100 Subject: Note [Api annotations] In-Reply-To: <87eg6o1rl8.fsf@smart-cactus.org> References: <87eg6o1rl8.fsf@smart-cactus.org> Message-ID: On Wed, Jul 20, 2016 at 12:22 PM, Ben Gamari wrote: > Matthew Pickering writes: > >> These comments are meant to indicate which annotations each AST >> fragment has. However, they are rarely kept up to date and ultimately >> not that useful. If someone wants to >> know then it is easier to look at the `Annotate` module in >> `ghc-exactprint`where this information also exists programatically. >> > Hmm, the fact that this module is outside of GHC is a bit unfortunate. > If I'm looking at a random GHC commit from a year ago how am I to know > which annotations I can expect? I guess the testsuite? The only reliable way is to look at `Parser.y`. > > Cheers, > > - Ben > From alan.zimm at gmail.com Wed Jul 20 12:42:46 2016 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Wed, 20 Jul 2016 14:42:46 +0200 Subject: Proposal process status In-Reply-To: <1469014951.4633.2.camel@gmail.com> References: <87mvlc1wi0.fsf@smart-cactus.org> <1469014951.4633.2.camel@gmail.com> Message-ID: I think the most important thing is to be able to point to a designated point where discussions must take place. This means if anything comes up elsewhere it can be routed there to be "official". Alan On Wed, Jul 20, 2016 at 1:42 PM, Yuras Shumovich wrote: > > Looks like reddit is a wrong place, so I'm replicating my comment here: > > On Wed, 2016-07-20 at 11:36 +0200, Ben Gamari wrote: > > Hello everyone, > > > > As you hopefully know, a few weeks ago we proposed a new process [1] > > for > > collecting, discussing, and deciding upon changes to GHC and its > > Haskell > > superset. While we have been happy to see a small contingent of > > contributors join the discussion, the number is significantly smaller > > than the set who took part in the earlier Reddit discussions. > > > > In light of this, we are left a bit uncertain of how to proceed. 
So, > > we would like to ask you to let us know your feelings regarding the > > proposed process: > > > > * Do you feel the proposed process is an improvement over the > > status > > quo? > > Yes, definitely. The existing process is too vague, so formalizing it > is a win in any case. > > > > > > * Why? (this needn't be long, just a sentence hitting the major > > points) > > > > * What would you like to see changed in the proposed process, if > > anything? > > > The proposed process overlaps with the Language Committee powers. In > theory the Committee works on language standard, but de facto Haskell > is GHC/Haskell and GHC/Haskell is Haskell. Adding new extension to GHC > adds new extension to Haskell. So I'd like the process to enforce > separation between experimental extensions (not recommended in > production code) and language improvements. I'd like the process to > specify how the GHC Committee is going to communicate and share powers > with the Language Committee. > > Thanks, > Yuras. > > > > > That's all. Again, feel free to reply either on the GitHub pull > > request > > [1] or this thread if you would prefer. Your response needn't be > > long; > > we just want to get a sense of how much of the community feels that > > 1) > > this effort is worth undertaking, and 2) that the proposal before us > > is > > in fact an improvement over the current state of affairs. > > > > Thanks for your help! > > > > Cheers, > > > > - Ben > > > > > > [1] https://github.com/ghc-proposals/ghc-proposals/pull/1 > > _______________________________________________ > > Glasgow-haskell-users mailing list > > Glasgow-haskell-users at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-user > > s > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Wed Jul 20 12:50:12 2016 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Wed, 20 Jul 2016 14:50:12 +0200 Subject: Note [Api annotations] In-Reply-To: References: <87eg6o1rl8.fsf@smart-cactus.org> Message-ID: Hopefully at a future date they will be a part of the AST itself, so this will be clearer. In reality, looking at Parser.y is not enough, as there are some workings in RdrHsSyn.hs too, and the process of attachment in Parser.y is sometimes quite complex. Basically the best reference is indeed ghc-exactprint, being an application that expressly makes use of all of them. Alan On Wed, Jul 20, 2016 at 2:01 PM, Matthew Pickering < matthewtpickering at gmail.com> wrote: > On Wed, Jul 20, 2016 at 12:22 PM, Ben Gamari wrote: > > Matthew Pickering writes: > > > >> These comments are meant to indicate which annotations each AST > >> fragment has. However, they are rarely kept up to date and ultimately > >> not that useful. If someone wants to > >> know then it is easier to look at the `Annotate` module in > >> `ghc-exactprint`where this information also exists programatically. > >> > > Hmm, the fact that this module is outside of GHC is a bit unfortunate. > > If I'm looking at a random GHC commit from a year ago how am I to know > > which annotations I can expect? I guess the testsuite? > > The only reliable way is to look at `Parser.y`. > > > > > Cheers, > > > > - Ben > > > -------------- next part -------------- An HTML attachment was scrubbed... 
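For readers wondering what those annotations actually are: the parser records, per AST node span and keyword, the exact source spans of the tokens it consumed, and exact-printers look those spans back up when reproducing the original layout. The following is a standalone toy model of that idea (the names only loosely follow GHC's ApiAnnotation module; it is an illustration, not the real API):

    module ApiAnnsToy where

    import qualified Data.Map as Map

    -- Toy stand-ins for GHC's types; the constructors mirror the ones
    -- listed in the comment quoted earlier in this thread.
    data AnnKeywordId = AnnOpen | AnnVbar | AnnComma | AnnClose
      deriving (Eq, Ord, Show)

    type SrcSpan   = (Int, Int)                 -- e.g. (start, end) offsets
    type ApiAnnKey = (SrcSpan, AnnKeywordId)
    type ApiAnns   = Map.Map ApiAnnKey [SrcSpan]

    -- Given the span of an AST node, recover where a particular keyword
    -- token (a '|', ',', bracket, ...) sat in the original source.  This
    -- is the kind of lookup a tool like ghc-exactprint performs.
    getAnnotation :: ApiAnns -> SrcSpan -> AnnKeywordId -> [SrcSpan]
    getAnnotation anns nodeSpan kw = Map.findWithDefault [] (nodeSpan, kw) anns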
URL: From simonpj at microsoft.com Wed Jul 20 14:33:47 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 20 Jul 2016 14:33:47 +0000 Subject: git.haskell.org slow Message-ID: Is it my imagination or is git.haskell.org really slow at the moment? SImion -------------- next part -------------- An HTML attachment was scrubbed... URL: From iavor.diatchki at gmail.com Wed Jul 20 16:25:49 2016 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Wed, 20 Jul 2016 09:25:49 -0700 Subject: Proposal process status In-Reply-To: <87mvlc1wi0.fsf@smart-cactus.org> References: <87mvlc1wi0.fsf@smart-cactus.org> Message-ID: Hello Ben, I posted this when you originally asked for feed-back, but perhaps it got buried among the rest of the e-mails. I think the proposal sounds fairly reasonable, but it is hard to say how well it will work in practice until we try it, and we should be ready to change it if needs be. Some clarifying questions on the intended process: 1. After submitting the initial merge request, is the person making the proposal to wait for any kind of acknowledgment, or just move on to step 2? 2. Is the discussion going to happen on one of the mailing lists, if so which? Is it the job of the proposing person to involve/notify the committee about the discussion? If so, how are they to find out who is on the committee? 3. How does one actually perform step 3, another pull request or simply an e-mail to someone? Typo: two separate bullets in the proposal are labelled as 4. Cheers, -Iavor On Wed, Jul 20, 2016 at 2:36 AM, Ben Gamari wrote: > > Hello everyone, > > As you hopefully know, a few weeks ago we proposed a new process [1] for > collecting, discussing, and deciding upon changes to GHC and its Haskell > superset. While we have been happy to see a small contingent of > contributors join the discussion, the number is significantly smaller > than the set who took part in the earlier Reddit discussions. > > In light of this, we are left a bit uncertain of how to proceed. So, > we would like to ask you to let us know your feelings regarding the > proposed process: > > * Do you feel the proposed process is an improvement over the status > quo? > > * Why? (this needn't be long, just a sentence hitting the major points) > > * What would you like to see changed in the proposed process, if > anything? > > That's all. Again, feel free to reply either on the GitHub pull request > [1] or this thread if you would prefer. Your response needn't be long; > we just want to get a sense of how much of the community feels that 1) > this effort is worth undertaking, and 2) that the proposal before us is > in fact an improvement over the current state of affairs. > > Thanks for your help! > > Cheers, > > - Ben > > > [1] https://github.com/ghc-proposals/ghc-proposals/pull/1 > > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ben at well-typed.com Wed Jul 20 16:37:38 2016 From: ben at well-typed.com (Ben Gamari) Date: Wed, 20 Jul 2016 18:37:38 +0200 Subject: Proposal process status In-Reply-To: <1469014951.4633.2.camel@gmail.com> References: <87mvlc1wi0.fsf@smart-cactus.org> <1469014951.4633.2.camel@gmail.com> Message-ID: <874m7k1d0t.fsf@smart-cactus.org> Yuras Shumovich writes: > Looks like reddit is a wrong place, so I'm replicating my comment here: > Thanks for your comments Yuras! >>   * Do you feel the proposed process is an improvement over the >> status quo? > > Yes, definitely. The existing process is too vague, so formalizing it > is a win in any case. > Good to hear. >>   * What would you like to see changed in the proposed process, if >>     anything? > > The proposed process overlaps with the Language Committee powers. In > theory the Committee works on language standard, but de facto Haskell > is GHC/Haskell and GHC/Haskell is Haskell. Adding new extension to GHC > adds new extension to Haskell. So I'd like the process to enforce > separation between experimental extensions (not recommended in > production code) and language improvements. I'd like the process to > specify how the GHC Committee is going to communicate and share powers > with the Language Committee. > To clarify I think Language Committee here refers to the Haskell Prime committee, right? I think these two bodies really do serve different purposes. Historically the Haskell Prime committee has been quite conservative in the sorts of changes that they standardized; as far as I know almost all of them come from a compiler. I would imagine that the GHC Committee would be a gate-keeper for proposals entering GHC and only some time later, when the semantics and utility of the extension are well-understood, would the Haskell Prime committee consider introducing it to the Report. As far as I understand it, this is historically how things have worked in the past, and I don't think this new process would change that. Of course, let me know if I'm off-base here. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From ben at well-typed.com Wed Jul 20 16:45:05 2016 From: ben at well-typed.com (Ben Gamari) Date: Wed, 20 Jul 2016 18:45:05 +0200 Subject: Proposal process status In-Reply-To: References: <87mvlc1wi0.fsf@smart-cactus.org> Message-ID: <871t2o1coe.fsf@smart-cactus.org> Iavor Diatchki writes: > Hello Ben, > > I posted this when you originally asked for feed-back, but perhaps it > got buried among the rest of the e-mails. > Indeed it seems that way. Sorry about that! > I think the proposal sounds fairly reasonable, but it is hard to say how > well it will work in practice until we try it, and we should be ready to > change it if needs be. > Right. I fully expect that we will have to iterate on it. > Some clarifying questions on the intended process: > 1. After submitting the initial merge request, is the person making the > proposal to wait for any kind of acknowledgment, or just move on to step 2? > The discussion phase can happen asynchronously from any action by the Committee. Of course, the Committee should engauge in discussion early, but I don't think any sort of acknowledgement is needed. An open pull request should be taken to mean "let's discuss this idea." > 2. Is the discussion going to happen on one of the mailing lists, if so > which? 
Is it the job of the proposing person to involve/notify the > committee about the discussion? If so, how are they to find out who is on > the committee? The proposed process places the discussion in a pull request. The idea here is to use well-understood and widely-used code review tools to faciliate the conversation. The Committee members will be notified of the open pull request by the usual event notification mechanism (e.g. in GitHub one can subscribe to a repository). > 3. How does one actually perform step 3, another pull request or simply > an e-mail to someone? > The opening of the pull request would mark the beginning of the discussion period. When the author feels that the discussion has come to something of a conclusion, they will request that the GHC Committee consider the proposal for acceptable by leaving a comment on the pull request. > Typo: two separate bullets in the proposal are labelled as 4. > I believe this should be fixed now. Thanks! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From metaniklas at gmail.com Wed Jul 20 19:39:22 2016 From: metaniklas at gmail.com (Niklas Larsson) Date: Wed, 20 Jul 2016 21:39:22 +0200 Subject: Proposal process status In-Reply-To: References: <87mvlc1wi0.fsf@smart-cactus.org> <871t2o1coe.fsf@smart-cactus.org> Message-ID: <6E758950-7FD9-4243-9F9E-75358DAF131E@gmail.com> > 20 juli 2016 kl. 19:38 skrev amindfv at gmail.com: > > > >> El 20 jul 2016, a las 12:45, Ben Gamari escribió: >> >> Iavor Diatchki writes: >> >>> Hello Ben, >>> >>> I posted this when you originally asked for feed-back, but perhaps it >>> got buried among the rest of the e-mails. >> Indeed it seems that way. Sorry about that! >> >>> I think the proposal sounds fairly reasonable, but it is hard to say how >>> well it will work in practice until we try it, and we should be ready to >>> change it if needs be. >> Right. I fully expect that we will have to iterate on it. >> >>> Some clarifying questions on the intended process: >>> 1. After submitting the initial merge request, is the person making the >>> proposal to wait for any kind of acknowledgment, or just move on to step 2? >> The discussion phase can happen asynchronously from any action by the >> Committee. Of course, the Committee should engauge in discussion early, >> but I don't think any sort of acknowledgement is needed. An open pull >> request should be taken to mean "let's discuss this idea." >> >>> 2. Is the discussion going to happen on one of the mailing lists, if so >>> which? Is it the job of the proposing person to involve/notify the >>> committee about the discussion? If so, how are they to find out who is on >>> the committee? >> >> The proposed process places the discussion in a pull request. The idea >> here is to use well-understood and widely-used code review tools to >> faciliate the conversation. > > This part runs strongly against the grain of what I'd prefer: email is lightweight, decentralized, standard, and has many clients. We can read discussion of Haskell proposals any way we like. Github on the other hand only allows us to read issues by going to Github, and using whatever interface Github has given us (which personally I find very annoying, esp. on mobile). In addition, reading proposals offline becomes very difficult. Many of us read discussion when commuting, where, e.g. in NYC, there isn't cell service. 
> > For reviewing code that implements a proposal, I'm a lot more flexible (although again I'm not a fan of Github) > > For the people who like having history tracked with git: gitit is a possibility, and is written in Haskell. > > Tom > It's possible both follow and contribute to issues in a github repo via email. I do it all the time for Idris. // Niklas > > >> The Committee members will be notified of the open pull request by the >> usual event notification mechanism (e.g. in GitHub one can subscribe to >> a repository). >> >>> 3. How does one actually perform step 3, another pull request or simply >>> an e-mail to someone? >> The opening of the pull request would mark the beginning of the >> discussion period. When the author feels that the discussion has come to >> something of a conclusion, they will request that the GHC Committee >> consider the proposal for acceptable by leaving a comment on the pull >> request. >> >>> Typo: two separate bullets in the proposal are labelled as 4. >> I believe this should be fixed now. Thanks! >> >> Cheers, >> >> - Ben >> >> _______________________________________________ >> Glasgow-haskell-users mailing list >> Glasgow-haskell-users at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users From ben at smart-cactus.org Wed Jul 20 20:18:26 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Wed, 20 Jul 2016 22:18:26 +0200 Subject: Should we send Travis messages to ghc-builds? Message-ID: <8737n4ysfh.fsf@smart-cactus.org> Hello everyone, While picking up the pieces from a failed merge today I realized that we currently spend a fair bit of carbon footprint and CPU cycles making Travis test GHC yet the results of these tests aren't pushed anywhere. Would anyone object to having Travis push notifications of changes in red/green state to ghc-builds at haskell.org? Perhaps this will allow some of us to react more quickly to regressions. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From ben at smart-cactus.org Wed Jul 20 22:15:50 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 21 Jul 2016 00:15:50 +0200 Subject: Should we send Travis messages to ghc-builds? In-Reply-To: <8737n4ysfh.fsf@smart-cactus.org> References: <8737n4ysfh.fsf@smart-cactus.org> Message-ID: <87mvlcx8fd.fsf@smart-cactus.org> Ben Gamari writes: > [ Unknown signature status ] > > Hello everyone, > > While picking up the pieces from a failed merge today I realized that we > currently spend a fair bit of carbon footprint and CPU cycles making > Travis test GHC yet the results of these tests aren't pushed anywhere. > > Would anyone object to having Travis push notifications of changes in > red/green state to ghc-builds at haskell.org? Perhaps this will allow some > of us to react more quickly to regressions. > Actually Thomas points out that we indeed used to do this and yet stopped because it meant that users would fork the repository, enable Travis build on their fork, and then inadvertantly spam the list. So, perhaps we shouldn't do this. - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From carter.schonwald at gmail.com Thu Jul 21 02:13:22 2016 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 20 Jul 2016 22:13:22 -0400 Subject: Proposal process status In-Reply-To: <874m7k1d0t.fsf@smart-cactus.org> References: <87mvlc1wi0.fsf@smart-cactus.org> <1469014951.4633.2.camel@gmail.com> <874m7k1d0t.fsf@smart-cactus.org> Message-ID: On Wednesday, July 20, 2016, Ben Gamari wrote: > Yuras Shumovich > writes: > > > Looks like reddit is a wrong place, so I'm replicating my comment here: > > > Thanks for your comments Yuras! > > >> * Do you feel the proposed process is an improvement over the > >> status quo? > > > > Yes, definitely. The existing process is too vague, so formalizing it > > is a win in any case. > > > Good to hear. > > >> * What would you like to see changed in the proposed process, if > >> anything? > > > > The proposed process overlaps with the Language Committee powers. In > > theory the Committee works on language standard, but de facto Haskell > > is GHC/Haskell and GHC/Haskell is Haskell. Adding new extension to GHC > > adds new extension to Haskell. So I'd like the process to enforce > > separation between experimental extensions (not recommended in > > production code) and language improvements. I'd like the process to > > specify how the GHC Committee is going to communicate and share powers > > with the Language Committee. > > > To clarify I think Language Committee here refers to the Haskell Prime > committee, right? > > I think these two bodies really do serve different purposes. > Historically the Haskell Prime committee has been quite conservative in > the sorts of changes that they standardized; as far as I know almost all > of them come from a compiler. I would imagine that the GHC Committee > would be a gate-keeper for proposals entering GHC and only some time > later, when the semantics and utility of the extension are > well-understood, would the Haskell Prime committee consider introducing > it to the Report. As far as I understand it, this is historically how > things have worked in the past, and I don't think this new process would > change that. > > Of course, let me know if I'm off-base here. As one of the 20 members of the Haskell (Prime) 2020 committee id like to interject on this front: the preliminary discussions the committee has had thus far had a clear agreement that we shall aim to be a bit more progressive about what shall be included in the standard. The main bar will be the extent to which features or capabilities can be articulated without over specifying implementation details and can tractably have compatible but different compilers for the standard. I think some of the other prime committee members can articulate this a bit better than I, so don't hold me to this precise phrasing ;) > > Cheers, > > - Ben > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Thu Jul 21 07:33:57 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Thu, 21 Jul 2016 09:33:57 +0200 Subject: Should we send Travis messages to ghc-builds? 
In-Reply-To: <87mvlcx8fd.fsf@smart-cactus.org> References: <8737n4ysfh.fsf@smart-cactus.org> <87mvlcx8fd.fsf@smart-cactus.org> Message-ID: <1469086437.3262.109.camel@joachim-breitner.de> Hi, Am Donnerstag, den 21.07.2016, 00:15 +0200 schrieb Ben Gamari: > Ben Gamari writes: > > > [ Unknown signature status ] > > > > Hello everyone, > > > > While picking up the pieces from a failed merge today I realized that we > > currently spend a fair bit of carbon footprint and CPU cycles making > > Travis test GHC yet the results of these tests aren't pushed anywhere. > > > > Would anyone object to having Travis push notifications of changes in > > red/green state to ghc-builds at haskell.org? Perhaps this will allow some > > of us to react more quickly to regressions. > > > Actually Thomas points out that we indeed used to do this and yet > stopped because it meant that users would fork the repository, enable > Travis build on their fork, and then inadvertantly spam the list. So, > perhaps we shouldn't do this. Yes, that is a problem. I still get failed build mails whenever Simon M. pushes to this 7.10.3-facebook branch. But should by default Travis notify the committer of a failed commit? Is that not sufficient? Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From shumovichy at gmail.com Thu Jul 21 12:20:12 2016 From: shumovichy at gmail.com (Yuras Shumovich) Date: Thu, 21 Jul 2016 15:20:12 +0300 Subject: Should we send Travis messages to ghc-builds? In-Reply-To: <87mvlcx8fd.fsf@smart-cactus.org> References: <8737n4ysfh.fsf@smart-cactus.org> <87mvlcx8fd.fsf@smart-cactus.org> Message-ID: <1469103612.4633.6.camel@gmail.com> On Thu, 2016-07-21 at 00:15 +0200, Ben Gamari wrote: > Ben Gamari writes: > > > [ Unknown signature status ] > > > > Hello everyone, > > > > While picking up the pieces from a failed merge today I realized > > that we > > currently spend a fair bit of carbon footprint and CPU cycles > > making > > Travis test GHC yet the results of these tests aren't pushed > > anywhere. > > > > Would anyone object to having Travis push notifications of changes > > in > > red/green state to ghc-builds at haskell.org? Perhaps this will allow > > some > > of us to react more quickly to regressions. > > > Actually Thomas points out that we indeed used to do this and yet > stopped because it meant that users would fork the repository, enable > Travis build on their fork, and then inadvertantly spam the list. So, > perhaps we shouldn't do this. I think it could be controlled by an environment variable set in travis UI ( https://travis-ci.org/ghc/ghc/settings ). Repository fork will not clone the variables. Thanks, Yuras. 
> > - Ben > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From shumovichy at gmail.com Thu Jul 21 12:51:01 2016 From: shumovichy at gmail.com (Yuras Shumovich) Date: Thu, 21 Jul 2016 15:51:01 +0300 Subject: Proposal process status In-Reply-To: <874m7k1d0t.fsf@smart-cactus.org> References: <87mvlc1wi0.fsf@smart-cactus.org> <1469014951.4633.2.camel@gmail.com> <874m7k1d0t.fsf@smart-cactus.org> Message-ID: <1469105461.4633.21.camel@gmail.com> On Wed, 2016-07-20 at 18:37 +0200, Ben Gamari wrote: > Yuras Shumovich writes: > > > Looks like reddit is a wrong place, so I'm replicating my comment > > here: > > > Thanks for your comments Yuras! > > > >   * Do you feel the proposed process is an improvement over the > > > status quo? > > > > Yes, definitely. The existing process is too vague, so formalizing > > it > > is a win in any case. > > > Good to hear. > > > >   * What would you like to see changed in the proposed process, > > > if > > >     anything? > > > > The proposed process overlaps with the Language Committee powers. > > In > > theory the Committee works on language standard, but de facto > > Haskell > > is GHC/Haskell and GHC/Haskell is Haskell. Adding new extension to > > GHC > > adds new extension to Haskell. So I'd like the process to enforce > > separation between experimental extensions (not recommended in > > production code) and language improvements. I'd like the process to > > specify how the GHC Committee is going to communicate and share > > powers > > with the Language Committee. > > > To clarify I think Language Committee here refers to the Haskell > Prime > committee, right? Yes, Herbert used "Haskell Prime 2020 committee" and "Haskell Language committee" interchangeable in the original announcement https://mail.ha skell.org/pipermail/haskell-prime/2016-April/004050.html > > I think these two bodies really do serve different purposes. > Historically the Haskell Prime committee has been quite conservative > in > the sorts of changes that they standardized; as far as I know almost > all > of them come from a compiler. I would imagine that the GHC Committee > would be a gate-keeper for proposals entering GHC and only some time > later, when the semantics and utility of the extension are > well-understood, would the Haskell Prime committee consider > introducing > it to the Report. As far as I understand it, this is historically how > things have worked in the past, and I don't think this new process > would > change that. I think it is what the process should change. It makes sense to have two committees only if we have multiple language implementations, but it is not the case. Prime committee may accept or reject e.g. GADTs, but it will change nothing because people will continue using GADTs regardless, and any feature accepted by the Prime committee will necessary be compatible with GADTs extension. The difference between standard and GHC-specific extensions is just a question of formal specification, interesting mostly for language lawyer. (But it is good to have such formal specification even for GHC- specific extensions, right?) Probably it is time to return -fglasgow-exts back to separate standard feature from experimental GHC-specific ones. > > Of course, let me know if I'm off-base here. 
> > Cheers, > > - Ben From gershomb at gmail.com Thu Jul 21 14:32:18 2016 From: gershomb at gmail.com (Gershom B) Date: Thu, 21 Jul 2016 10:32:18 -0400 Subject: Proposal process status In-Reply-To: <1469105461.4633.21.camel@gmail.com> References: <87mvlc1wi0.fsf@smart-cactus.org> <1469014951.4633.2.camel@gmail.com> <874m7k1d0t.fsf@smart-cactus.org> <1469105461.4633.21.camel@gmail.com> Message-ID: On July 21, 2016 at 8:51:15 AM, Yuras Shumovich (shumovichy at gmail.com) wrote: > > I think it is what the process should change. It makes sense to have > two committees only if we have multiple language implementations, but > it is not the case. Prime committee may accept or reject e.g. GADTs, > but it will change nothing because people will continue using GADTs > regardless, and any feature accepted by the Prime committee will > necessary be compatible with GADTs extension. I disagree. By the stated goals of the H2020 Committee, if it is successful, then by 2020 it will still for the most part have only standardized ony a _portion_ of the extentions that now exist today. There’s always been a barrier between implementation and standard in the Haskell language, that’s precisely one of the things that _keeps_ it from having become entirely implementation-defined despite the prevelance of extensions. Having two entirely different processes here (though obviously not without communication between the individuals involved) helps maintain that. —Gershom From rae at cs.brynmawr.edu Thu Jul 21 14:40:13 2016 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Thu, 21 Jul 2016 10:40:13 -0400 Subject: Proposal process status In-Reply-To: References: <87mvlc1wi0.fsf@smart-cactus.org> <1469014951.4633.2.camel@gmail.com> <874m7k1d0t.fsf@smart-cactus.org> <1469105461.4633.21.camel@gmail.com> Message-ID: <2157B2C8-F46A-4FFC-9CFC-9CC033DF4034@cs.brynmawr.edu> > On Jul 21, 2016, at 10:32 AM, Gershom B wrote: > > On July 21, 2016 at 8:51:15 AM, Yuras Shumovich (shumovichy at gmail.com) wrote: >> >> It makes sense to have >> two committees only if we have multiple language implementations, but >> it is not the case. > I disagree. By the stated goals of the H2020 Committee, if it is successful, then by 2020 it will still for the most part have only standardized ony a _portion_ of the extentions that now exist today. +1 to Gershom's comment. From shumovichy at gmail.com Thu Jul 21 15:29:21 2016 From: shumovichy at gmail.com (Yuras Shumovich) Date: Thu, 21 Jul 2016 18:29:21 +0300 Subject: Proposal process status In-Reply-To: References: <87mvlc1wi0.fsf@smart-cactus.org> <1469014951.4633.2.camel@gmail.com> <874m7k1d0t.fsf@smart-cactus.org> <1469105461.4633.21.camel@gmail.com> Message-ID: <1469114961.4633.48.camel@gmail.com> On Thu, 2016-07-21 at 10:32 -0400, Gershom B wrote: > On July 21, 2016 at 8:51:15 AM, Yuras Shumovich (shumovichy at gmail.com > ) wrote: > > > > I think it is what the process should change. It makes sense to > > have > > two committees only if we have multiple language implementations, > > but > > it is not the case. Prime committee may accept or reject e.g. > > GADTs, > > but it will change nothing because people will continue using GADTs > > regardless, and any feature accepted by the Prime committee will > > necessary be compatible with GADTs extension. > > I disagree. By the stated goals of the H2020 Committee, if it is > successful, then by 2020 it will still for the most part have only > standardized ony a _portion_ of the extentions that now exist today. Yes, I know. 
But don't you see how narrow the responsibility of the H2020 Committee is? GHC Committee makes all important decisions, and H2020 just collects some of GHC extensions into a set of "standard" ones. It is useful only when "nonstandard" extensions are not widely used (e.g. marked as experimental, and are not recommended for day-to- day use). > > There’s always been a barrier between implementation and standard in > the Haskell language, that’s precisely one of the things that _keeps_ > it from having become entirely implementation-defined despite the > prevelance of extensions. Unfortunately Haskell *is* implementation-defined language. You can't compile any nontrivial package from Hackage using Haskell2010 GHC. And the same will be true for Haskell2020. We rely on GHC-specific extensions everywhere, directly or indirectly. If the goal of the Haskell Prime is to change that, then the GHC-specific extensions should not be first class citizens in the ecosystem. Otherwise there is no sense in two committees. We can continue pretending that Haskell is standard-defined language, but it will not help to change the situation.  > > Having two entirely different processes here (though obviously not > without communication between the individuals involved) helps > maintain that. > > —Gershom > > From rae at cs.brynmawr.edu Thu Jul 21 17:25:58 2016 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Thu, 21 Jul 2016 13:25:58 -0400 Subject: Proposal process status In-Reply-To: <1469114961.4633.48.camel@gmail.com> References: <87mvlc1wi0.fsf@smart-cactus.org> <1469014951.4633.2.camel@gmail.com> <874m7k1d0t.fsf@smart-cactus.org> <1469105461.4633.21.camel@gmail.com> <1469114961.4633.48.camel@gmail.com> Message-ID: > On Jul 21, 2016, at 11:29 AM, Yuras Shumovich wrote: > > Unfortunately Haskell *is* implementation-defined language. You can't > compile any nontrivial package from Hackage using Haskell2010 GHC. Sadly, I agree with this statement. And I think this is what we're trying to change. > And > the same will be true for Haskell2020. We rely on GHC-specific > extensions everywhere, directly or indirectly. If the goal of the > Haskell Prime is to change that, then the GHC-specific extensions > should not be first class citizens in the ecosystem. My hope is that Haskell2020 will allow us to differentiate between standardized extensions and implementation-defined ones. A key part of this hope is that we'll get enough extensions in the first set to allow a sizeable portion of our ecosystem to used only standardized extensions. > > We can continue pretending that Haskell is standard-defined language, > but it will not help to change the situation. But writing a new standard that encompasses prevalent usage will help to change the situation. And that's the process I'm hoping to contribute to. Richard From shumovichy at gmail.com Thu Jul 21 18:25:53 2016 From: shumovichy at gmail.com (Yuras Shumovich) Date: Thu, 21 Jul 2016 21:25:53 +0300 Subject: Proposal process status In-Reply-To: References: <87mvlc1wi0.fsf@smart-cactus.org> <1469014951.4633.2.camel@gmail.com> <874m7k1d0t.fsf@smart-cactus.org> <1469105461.4633.21.camel@gmail.com> <1469114961.4633.48.camel@gmail.com> Message-ID: <1469125553.4633.68.camel@gmail.com> On Thu, 2016-07-21 at 13:25 -0400, Richard Eisenberg wrote: > > > > On Jul 21, 2016, at 11:29 AM, Yuras Shumovich > > wrote: > > > > Unfortunately Haskell *is* implementation-defined language. You > > can't > > compile any nontrivial package from Hackage using Haskell2010 GHC. 
> > Sadly, I agree with this statement. And I think this is what we're > trying to change. And I'd like it to be changed too. I'm paid for writing SW in Haskell, and I want to have a standard. At the same time I'm (probably unusual) Haskell fan, so I want new cool features. Don't you see a conflict of interests? https://www.reddit.com/r/haskell/comments/4oyxo2/blog_contributing_to_ghc/d4iaz5t > > > And > > the same will be true for Haskell2020. We rely on GHC-specific > > extensions everywhere, directly or indirectly. If the goal of the > > Haskell Prime is to change that, then the GHC-specific extensions > > should not be first class citizens in the ecosystem. > > My hope is that Haskell2020 will allow us to differentiate between > standardized extensions and implementation-defined ones. A key part > of this hope is that we'll get enough extensions in the first set to > allow a sizeable portion of our ecosystem to used only standardized > extensions. It is hopeless. Haskell2020 will not include TemplateHaskell, GADTs, etc. Haskell Prime committee will never catch up if GHC will continue adding new extensions. In 2020 everybody will use pattern synonyms, overloaded record fields and TypeInType, so the standard will be as far from practice as it is now. The whole idea of language extensions, as it is right now, works against Haskell Prime. https://www.reddit.com/r/haskell/comments/46jq4i/what_is_the_eventual_fate_of_all_of_these_ghc/d05q9no I abandoned my CStructures proposal because of that. I don't want to increase entropy. https://phabricator.haskell.org/D252 > > > > We can continue pretending that Haskell is standard-defined > > language, > > but it will not help to change the situation.  > > But writing a new standard that encompasses prevalent usage will help > to change the situation. And that's the process I'm hoping to > contribute to. I see only one real way to change the situation -- standardize all widely used extensions and declare anything new as experimental unless accepted by the Haskell Prime Committee. Probably there are other ways, but we need to cleanup the mess ASAP. New extensions only contribute to the mess -- that is my point. > > Richard From rae at cs.brynmawr.edu Thu Jul 21 18:38:56 2016 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Thu, 21 Jul 2016 14:38:56 -0400 Subject: Proposal process status In-Reply-To: <1469125553.4633.68.camel@gmail.com> References: <87mvlc1wi0.fsf@smart-cactus.org> <1469014951.4633.2.camel@gmail.com> <874m7k1d0t.fsf@smart-cactus.org> <1469105461.4633.21.camel@gmail.com> <1469114961.4633.48.camel@gmail.com> <1469125553.4633.68.camel@gmail.com> Message-ID: <2BDA9BCD-85CE-4989-9DB8-C425B5C9514D@cs.brynmawr.edu> > On Jul 21, 2016, at 2:25 PM, Yuras Shumovich wrote: > > It is hopeless. Haskell2020 will not include TemplateHaskell, GADTs, > etc. Why do you say this? I don't think this is a forgone conclusion. I'd love to see these standardized. My own 2¢ on these are that we can standardize some subset of TemplateHaskell quite easily. GADTs are harder because (to my knowledge) no one has ever written a specification of type inference for GADTs. (Note that the OutsideIn paper admits to failing at this.) Perhaps we can nail it, but perhaps not. Even so, we can perhaps standardize much of the behavior around GADTs (but with pattern matches requiring lots of type annotations) and say that an implementation is free to do better. Maybe we can do even better than this, but I doubt we'll totally ignore this issue. 
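For a concrete sense of the annotation burden meant here, a minimal sketch (the names are made up, and only the GADTs extension itself is assumed) of a program that only type-checks because of its signature:

    {-# LANGUAGE GADTs #-}
    module EvalSketch where

    data Expr a where
      IntE  :: Int  -> Expr Int
      BoolE :: Bool -> Expr Bool

    -- Each alternative returns a different type; it is the signature
    -- 'Expr a -> a' that makes the match well-typed. A standard could
    -- require such signatures and let implementations infer more.
    eval :: Expr a -> a
    eval (IntE n)  = n
    eval (BoolE b) = b

Dropping the signature makes GHC reject the definition, which is roughly the "lots of type annotations" behaviour a report could pin down while leaving room for smarter inference.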
> Haskell Prime committee will never catch up if GHC will continue > adding new extensions. Of course not. But I believe some libraries also refrain from using new extensions for precisely the same reason -- that the new extensions have yet to fully gel. > In 2020 everybody will use pattern synonyms, > overloaded record fields and TypeInType, so the standard will be as far > from practice as it is now. Pattern synonyms, now with a published paper behind them, may actually be in good enough shape to standardize by 2020. I don't know anything about overloaded record fields. I'd be shocked if TypeInType is ready to standardize by 2020. But hopefully we'll get to it. > > The whole idea of language extensions, as it is right now, works > against Haskell Prime. I heartily disagree here. Ideas that are now standard had to have started somewhere, and I really like (in theory) the way GHC/Haskell does this. The (in theory) parenthetical is because the standardization process has been too, well, dead to be useful. Is that changing? Perhaps. I'd love to see more action on that front. I'm hoping to take on a more active role in the committee after my dissertation is out the door (2 more weeks!). > > I see only one real way to change the situation -- standardize all > widely used extensions and declare anything new as experimental unless > accepted by the Haskell Prime Committee. Agreed here. I think that's what we're trying to do. If you have a good specification for GADT type inference, that would help us. :) Richard From spam at scientician.net Thu Jul 21 19:45:59 2016 From: spam at scientician.net (Bardur Arantsson) Date: Thu, 21 Jul 2016 21:45:59 +0200 Subject: Proposal process status In-Reply-To: <2BDA9BCD-85CE-4989-9DB8-C425B5C9514D@cs.brynmawr.edu> References: <87mvlc1wi0.fsf@smart-cactus.org> <1469014951.4633.2.camel@gmail.com> <874m7k1d0t.fsf@smart-cactus.org> <1469105461.4633.21.camel@gmail.com> <1469114961.4633.48.camel@gmail.com> <1469125553.4633.68.camel@gmail.com> <2BDA9BCD-85CE-4989-9DB8-C425B5C9514D@cs.brynmawr.edu> Message-ID: On 07/21/2016 08:38 PM, Richard Eisenberg wrote: > >> On Jul 21, 2016, at 2:25 PM, Yuras Shumovich wrote: >> [--snip--] >> Haskell Prime committee will never catch up if GHC will continue >> adding new extensions. > > Of course not. But I believe some libraries also refrain from using new extensions for precisely the same reason -- that the new extensions have yet to fully gel. > Indeed, a major issue right now is that Haskell-as-practiced is *sooo* far from Haskell-as-standardized (H2010) that it's basically hopeless to implement most non-trivial things using only H2010. We're not even talking missing very advanced things, just "basic" things like MPTCs, ScopedTypeVariables not being the standard behavior, various derivations, auto-Typeable, TypeFams vs. FunDeps[1], plus various minor syntactic conveniences[2], BangPatterns, etc. etc. Just FTR: Of course, I realize that standardizing any of this is *much* easier said than done, so this is NOT meant as a slight against anyone! Regards, [1] Alright, this one might be a little more contentious, but the basic functionality of FunDeps (or the equivalent functionality form TypeFams) is sometimes necessary. [2] These I can and will work around if necessary, but as long as something else in my package requires anything non-Haskell2010, why should I bother? 
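To make [1] concrete, here is a minimal sketch of the FunDep flavour of that functionality; the class and instance are invented purely for illustration, and none of it is valid Haskell 2010:

    {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}
    module ContainerSketch where

    -- The dependency 'c -> e' records that a container type determines its
    -- element type, so once 'c' is known 'e' follows without annotations.
    class Container c e | c -> e where
      empty  :: c
      insert :: e -> c -> c

    instance Container [a] a where
      empty  = []
      insert = (:)

Even this small example already needs three extensions beyond the Report, which is the point being made above.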
From shumovichy at gmail.com Thu Jul 21 19:52:23 2016 From: shumovichy at gmail.com (Yuras Shumovich) Date: Thu, 21 Jul 2016 22:52:23 +0300 Subject: Proposal process status In-Reply-To: <2BDA9BCD-85CE-4989-9DB8-C425B5C9514D@cs.brynmawr.edu> References: <87mvlc1wi0.fsf@smart-cactus.org> <1469014951.4633.2.camel@gmail.com> <874m7k1d0t.fsf@smart-cactus.org> <1469105461.4633.21.camel@gmail.com> <1469114961.4633.48.camel@gmail.com> <1469125553.4633.68.camel@gmail.com> <2BDA9BCD-85CE-4989-9DB8-C425B5C9514D@cs.brynmawr.edu> Message-ID: <1469130743.4633.95.camel@gmail.com> On Thu, 2016-07-21 at 14:38 -0400, Richard Eisenberg wrote: > > > > On Jul 21, 2016, at 2:25 PM, Yuras Shumovich > > wrote: > > > > It is hopeless. Haskell2020 will not include TemplateHaskell, > > GADTs, > > etc. > > Why do you say this? I don't think this is a forgone conclusion. I'd > love to see these standardized. Because I'm a pessimist :) We even can't agree to add `text` to the standard library. > > My own 2¢ on these are that we can standardize some subset of > TemplateHaskell quite easily. GADTs are harder because (to my > knowledge) no one has ever written a specification of type inference > for GADTs. (Note that the OutsideIn paper admits to failing at this.) > Perhaps we can nail it, but perhaps not. Even so, we can perhaps > standardize much of the behavior around GADTs (but with pattern > matches requiring lots of type annotations) and say that an > implementation is free to do better. Maybe we can do even better than > this, but I doubt we'll totally ignore this issue. > > > Haskell Prime committee will never catch up if GHC will continue > > adding new extensions. > > Of course not. But I believe some libraries also refrain from using > new extensions for precisely the same reason -- that the new > extensions have yet to fully gel. And you are an optimist. We are lazy, so we'll use whatever is convenient. There are three ways to force people to refrain from using new extensions: - mature alternative compiler exists, so nobody will use your library unless it uses only the common subset of features; - the standard covers all usual needs (I don't think it will be possible in near future, and existence of this email thread proves that.) - new features are not first class citizens; e.g. `cabal check` issues an error (or warning) when you are uploading a package with immature extension used. > > > In 2020 everybody will use pattern synonyms, > > overloaded record fields and TypeInType, so the standard will be as > > far > > from practice as it is now. > > Pattern synonyms, now with a published paper behind them, may > actually be in good enough shape to standardize by 2020. I don't know > anything about overloaded record fields. I'd be shocked if TypeInType > is ready to standardize by 2020. But hopefully we'll get to it. > > > > > The whole idea of language extensions, as it is right now, works > > against Haskell Prime. > > I heartily disagree here. Ideas that are now standard had to have > started somewhere, and I really like (in theory) the way GHC/Haskell > does this. I'm not against language extensions completely. But using them should be a real pain to prevent people from using then everywhere. Ideally you should have to compile GHC manually to get a particular extension enabled :) > > The (in theory) parenthetical is because the standardization process > has been too, well, dead to be useful. Is that changing? Perhaps. I'd > love to see more action on that front. 
I'm hoping to take on a more > active role in the committee after my dissertation is out the door (2 > more weeks!). > > > > I see only one real way to change the situation -- standardize all > > widely used extensions and declare anything new as experimental > > unless > > accepted by the Haskell Prime Committee. > > Agreed here. Great. So I propose to split section "9. GHC Language Features" of the user manual into "Stable language extensions" and "Experimental language extensions", move all the recently added extensions into the latter one, explicitly state in the proposed process that all new extensions go to the "Experimental" subsection initially and specify when they go to the "Stable" subsection. > I think that's what we're trying to do. If you have a good > specification for GADT type inference, that would help us. :) I'd personally prefer to mark GADT and TH as experimental. The difficulties with their standardizing is a sign of immaturity. I regret about each time I used them in production code. > > Richard From mle+hs at mega-nerd.com Thu Jul 21 20:59:31 2016 From: mle+hs at mega-nerd.com (Erik de Castro Lopo) Date: Fri, 22 Jul 2016 06:59:31 +1000 Subject: GHC build currently broken on OS X and Windows Message-ID: <20160722065931.c3f5a56b5ca7ae7d6337b8cf@mega-nerd.com> Hi all. The recent Compact Regions commit (cf989ffe490c146be4ed0fd7e0c00d3ff8fe1453) builds fine on Linux but doesn't build on OS X and is at least 99% certain not to build on Windows. I'm in the process of fixing it, but it will be 24-36 hours before I have a patch ready. Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ From karel.gardas at centrum.cz Thu Jul 21 21:02:25 2016 From: karel.gardas at centrum.cz (Karel Gardas) Date: Thu, 21 Jul 2016 23:02:25 +0200 Subject: GHC build currently broken on OS X and Windows In-Reply-To: <20160722065931.c3f5a56b5ca7ae7d6337b8cf@mega-nerd.com> References: <20160722065931.c3f5a56b5ca7ae7d6337b8cf@mega-nerd.com> Message-ID: <57913861.1030008@centrum.cz> Hi, side note, it's also broken by cac3fb06f4b282eee21159c364c4d08e8fdedce9 on Solaris. I'll hopefully get to it during the weekend. Karel On 07/21/16 10:59 PM, Erik de Castro Lopo wrote: > Hi all. > > The recent Compact Regions commit (cf989ffe490c146be4ed0fd7e0c00d3ff8fe1453) > builds fine on Linux but doesn't build on OS X and is at least 99% certain > not to build on Windows. > > I'm in the process of fixing it, but it will be 24-36 hours before I have > a patch ready. > > Erik > From marlowsd at gmail.com Thu Jul 21 21:13:13 2016 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 21 Jul 2016 22:13:13 +0100 Subject: GHC build currently broken on OS X and Windows In-Reply-To: <20160722065931.c3f5a56b5ca7ae7d6337b8cf@mega-nerd.com> References: <20160722065931.c3f5a56b5ca7ae7d6337b8cf@mega-nerd.com> Message-ID: Thanks Erik, I'm happy to revert if this is a problem for folks in the meantime. On 21 July 2016 at 21:59, Erik de Castro Lopo wrote: > Hi all. > > The recent Compact Regions commit > (cf989ffe490c146be4ed0fd7e0c00d3ff8fe1453) > builds fine on Linux but doesn't build on OS X and is at least 99% certain > not to build on Windows. > > I'm in the process of fixing it, but it will be 24-36 hours before I have > a patch ready. 
> > Erik > -- > ---------------------------------------------------------------------- > Erik de Castro Lopo > http://www.mega-nerd.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz at lichtzwerge.de Fri Jul 22 00:18:13 2016 From: moritz at lichtzwerge.de (Moritz Angermann) Date: Fri, 22 Jul 2016 08:18:13 +0800 Subject: GHC build currently broken on OS X and Windows In-Reply-To: <57913861.1030008@centrum.cz> References: <20160722065931.c3f5a56b5ca7ae7d6337b8cf@mega-nerd.com> <57913861.1030008@centrum.cz> Message-ID: Sorry about that :( > On Jul 22, 2016, at 5:02 AM, Karel Gardas wrote: > > > Hi, > > side note, it's also broken by cac3fb06f4b282eee21159c364c4d08e8fdedce9 on Solaris. I'll hopefully get to it during the weekend. > > Karel > > On 07/21/16 10:59 PM, Erik de Castro Lopo wrote: >> Hi all. >> >> The recent Compact Regions commit (cf989ffe490c146be4ed0fd7e0c00d3ff8fe1453) >> builds fine on Linux but doesn't build on OS X and is at least 99% certain >> not to build on Windows. >> >> I'm in the process of fixing it, but it will be 24-36 hours before I have >> a patch ready. >> >> Erik >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at smart-cactus.org Fri Jul 22 07:50:31 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Fri, 22 Jul 2016 09:50:31 +0200 Subject: GHC build currently broken on OS X and Windows In-Reply-To: <57913861.1030008@centrum.cz> References: <20160722065931.c3f5a56b5ca7ae7d6337b8cf@mega-nerd.com> <57913861.1030008@centrum.cz> Message-ID: <87vazyw1q0.fsf@smart-cactus.org> Karel Gardas writes: > Hi, > > side note, it's also broken by cac3fb06f4b282eee21159c364c4d08e8fdedce9 > on Solaris. I'll hopefully get to it during the weekend. I've gone ahead and reverted this for now (although I now realize that I botched the commit message; oh well). Karel or Moritz, feel free to push a new version when you are satisfied it works. Thanks! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From lonetiger at gmail.com Fri Jul 22 17:02:35 2016 From: lonetiger at gmail.com (lonetiger at gmail.com) Date: Fri, 22 Jul 2016 18:02:35 +0100 Subject: Proposal process status In-Reply-To: References: <87mvlc1wi0.fsf@smart-cactus.org> <1469014951.4633.2.camel@gmail.com> <874m7k1d0t.fsf@smart-cactus.org> <1469105461.4633.21.camel@gmail.com> <1469114961.4633.48.camel@gmail.com> Message-ID: <579251ae.8aacc20a.ff124.a546@mx.google.com> +1 from me for keeping the two separate as well. While GHC may be the obviously prevalent Haskell compiler it is a far from the only one, And I’d hate to have to look at a proposal for adding an extension to GHC (which would be riddled with GHC specific implementation specifics) Rather than a clean specification. Maybe I’m naïve but I also see the Haskell committees as doing more than just copy pasting what’s worked. But also evaluate how it can be done Better. I can perfectly well see situations where the implementation in GHC ended up being less useful than It should be just because of Implementation quirks/difficulties in GHC. 
Cheers, Tamar From: Richard Eisenberg -------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Mon Jul 25 10:14:39 2016 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Mon, 25 Jul 2016 10:14:39 +0000 Subject: Supporting unboxed tuples in the bytecode compiler Message-ID: Simon, I was looking at the bytecode compiler to understand what's needed to support unboxed tuples. It seems like if we generate bytecode after unarise it should be very easy to support unboxed tuples, because after unarise we don't have any unboxed tuple binders (all binders have UnaryType). So only places we see unboxed tuples are: - Return positions. We just push contents of the tuple to the stack. - Case alternatives. The case expression in this case has to have this form: case e1 of (# bndr1, bndr2, ..., bndrN #) -> RHS All binders will have unary types again. We just bind ids in the environment to their stack locations and compile RHS. I think that's it. We also get unboxed sums support for free when we do this after unarise. What do you think about compiling to bytecode from STG? Have you considered that before? Would that be a problem for GHCi's debugger or any other features? From marlowsd at gmail.com Mon Jul 25 13:07:09 2016 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 25 Jul 2016 14:07:09 +0100 Subject: Supporting unboxed tuples in the bytecode compiler In-Reply-To: References: Message-ID: If I remember correctly, it's returning and case alternatives that are the problem (unarise is relatively new, for a long time you could only use unboxed tuples in returns and case alternatives). Take your case expression example: case e1 of (# bndr1, bndr2, ..., bndrN #) -> RHS Now suppose that e1 is a call to a compiled function, and the RHS is byte code. Normally when this happens we push a stack frame that will capture the values returned by the compiled code, save them on the stack, and pass control to the interpreter. These are the stg_ctoi_XXX frames in StgMiscClosures.cmm. The problem with unboxed tuples is that we would need an infinite number of these to handle all the possible return conventions for unboxed tuples. Since we can't use pre-compiled stack frames, we would need to use a scheme that involves some separate description of the stack, using a bitmap of some kind. I think this should be possible, but there isn't an existing stack frame that does the right thing - the closest is RET_BCO, but then you'd have to make another BCO just for the purpose of containing the bitmap to describe the stack layout. And you somehow have to extract the values that are being returned in registers and save them on the stack too. The other half of the problem is returning an arbitrary unboxed tuple to compiled code, where we would have to arrange that the correct values get put in the correct registers according to the return convention, and here we would need some kind of descriptor and an interpretive loop in Cmm to do the loading of values from the stack into registers. So I think it's possible, but there are quite a lot of fiddly details. Cheers Simon On 25 July 2016 at 11:14, Ömer Sinan Ağacan wrote: > Simon, > > I was looking at the bytecode compiler to understand what's needed to > support > unboxed tuples. It seems like if we generate bytecode after unarise it > should be > very easy to support unboxed tuples, because after unarise we don't have > any > unboxed tuple binders (all binders have UnaryType). 
So only places we see > unboxed tuples are: > > - Return positions. We just push contents of the tuple to the stack. > > - Case alternatives. The case expression in this case has to have this > form: > > case e1 of > (# bndr1, bndr2, ..., bndrN #) -> RHS > > All binders will have unary types again. We just bind ids in the > environment to their stack locations and compile RHS. > > I think that's it. We also get unboxed sums support for free when we do > this > after unarise. > > What do you think about compiling to bytecode from STG? Have you > considered that > before? Would that be a problem for GHCi's debugger or any other features? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cma at bitemyapp.com Tue Jul 26 01:15:33 2016 From: cma at bitemyapp.com (Christopher Allen) Date: Mon, 25 Jul 2016 20:15:33 -0500 Subject: Re-exporting traverse_ from Data.Traversable Message-ID: Any reason not to do it? I realize it needs Foldable, but even knowing that I still forget it's in Data.Foldable. Seems like a free UX win to me. From ekmett at gmail.com Tue Jul 26 02:04:36 2016 From: ekmett at gmail.com (Edward Kmett) Date: Mon, 25 Jul 2016 22:04:36 -0400 Subject: Re-exporting traverse_ from Data.Traversable In-Reply-To: References: Message-ID: Yes there is. For existing code if anybody has already explicitly hidden it from Data.Foldable, so they can work with traverse_ from some specific container type, they'd now get slapped with it on the backswing by Data.Traversable. import Data.Foldable hiding (traverse_) import SomeContainer import Data.Traversable main = something that used a monomorphic traverse_ from SomeContainer This is sort of like how users get hammered with Control.Monad methods from all of the Control.Monad.Foo modules today, where it can be remarkably hard to hide all the attempts they make to shove the same fail, join, etc. down your throat. -Edward On Mon, Jul 25, 2016 at 9:15 PM, Christopher Allen wrote: > Any reason not to do it? I realize it needs Foldable, but even knowing > that I still forget it's in Data.Foldable. Seems like a free UX win to > me. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Wed Jul 27 09:15:16 2016 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Wed, 27 Jul 2016 09:15:16 +0000 Subject: atomicModifyMutVar#: cas() is not inlined Message-ID: This is from definition of stg_atomicModifyMutVarzh(): (for threaded runtime) retry: x = StgMutVar_var(mv); StgThunk_payload(z,1) = x; (h) = ccall cas(mv + SIZEOF_StgHeader + OFFSET_StgMutVar_var, x, y); if (h != x) { goto retry; } cas() is defined in includes/stg/SMP.h like this: EXTERN_INLINE StgWord cas(StgVolatilePtr p, StgWord o, StgWord n) { return __sync_val_compare_and_swap(p, o, n); } I think this is a function we want to make sure to inline everywhere, right? It's compiled to a single instruction on my x86_64 Linux laptop. >>> disassemble cas Dump of assembler code for function cas: 0x0000000000027240 <+0>: mov %rsi,%rax 0x0000000000027243 <+3>: lock cmpxchg %rdx,(%rdi) 0x0000000000027248 <+8>: retq End of assembler dump. But it seems like it's not really inlined in Cmm functions: >>> disassemble stg_atomicModifyMutVarzh Dump of assembler code for function stg_atomicModifyMutVarzh: ... 
0x0000000000046738 <+120>: callq 0x27240 ... End of assembler dump. I guess the problem is that we can't inline C code in Cmm, but I was wondering if this is important enough to try to fix maybe. Has anyone here looked at some profiling info to see how much time spent on this cas() call when threads are blocked in `atomicModifyIORef` etc? From omeragacan at gmail.com Wed Jul 27 22:28:03 2016 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Wed, 27 Jul 2016 22:28:03 +0000 Subject: atomicModifyMutVar#: cas() is not inlined In-Reply-To: References: Message-ID: To keep this thread up-to-date: https://phabricator.haskell.org/D2431 2016-07-27 14:08 GMT+00:00 Alex Biehl : > There already is the CallishMachOp MO_cmpxchg in > https://github.com/ghc/ghc/blob/5d98b8bf249fab9bb0be6c5d4e8ddd4578994abb/compiler/cmm/CmmMachOp.hs#L587 > > All is left todo would be to add it to CmmParse.y (which has a TODO comment > https://github.com/ghc/ghc/blob/714bebff44076061d0a719c4eda2cfd213b7ac3d/compiler/cmm/CmmParse.y#L992) > then you could use that instead of the ccall. > > Ömer Sinan Ağacan schrieb am Mi., 27. Juli 2016 um > 11:15 Uhr: >> >> This is from definition of stg_atomicModifyMutVarzh(): (for threaded >> runtime) >> >> retry: >> x = StgMutVar_var(mv); >> StgThunk_payload(z,1) = x; >> (h) = ccall cas(mv + SIZEOF_StgHeader + OFFSET_StgMutVar_var, x, y); >> if (h != x) { goto retry; } >> >> cas() is defined in includes/stg/SMP.h like this: >> >> EXTERN_INLINE StgWord >> cas(StgVolatilePtr p, StgWord o, StgWord n) >> { >> return __sync_val_compare_and_swap(p, o, n); >> } >> >> I think this is a function we want to make sure to inline everywhere, >> right? >> It's compiled to a single instruction on my x86_64 Linux laptop. >> >> >>> disassemble cas >> Dump of assembler code for function cas: >> 0x0000000000027240 <+0>: mov %rsi,%rax >> 0x0000000000027243 <+3>: lock cmpxchg %rdx,(%rdi) >> 0x0000000000027248 <+8>: retq >> End of assembler dump. >> >> But it seems like it's not really inlined in Cmm functions: >> >> >>> disassemble stg_atomicModifyMutVarzh >> Dump of assembler code for function stg_atomicModifyMutVarzh: >> ... >> 0x0000000000046738 <+120>: callq 0x27240 >> ... >> End of assembler dump. >> >> I guess the problem is that we can't inline C code in Cmm, but I was >> wondering >> if this is important enough to try to fix maybe. Has anyone here looked at >> some >> profiling info to see how much time spent on this cas() call when threads >> are >> blocked in `atomicModifyIORef` etc? >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From chak at justtesting.org Thu Jul 28 02:22:53 2016 From: chak at justtesting.org (Manuel M T Chakravarty) Date: Thu, 28 Jul 2016 12:22:53 +1000 Subject: macOS 10.12 Message-ID: <17953970-4DEA-41D1-BEFA-6BEB0559D9AD@justtesting.org> Has anybody tried compiling GHC with the (beta) macOS 10.12 Sierra SDK? I just gave it a shot trying to compile GHC 7.8.3 and ran into a problem with GetTime.c in the RTS. (I haven’t tried GHC 8.0.1.) Manuel From carter.schonwald at gmail.com Thu Jul 28 21:13:28 2016 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 28 Jul 2016 17:13:28 -0400 Subject: macOS 10.12 In-Reply-To: <17953970-4DEA-41D1-BEFA-6BEB0559D9AD@justtesting.org> References: <17953970-4DEA-41D1-BEFA-6BEB0559D9AD@justtesting.org> Message-ID: Could you share your error messages ? 
On Wednesday, July 27, 2016, Manuel M T Chakravarty wrote: > Has anybody tried compiling GHC with the (beta) macOS 10.12 Sierra SDK? I > just gave it a shot trying to compile GHC 7.8.3 and ran into a problem with > GetTime.c in the RTS. (I haven’t tried GHC 8.0.1.) > > Manuel > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Thu Jul 28 21:28:51 2016 From: allbery.b at gmail.com (Brandon Allbery) Date: Thu, 28 Jul 2016 17:28:51 -0400 Subject: macOS 10.12 In-Reply-To: References: <17953970-4DEA-41D1-BEFA-6BEB0559D9AD@justtesting.org> Message-ID: fwiw I suspect this is http://git.haskell.org/ghc.git/commit/a0f1809742160ca0c07778f91f3e2a8ea147c0a4 On Thu, Jul 28, 2016 at 5:13 PM, Carter Schonwald < carter.schonwald at gmail.com> wrote: > Could you share your error messages ? > > > On Wednesday, July 27, 2016, Manuel M T Chakravarty > wrote: > >> Has anybody tried compiling GHC with the (beta) macOS 10.12 Sierra SDK? I >> just gave it a shot trying to compile GHC 7.8.3 and ran into a problem with >> GetTime.c in the RTS. (I haven’t tried GHC 8.0.1.) >> >> Manuel >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From chak at justtesting.org Fri Jul 29 00:58:41 2016 From: chak at justtesting.org (Manuel M T Chakravarty) Date: Fri, 29 Jul 2016 10:58:41 +1000 Subject: macOS 10.12 In-Reply-To: References: <17953970-4DEA-41D1-BEFA-6BEB0559D9AD@justtesting.org> Message-ID: Yes, it is exactly the issue Brandon references. Thanks. I have got one concern with this fix, though: doesn’t that mean that a GHC *build* on macOS 10.12 will not run on earlier versions of macOS? In the meantime, I found, https://github.com/Homebrew/homebrew-core/issues/1957#issuecomment-226328001 which explains the issue. Now, when you compile on macOS 10.12 with this fix, GHC RTS will use ’clock_gettime’. However, that symbol is not available in the system libraries of earlier version of macOS, which will lead to a dyld failure when trying to run the executable. In fact, given this is in the RTS, any Haskell program compiled with such as build of GHC would be unable to run on macOS versions older than 10.12 (unless I am mistaken). Has this been considered? Manuel > Brandon Allbery : > > fwiw I suspect this is http://git.haskell.org/ghc.git/commit/a0f1809742160ca0c07778f91f3e2a8ea147c0a4 > > On Thu, Jul 28, 2016 at 5:13 PM, Carter Schonwald > wrote: > Could you share your error messages ? > > > On Wednesday, July 27, 2016, Manuel M T Chakravarty > wrote: > Has anybody tried compiling GHC with the (beta) macOS 10.12 Sierra SDK? I just gave it a shot trying to compile GHC 7.8.3 and ran into a problem with GetTime.c in the RTS. (I haven’t tried GHC 8.0.1.) 
From chak at justtesting.org  Fri Jul 29 07:32:13 2016
From: chak at justtesting.org (Manuel M T Chakravarty)
Date: Fri, 29 Jul 2016 17:32:13 +1000
Subject: macOS 10.12
In-Reply-To:
References: <17953970-4DEA-41D1-BEFA-6BEB0559D9AD@justtesting.org>
Message-ID:

BTW, here is an alternative fix (against 7.10.3) that uses the minimum
deployment target to determine whether to use the new ’clock_gettime()’ on
macOS, to avoid the problem I mentioned in my previous message:

  https://github.com/mchakravarty/ghc/commit/da87a6551a528d51dfd4277c4468c57d5039ab59

(So far only tested by compiling against the macOS 10.12 SDK w/ Xcode 8 and
running it on macOS 10.11.)

Manuel

> Manuel M T Chakravarty:
>
> Yes, it is exactly the issue Brandon references. Thanks.
>
> I have got one concern with this fix, though: doesn’t that mean that a GHC
> *build* on macOS 10.12 will not run on earlier versions of macOS?
>
> In the meantime, I found
>
>   https://github.com/Homebrew/homebrew-core/issues/1957#issuecomment-226328001
>
> which explains the issue. When you compile on macOS 10.12 with this fix,
> the GHC RTS will use ’clock_gettime’. However, that symbol is not available
> in the system libraries of earlier versions of macOS, which will lead to a
> dyld failure when trying to run the executable.
>
> In fact, given that this is in the RTS, any Haskell program compiled with
> such a build of GHC would be unable to run on macOS versions older than
> 10.12 (unless I am mistaken). Has this been considered?
>
> Manuel
>
>> Brandon Allbery:
>>
>> fwiw I suspect this is
>> http://git.haskell.org/ghc.git/commit/a0f1809742160ca0c07778f91f3e2a8ea147c0a4
>>
>> On Thu, Jul 28, 2016 at 5:13 PM, Carter Schonwald <carter.schonwald at gmail.com> wrote:
>> Could you share your error messages?
>>
>> On Wednesday, July 27, 2016, Manuel M T Chakravarty wrote:
>> Has anybody tried compiling GHC with the (beta) macOS 10.12 Sierra SDK? I
>> just gave it a shot trying to compile GHC 7.8.3 and ran into a problem
>> with GetTime.c in the RTS. (I haven’t tried GHC 8.0.1.)
>>
>> Manuel
>>
>> _______________________________________________
>> ghc-devs mailing list
>> ghc-devs at haskell.org
>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>>
>> _______________________________________________
>> ghc-devs mailing list
>> ghc-devs at haskell.org
>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>>
>>
>> --
>> brandon s allbery kf8nh                               sine nomine associates
>> allbery.b at gmail.com                                  ballbery at sinenomine.net
>> unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net
>>
>> _______________________________________________
>> ghc-devs mailing list
>> ghc-devs at haskell.org
>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
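To make the shape of Manuel’s alternative concrete: the idea is to choose the
clock at compile time from the minimum deployment target, so that a GHC built
on 10.12 but targeting older systems never references clock_gettime at all.
Below is a minimal sketch of that kind of guard (an illustration of the
approach, not the actual patch; the helper name and the exact macro test are
assumptions):

    /* monotonic_clock.c -- sketch of a deployment-target-keyed clock choice,
     * in the spirit of GetTime.c; not the actual GHC patch. */
    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>
    #if defined(__APPLE__)
    #include <AvailabilityMacros.h>
    #include <mach/mach_time.h>
    #endif

    uint64_t monotonicNSec(void)
    {
    #if defined(__APPLE__) && MAC_OS_X_VERSION_MIN_REQUIRED < 101200
        /* Deployment target predates 10.12: clock_gettime() may not exist at
         * run time, so use the Mach clock, available on every version. */
        static mach_timebase_info_data_t tb;
        if (tb.denom == 0) (void) mach_timebase_info(&tb);
        return mach_absolute_time() * tb.numer / tb.denom;
    #else
        /* Deployment target is 10.12+ (or not Apple at all): CLOCK_MONOTONIC
         * is guaranteed to be present, so use it directly. */
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t) ts.tv_sec * 1000000000ull + (uint64_t) ts.tv_nsec;
    #endif
    }

    int main(void)
    {
        printf("monotonic: %llu ns\n", (unsigned long long) monotonicNSec());
        return 0;
    }

The important property is that the test is on MAC_OS_X_VERSION_MIN_REQUIRED
rather than on what the SDK offers, so building on a 10.12 machine with, say,
-mmacosx-version-min=10.9 still produces a binary that runs there.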