Loading GHC into GHCi (and ghcid)
Bartosz Nitka
niteria at gmail.com
Thu Jun 7 21:48:38 UTC 2018
What version of GHC are you using?
There have been some significant improvements like
https://phabricator.haskell.org/rGHCb8fec6950ad99cbf11cd22698b8d5ab35afb828f,
which only just made it into GHC 8.4.
Some of them may not have made it into a release yet.
You could try building
https://github.com/niteria/ghc/commits/ghc-8.0.2-facebook and see how
well it works for you.
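
For example, a rough sketch of the usual make-based GHC build of that
branch (adjust -j to your core count):

    git clone --recursive https://github.com/niteria/ghc
    cd ghc
    git checkout ghc-8.0.2-facebook
    git submodule update --init
    ./boot && ./configure && make -j8
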
Cheers,
Bartosz
On Thu, 7 Jun 2018 at 23:26, Evan Laforge <qdunkan at gmail.com> wrote:
>
> On Thu, Jun 7, 2018 at 1:47 PM, Simon Marlow <marlowsd at gmail.com> wrote:
> > For loading large amounts of code into GHCi, you want to add -j<n> +RTS
> > -A128m where <n> is the number of cores on your machine. We've found that
> > parallel compilation works really well in GHCi provided you use a nice large
> > allocation area for the GC. This dramatically speeds up working with large
> > numbers of modules in GHCi. (500 is small!)
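
A concrete invocation along those lines might look like this (a sketch
assuming an 8-core machine and a root module Main.hs):

    ghci -j8 +RTS -A128m -RTS Main.hs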
>
> This is a bit of a thread hijack (feel free to change the subject),
> but I also have a workflow that involves loading a lot of modules in
> ghci (500-700). As long as I can coax ghci to load them, things are
> fast and work well, but my impression is that this isn't a common
> workflow, and specifically ghc developers don't do this, because just
> about every ghc release will break it in one way or another (e.g. by
> putting more flags in the recompile check hash), and no one seems to
> understand what I'm talking about when I suggest features to improve
> it (e.g. the recent message about modtime and recompilation avoidance).
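
A load of that shape can be driven from the command line; a rough sketch
(the src/ layout is an assumption, the flags are Simon's from above):

    # hand every module under src/ to ghci in one go
    ghci -isrc -j8 +RTS -A128m -RTS $(find src -name '*.hs')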
>
> Given the uphill battle, I've been thinking that linking most of those
> modules into a package and loading much fewer will be a better
> supported workflow. It's actually less convenient, because now the code
> is divided between the package level (which requires a restart and
> relink when packages change) and the ghci level (which doesn't), but it
> is maybe less likely to be broken by ghc changes. Also, all those
> loaded modules consume a
> huge amount of memory, which I haven't tracked down yet, but maybe
> packages will load more efficiently.
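
A sketch of that split, with hypothetical names ('core' for a library
holding the stable modules, 'app' for the handful still loaded as source):

    cabal new-build core   # compile the bulk of the code into a library
    cabal new-repl app     # ghci then loads only app's own modules,
                           # linking against the compiled core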
>
> But ideally I would prefer to continue to not use packages, and in
> fact use per-module loading more aggressively for larger codebases, because the
> need to restart ghci (or the ghc API-using program) and do a lengthy
> relink every time a module in the "wrong place" changed seems like it
> could get annoying (in fact it already is, for a cabal-oriented
> workflow).
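
For reference, the skeleton of such a GHC-API-using driver looks roughly
like this (a minimal sketch on top of the ghc and ghc-paths packages; the
target name is an assumption):

    import GHC
    import GHC.Paths (libdir)  -- from the ghc-paths package

    -- Start a GHC session and load a target, much as ghci does;
    -- "reloading" is just calling load again after files change.
    main :: IO ()
    main = runGhc (Just libdir) $ do
      dflags <- getSessionDynFlags
      _ <- setSessionDynFlags dflags
      target <- guessTarget "Main.hs" Nothing
      setTargets [target]
      _ <- load LoadAllTargets
      return ()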
>
> Does the workflow at Facebook involve loading tons of individual
> modules as I do? Or do they get packed into packages? If it's the
> many modules, do you have recommendations for making that work well and
> keeping it working? If packages are the way you're "supposed" to do
> things, then is there any idea about how hard it would be to reload
> packages at runtime? If both modules and packages can be reloaded, is
> there an intended conceptual difference between a package and an
> unpackaged collection of modules? To illustrate, I would treat packages
> purely as a way to organize builds and distribution, with no meaning at
> the compiler level, which is how I gather C compilers traditionally
> work (e.g. 'cc a.o b.o c.o' is the same as 'ar rcs abc.a a.o b.o c.o;
> cc abc.a'). But that's clearly not how ghc sees it!
>
>
> thanks!