[Haskell-cafe] exceeding resources with GHC compiling

Ben Franksen ben.franksen at online.de
Tue Apr 28 12:25:12 UTC 2020


On 28.04.20 at 14:12, Henning Thielemann wrote:
>> Today I ran again into a problem I had several times before: compiling
>> Cabal-3.2.* (the library) with ghc-8.2.2 and cabal with default options
>> (including jobs: $ncpu, though it actually used only one CPU) eats all
>> the memory on my machine (8GB, but I also had Tor Browser, another
>> browser, and Thunderbird running), so that it completely freezes (no
>> mouse, no keyboard). I had to reboot using the sysrq escape hatch. Not
>> funny. I think this is due to the use of ghc --make and some very large
>> modules. Thankfully memory use has improved with later GHC versions.
> 
> That's why I never use 'jobs: $ncpu' and also oppose using it as the
> default setting.

Yeah, right. On the other hand, I had a job running that compiles and
runs all the tests for darcs with 5 different GHC versions. In that
case you want all the cores running at full speed to get results
before sunset! (Darcs itself is not a small project and takes quite a
while to compile with optimizations. Also, I did not expect cabal-3.2
to be re-built for ghc-8.2; that was a mistake I made in the cabal
file. If I had expected it, I would have closely monitored memory use
so I could kill the build in time.)
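For what it's worth, capping both the number of parallel jobs and the
heap of each GHC invocation would at least make such a build fail
instead of freezing the machine. A minimal sketch for a
cabal.project.local (the 4g limit is just an illustrative value, not
something from this thread):

    -- cabal.project.local
    jobs: 2

    package *
      -- GHC accepts RTS options itself; with a heap cap a runaway
      -- compile aborts with "heap exhausted" instead of driving the
      -- machine into swap.
      ghc-options: +RTS -M4g -RTS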

> [1] There are some modules in packages that frequently
> eat up all my resources; e.g. I know that "Cabal the library" is
> such a package. I remember that I can save memory by aborting
> compilation and restarting it. It seems that GHC may cache too much.
> But continuing an aborted compilation is not possible for imported
> packages when using 'cabal install'. Other packages contain big
> modules automatically created by Template Haskell.

Yes. In the past I also had difficulty with vector and aeson.
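To illustrate the Template Haskell point (a toy sketch, not code from
any of these packages): every deriveJSON splice below expands into
full ToJSON/FromJSON instances at compile time, so a module with
dozens of such record types turns into a large amount of generated
code for GHC to chew on.

    {-# LANGUAGE TemplateHaskell #-}
    module Example where

    import Data.Aeson.TH (defaultOptions, deriveJSON)

    data Config = Config { host :: String, port :: Int }
    data User   = User   { name :: String, age  :: Int }

    -- Each splice generates complete ToJSON/FromJSON instances;
    -- with many such types per module, the generated code (and
    -- GHC's memory use while compiling it) grows accordingly.
    $(deriveJSON defaultOptions ''Config)
    $(deriveJSON defaultOptions ''User)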

Cheers
Ben


