[GHC] #11501: Building nofib/fibon returns permission denied

GHC ghc-devs at haskell.org
Fri Dec 23 17:00:54 UTC 2016


#11501: Building nofib/fibon returns permission denied
-----------------------------------------+--------------------------------------
        Reporter:  rem                   |                Owner:
            Type:  bug                   |               Status:  new
        Priority:  normal                |            Milestone:
       Component:  NoFib benchmark suite |              Version:  7.10.3
      Resolution:                        |             Keywords:
Operating System:  Linux                 |         Architecture:  x86_64 (amd64)
 Type of failure:  None/Unknown          |            Test Case:
      Blocked By:                        |             Blocking:
 Related Tickets:                        |  Differential Rev(s):
       Wiki Page:                        |
-----------------------------------------+--------------------------------------

Comment (by bgamari):

 I have also used `nofib` a great deal to locate past regressions in
 compiler performance.
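
 For reference, the comparison workflow is essentially the one from the
 nofib README (a minimal sketch; the log file names are arbitrary):

 {{{
 # In the nofib/ directory of a GHC tree: prepare the benchmark
 # dependencies, run the suite, and capture a log per compiler.
 make clean && make boot && make 2>&1 | tee nofib-log-before
 # ...switch to the compiler under test, then rerun...
 make clean && make boot && make 2>&1 | tee nofib-log-after

 # Compare the two runs side by side:
 nofib-analyse/nofib-analyse nofib-log-before nofib-log-after
 }}}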

 Your observations are generally correct. I think it would be great to
 enable more tests by default. However, it's not at all clear to me that we
 always want to use `--make`. The current use of one-shot mode makes it
 significantly easier to narrow down precisely which code GHC has regressed
 on. Moving to `--make` may better reflect `Cabal`'s usage,
 but I think it would make the common case of locating compiler performance
 regressions a bit harder.
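
 Roughly, the distinction is the following (a minimal sketch; `Foo.hs` and
 `Main.hs` are hypothetical modules):

 {{{
 # One-shot mode: each module is compiled by a separate GHC invocation,
 # so a regression can be pinned to a single module's compilation.
 ghc -c Foo.hs
 ghc -c Main.hs
 ghc -o prog Main.o Foo.o

 # --make mode: a single GHC invocation chases dependencies and compiles
 # everything itself, closer to how Cabal drives the compiler.
 ghc --make Main.hs -o prog
 }}}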

 In general I think there is certainly room for a broader performance
 testsuite consisting of a sampling of packages from Hackage. However, I'm
 not sure that `nofib` is that place. Keeping `nofib` a set of standalone
 tests without dependencies makes it a significantly smaller maintenance
 burden than the Cabal-centric approach that you describe. We as a
 community (myself included!) have had a poor record of following through
 with maintaining our performance infrastructure. Consequently I think
 there is value in keeping at least one testsuite which "just works" with
 minimal maintenance.

 I gave a talk at HIW this year describing a very rough tool that I
 developed while tracking down performance regressions in nofib using the
 performance build bot which nomeata stood up to serve
 [[http://perf.haskell.org/|gipeda]]. It essentially provides the nice(?)
 graphs which you describe. Sadly, I've not yet had a chance to give it a
 more permanent, publicly accessible home. In the meantime it can be found
 at http://home.smart-cactus.org/~ben/ghc-perf/. The source is available
 [[https://github.com/bgamari/ghc-perf-import/|here]].

 The general workflow is,
  1. navigate to http://home.smart-cactus.org/~ben/ghc-perf/
  2. stand in awe of my poor front-end design aesthetic
  3. select a test environment in the right-hand pane (e.g. my own build
 bot or nomeata's; the latter has better coverage)
  4. select a set of tests to plot from the left-hand pane; the test list
 can be filtered by substring match using the text field at the top (I
 usually filter by `compile-allocs`), though loading the results may
 sadly take a while
  5. select a change-threshold percentage so that commits exhibiting large
 changes in the selected tests show up in the commit list at the bottom

 This is how I typically find a starting point to dive into performance
 work. There are several types of tests captured,

  * Characterizing GHC's performance
    * `compile-allocs`: the compiler's allocations when compiling each
 module of each test
    * `compile-time`: the compiler's runtime when compiling each module
 of each test
  * Compiled code characteristics
    * `binary-size`: the size of the produced executable
    * `module-size`: the size of each module's object file
  * Characterizing compiled code performance
    * `allocs`: the runtime allocations of each test
    * `gc-time`: the time spent in garbage collection while running the
 test
    * `elapsed-time`: the wall-clock time of the test run
    * `mut-elapsed-time`: the wall-clock time spent in the mutator
    * `mut-time`: the CPU time spent in the mutator
    * `run-time`: the overall CPU time spent in the test
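
 Most of the runtime metrics correspond to fields of the RTS statistics
 summary, so you can inspect the raw numbers for any compiled test
 yourself (a sketch; the binary name is hypothetical, and `+RTS -s` is
 the standard RTS statistics flag):

 {{{
 # Running a compiled test with -s makes the RTS print a statistics
 # summary to stderr on exit.
 ./some-nofib-test +RTS -s -RTS
 #   "bytes allocated in the heap"  ->  allocs
 #   MUT time (and its elapsed)     ->  mut-time, mut-elapsed-time
 #   GC time                        ->  gc-time
 #   Total time (and its elapsed)   ->  run-time, elapsed-time
 }}}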

 Note that in general runtimes are very unstable. I find it most helpful to
 first identify potential regressions by looking at allocations, then look
 at runtime to see which allocation changes actually move real runtime.
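
 For instance, since GHC is itself a Haskell program it accepts RTS flags,
 so a suspected compiler-allocation regression can be spot-checked directly
 (a sketch; the module name and compiler versions are illustrative):

 {{{
 # Compile the same module with two compilers and compare the
 # compiler's own heap allocations as reported by the RTS.
 ghc-7.10.3 -c -O -fforce-recomp Foo.hs +RTS -s -RTS 2>&1 \
     | grep "bytes allocated"
 ghc-8.0.1  -c -O -fforce-recomp Foo.hs +RTS -s -RTS 2>&1 \
     | grep "bytes allocated"
 }}}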

--
Ticket URL: <http://ghc.haskell.org/trac/ghc/ticket/11501#comment:17>
GHC <http://www.haskell.org/ghc/>
The Glasgow Haskell Compiler

