[Git][ghc/ghc][wip/gc/optimize] 19 commits: rts: Implement concurrent collection in the nonmoving collector

Ben Gamari gitlab at gitlab.haskell.org
Wed Jun 19 15:28:50 UTC 2019



Ben Gamari pushed to branch wip/gc/optimize at Glasgow Haskell Compiler / GHC


Commits:
395ab8d5 by Ben Gamari at 2019-06-19T15:23:46Z
rts: Implement concurrent collection in the nonmoving collector

This extends the non-moving collector to allow concurrent collection.

The full design of the collector implemented here is described in detail
in a technical note:

    B. Gamari. "A Concurrent Garbage Collector For the Glasgow Haskell
    Compiler" (2018)

This extension involves the introduction of a capability-local
remembered set, known as the /update remembered set/, which tracks
objects which may no longer be visible to the collector due to mutation.
To maintain this remembered set we introduce a write barrier on
mutations which is enabled while a concurrent mark is underway.

The update remembered set representation is similar to that of the
nonmoving mark queue: a chunked array of `MarkEntry`s. Each
`Capability` maintains a single accumulator chunk, which it flushes
when (a) the chunk fills up, or (b) the nonmoving collector enters its
post-mark synchronization phase.
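
Roughly, the representation looks like the following sketch (the names
UpdRemSetChunk, CHUNK_CAPACITY and the flush logic are illustrative
placeholders, not the RTS's actual identifiers):

    #include <stdlib.h>

    /* Sketch of a capability-local, chunked remembered set. */
    typedef struct StgClosure_ StgClosure;      /* opaque here */

    #define CHUNK_CAPACITY 256                  /* placeholder size */

    typedef struct { StgClosure *origin; } MarkEntry;

    typedef struct UpdRemSetChunk_ {
        struct UpdRemSetChunk_ *next;    /* chunks already handed off */
        unsigned int used;               /* entries filled so far     */
        MarkEntry entries[CHUNK_CAPACITY];
    } UpdRemSetChunk;

    /* Chunks handed off to the collector; in the real RTS they would
     * be appended to the nonmoving mark queue. */
    static UpdRemSetChunk *flushed_chunks = NULL;

    static UpdRemSetChunk *freshChunk(void)
    {
        return calloc(1, sizeof(UpdRemSetChunk));
    }

    /* Capability-local push: the accumulator chunk is handed off when
     * it fills up (and also at post-mark synchronization). */
    static void updRemSetPush(UpdRemSetChunk **acc, StgClosure *p)
    {
        if ((*acc)->used == CHUNK_CAPACITY) {
            (*acc)->next = flushed_chunks;
            flushed_chunks = *acc;
            *acc = freshChunk();
        }
        (*acc)->entries[(*acc)->used++].origin = p;
    }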

While the write barrier touches a significant amount of code, it is
conceptually straightforward: the mutator must ensure that the referent
of any pointer it overwrites is added to the update remembered set (a
sketch of the barrier follows the list below). However, there are a few
details:

 * In the case of objects with a dirty flag (e.g. `MVar`s) we can
   exploit the fact that only the *first* mutation requires a write
   barrier.

 * Weak references, as usual, complicate things. In particular, we must
   ensure that the referent of a weak object is marked if it is
   dereferenced by the mutator. For this we (unfortunately) must
   introduce a read barrier, as described in Note [Concurrent read
   barrier on deRefWeak#] (in `NonMovingMark.c`).

 * Stable names are also a bit tricky as described in Note [Sweeping
   stable names in the concurrent collector] (`NonMovingSweep.c`).
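
The following sketch shows the shape of the barrier and of the
dirty-flag optimisation; all type and function names here are made up
for illustration and simplified relative to the real RTS code:

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct Closure_ Closure;             /* opaque here */

    /* True while a concurrent mark is in progress. */
    extern bool write_barrier_enabled;

    /* Add an object to the capability-local update remembered set
     * (see the chunked representation sketched above). */
    extern void upd_rem_set_push(Closure *p);

    /* The barrier: before overwriting a pointer field, remember its
     * old referent so the concurrent marker can still trace it. */
    static void writeBarrier(Closure **slot, Closure *new_value)
    {
        if (write_barrier_enabled && *slot != NULL) {
            upd_rem_set_push(*slot);
        }
        *slot = new_value;
    }

    /* Objects with a dirty flag (e.g. MVars) only need the barrier on
     * the *first* mutation since the last collection: once dirty they
     * sit on a mutable list and will be traced anyway. */
    typedef struct {
        bool     dirty;
        Closure *value;
    } MVarLike;

    static void mvarWrite(MVarLike *mv, Closure *new_value)
    {
        if (!mv->dirty) {
            if (write_barrier_enabled) {
                upd_rem_set_push(mv->value);      /* old referent */
            }
            mv->dirty = true;
        }
        mv->value = new_value;
    }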

We take some pains to ensure that the high thread counts often seen in
parallel Haskell applications don't affect pause times. To this end we
allow thread stacks to be marked either by the thread itself (when it
runs or its stack underflows) or by the concurrent mark thread (if the
thread owning the stack is never scheduled). There is a non-trivial
handshake to ensure that this happens without racing, which is
described in Note [StgStack dirtiness flags and concurrent marking].
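
As a rough illustration of that handshake (simplified, with made-up
field and function names; the authoritative description is the Note
itself), each stack could carry a mark-epoch field claimed by
compare-and-swap so that exactly one party traces it per cycle:

    #include <stdatomic.h>
    #include <stdbool.h>

    typedef struct {
        _Atomic unsigned int marking;   /* epoch of the mark that has
                                           claimed this stack; 0 if none */
        /* ... stack words ... */
    } Stack;

    extern unsigned int current_mark_epoch;  /* bumped once per mark cycle */
    extern void markStackContents(Stack *s);

    /* Returns true if the caller won the race and marked the stack.
     * At most one of the mutator and the mark thread succeeds per
     * cycle, so the stack is traced exactly once and without racing
     * on its frames. */
    static bool tryMarkStack(Stack *s)
    {
        unsigned int seen = atomic_load(&s->marking);
        if (seen == current_mark_epoch) {
            return false;                  /* already claimed this cycle */
        }
        if (atomic_compare_exchange_strong(&s->marking, &seen,
                                           current_mark_epoch)) {
            markStackContents(s);
            return true;
        }
        return false;                      /* lost the race */
    }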

Co-Authored-by: Ömer Sinan Ağacan <omer at well-typed.com>

- - - - -
59894c90 by Ben Gamari at 2019-06-19T15:23:46Z
Nonmoving: Disable memory inventory with concurrent collection

- - - - -
f9a609b8 by Ben Gamari at 2019-06-19T15:24:16Z
Nonmoving: Allow aging and refactor static objects logic

This commit does two things:

 * Allow aging of objects during the preparatory minor GC
 * Refactor handling of static objects to avoid the use of a hashtable

- - - - -
2fadc5bb by Ben Gamari at 2019-06-19T15:24:16Z
Disable aging when doing deadlock detection GC

- - - - -
bbb0532f by Ben Gamari at 2019-06-19T15:24:16Z
More comments for aging

- - - - -
91b7397a by Ben Gamari at 2019-06-19T15:24:43Z
NonMoving: Eliminate integer division in nonmovingBlockCount

Perf showed that this single division accounted for up to 10% of
samples in nonmovingMark. However, the overwhelming majority of cases
involve small block sizes. These cases we can enumerate explicitly,
allowing the compiler to turn each division into a significantly more
efficient division by a constant.

While the increase in source code looks scary, it all optimises down to
very nice-looking assembly. At this point the only remaining hotspots
in nonmovingBlockCount are due to memory access.
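
For illustration, the trick is essentially the following (segment and
block parameters are placeholders, not the RTS's actual layout):

    #include <stdint.h>

    #define SEGMENT_DATA_BYTES (32 * 1024)   /* placeholder */

    /* Each block of 2^log_size bytes also needs one bitmap byte, so
     * the block count is data_bytes / (block_size + 1): a
     * non-power-of-two divisor, hence a genuine division in the
     * general case. */
    static inline uint32_t blockCount(uint8_t log_size)
    {
        switch (log_size) {
        /* Common small sizes: in each arm both operands are
         * compile-time constants, so the compiler folds the quotient
         * away (or at worst emits a multiply-by-reciprocal), never a
         * div instruction. */
        case 3:  return SEGMENT_DATA_BYTES / ((1u << 3) + 1);
        case 4:  return SEGMENT_DATA_BYTES / ((1u << 4) + 1);
        case 5:  return SEGMENT_DATA_BYTES / ((1u << 5) + 1);
        case 6:  return SEGMENT_DATA_BYTES / ((1u << 6) + 1);
        case 7:  return SEGMENT_DATA_BYTES / ((1u << 7) + 1);
        case 8:  return SEGMENT_DATA_BYTES / ((1u << 8) + 1);
        default:
            /* Rare large blocks fall back to the runtime division. */
            return SEGMENT_DATA_BYTES / ((1u << log_size) + 1);
        }
    }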

- - - - -
69255fd3 by Ben Gamari at 2019-06-19T15:24:43Z
Allocate mark queues in larger block groups

- - - - -
0fa4699e by Ben Gamari at 2019-06-19T15:24:43Z
NonMovingMark: Optimize representation of mark queue

This shortens MarkQueueEntry by 30% (one word).

- - - - -
ce01e27c by Ben Gamari at 2019-06-19T15:24:43Z
NonMoving: Optimize bitmap search during allocation

Use memchr instead of an open-coded loop. This is nearly twice as fast
in a synthetic benchmark.
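
The idea, sketched with placeholder names (one bitmap byte per block,
zero meaning free):

    #include <string.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Find the index of the first free block at or after 'start', or
     * -1 if the segment is full.  memchr scans for the zero byte with
     * libc's tuned (typically vectorised) loop, which is considerably
     * faster than an open-coded byte-at-a-time search. */
    static int findFreeBlock(const uint8_t *bitmap, size_t start,
                             size_t n_blocks)
    {
        const uint8_t *p = memchr(bitmap + start, 0, n_blocks - start);
        if (p == NULL) {
            return -1;
        }
        return (int)(p - bitmap);
    }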

- - - - -
0f1b2545 by Ben Gamari at 2019-06-19T15:24:43Z
rts: Add prefetch macros

- - - - -
068c0031 by Ben Gamari at 2019-06-19T15:24:43Z
NonMoving: Prefetch when clearing bitmaps

Ensure that the bitmap of the segment that we will clear next is in
cache by the time we reach it.
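
Schematically (segment layout and sizes below are placeholders; the
prefetch builtin is GCC/Clang's __builtin_prefetch):

    #include <string.h>
    #include <stdint.h>

    #define BITMAP_BYTES 512                 /* placeholder size */

    typedef struct Segment_ {
        struct Segment_ *next;
        uint8_t bitmap[BITMAP_BYTES];
    } Segment;

    static void clearAllBitmaps(Segment *seg)
    {
        while (seg != NULL) {
            if (seg->next != NULL) {
                /* Hint that we will soon write the next bitmap, so it
                 * is in cache when the memset below reaches it. */
                __builtin_prefetch(seg->next->bitmap, 1 /* write */);
            }
            memset(seg->bitmap, 0, BITMAP_BYTES);
            seg = seg->next;
        }
    }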

- - - - -
6d0a1ba1 by Ben Gamari at 2019-06-19T15:24:43Z
NonMoving: Inline nonmovingClearAllBitmaps

- - - - -
681ee157 by Ben Gamari at 2019-06-19T15:24:43Z
NonMoving: Fuse sweep preparation into mark prep

- - - - -
0a2681ce by Ben Gamari at 2019-06-19T15:24:43Z
NonMoving: Pre-fetch during mark

This improved overall runtime on nofib's constraints test by nearly 10%.

- - - - -
3e43b8ef by Ben Gamari at 2019-06-19T15:24:43Z
NonMoving: Prefetch segment header

- - - - -
67215485 by Ben Gamari at 2019-06-19T15:24:43Z
NonMoving: Optimise allocator cache behavior

Previously we would look at the segment header to determine the block
size despite the fact that we already had the block size at hand.

- - - - -
a1a4c13a by Ben Gamari at 2019-06-19T15:24:43Z
NonMovingMark: Eliminate redundant check_in_nonmoving_heaps

- - - - -
c7ecb8a1 by Ben Gamari at 2019-06-19T15:24:44Z
NonMoving: Don't do major GC if one is already running

Previously we would perform a preparatory moving collection, resulting
in many things being added to the mark queue. When we finished with
this, we would realize in nonmovingCollect that there was already a
collection running, in which case we would simply not run the nonmoving
collector.

However, it was very easy to end up in a "treadmilling" situation: all
GCs following the first failed major GC would be scheduled as major
GCs. Consequently we would continuously feed the concurrent collector
with more mark queue entries and it would never finish.

This patch aborts the major collection far earlier, meaning that we
avoid adding nonmoving objects to the mark queue and allow the
concurrent collector to finish.
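
In outline (function and flag names below are placeholders, not the
real scheduler code):

    #include <stdbool.h>

    /* True while the concurrent nonmoving collection is running. */
    extern bool concurrent_collection_running;

    extern void preparatoryMovingGC(bool major);
    extern void startConcurrentMark(void);

    static void doGC(bool want_major)
    {
        /* Decide *before* the preparatory collection whether a major
         * GC can actually proceed; previously this was only discovered
         * afterwards, once work had already been pushed onto the mark
         * queue. */
        bool major = want_major && !concurrent_collection_running;

        preparatoryMovingGC(major);
        if (major) {
            startConcurrentMark();
        }
    }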

- - - - -
b1c0e777 by Ben Gamari at 2019-06-19T15:24:44Z
Nonmoving: Ensure write barrier vanishes in non-threaded RTS

- - - - -


30 changed files:

- compiler/cmm/CLabel.hs
- compiler/codeGen/StgCmmBind.hs
- compiler/codeGen/StgCmmPrim.hs
- compiler/codeGen/StgCmmUtils.hs
- includes/Cmm.h
- includes/Rts.h
- + includes/rts/NonMoving.h
- includes/rts/storage/ClosureMacros.h
- includes/rts/storage/GC.h
- includes/rts/storage/TSO.h
- includes/stg/MiscClosures.h
- rts/Apply.cmm
- rts/Capability.c
- rts/Capability.h
- rts/Exception.cmm
- rts/Messages.c
- rts/PrimOps.cmm
- rts/RaiseAsync.c
- rts/RtsStartup.c
- rts/RtsSymbols.c
- rts/STM.c
- rts/Schedule.c
- rts/StableName.c
- rts/ThreadPaused.c
- rts/Threads.c
- rts/Updates.h
- rts/sm/Evac.c
- rts/sm/GC.c
- rts/sm/GC.h
- rts/sm/GCAux.c


The diff was not included because it is too large.


View it on GitLab: https://gitlab.haskell.org/ghc/ghc/compare/a5cd845b0e7aaa67ef7321846e07bdfc4e266206...b1c0e77701efa7507dd5f4179cfa9d1edc78c71a
