[Git][ghc/ghc][wip/gc/optimize] 21 commits: rts: Non-concurrent mark and sweep
Ben Gamari
gitlab at gitlab.haskell.org
Wed Jun 19 00:56:11 UTC 2019
Ben Gamari pushed to branch wip/gc/optimize at Glasgow Haskell Compiler / GHC
Commits:
b03ec7ea by Ömer Sinan Ağacan at 2019-06-19T00:52:36Z
rts: Non-concurrent mark and sweep
This implements the core heap structure and a serial mark/sweep
collector which can be used to manage the oldest-generation heap.
This is the first step towards a concurrent mark-and-sweep collector
aimed at low-latency applications.
The full design of the collector implemented here is described in detail
in a technical note
B. Gamari. "A Concurrent Garbage Collector For the Glasgow Haskell
Compiler" (2018)
The basic heap structure used in this design is heavily inspired by
K. Ueno & A. Ohori. "A fully concurrent garbage collector for
functional programs on multicore processors." /ACM SIGPLAN Notices/
Vol. 51, No. 9 (presented at ICFP 2016)
This design is intended to allow both marking and sweeping to proceed
concurrently with the execution of a multi-core mutator. Unlike the Ueno design,
which requires no global synchronization pauses, the collector
introduced here requires a stop-the-world pause at the beginning and end
of the mark phase.
To avoid heap fragmentation, the allocator consists of a number of
fixed-size /sub-allocators/. Each of these sub-allocators allocates into
its own set of /segments/, themselves allocated from the block
allocator. Each segment is broken into a set of fixed-size allocation
blocks (which back allocations) in addition to a bitmap (used to track
the liveness of blocks) and some additional metadata (also used
to track liveness).
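As a rough illustration of this layout, the C sketch below shows how a
segment and a sub-allocator might be laid out; all names, fields, and
sizes here are assumptions for exposition, not the actual RTS
definitions.

    /* Illustrative sketch of the heap structure described above; the
     * names, sizes, and fields are assumptions, not the actual RTS
     * definitions in the nonmoving collector. */
    #include <stdint.h>

    #define SEGMENT_BLOCKS 256            /* assumed blocks per segment */

    struct Segment {
        struct Segment *link;             /* next segment owned by this sub-allocator */
        uint8_t  block_size_log2;         /* every block in a segment has this size   */
        uint16_t next_free_hint;          /* where the allocator resumes its search   */
        uint8_t  bitmap[SEGMENT_BLOCKS];  /* one liveness byte per allocation block   */
        /* the fixed-size allocation blocks themselves follow the header */
    };

    /* Each fixed-size sub-allocator manages segments of one block size. */
    struct SubAllocator {
        struct Segment *current;          /* segment currently allocated into  */
        struct Segment *filled;           /* segments with no free blocks left */
        struct Segment *free;             /* empty segments ready for reuse    */
    };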
This heap structure enables collection via mark-and-sweep, which can be
performed concurrently via a snapshot-at-the-beginning scheme (although
concurrent collection is not implemented in this patch).
The mark queue is a fairly straightforward chunked-array structure.
The representation is a bit more verbose than a typical mark queue to
accommodate a combination of two features (a rough sketch follows the
list below):
* a mark FIFO, which improves the locality of marking, reducing one of
the major overheads seen in mark/sweep collectors (see [1] for
details)
* the selector optimization and indirection shortcutting, which
requires that we track where we found each reference to an object
in case we need to update the reference at a later point (e.g. when
we find that it is an indirection). See Note [Origin references in
the nonmoving collector] (in `NonMovingMark.h`) for details.
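A rough C sketch of this representation follows; the names and the
chunk size are assumptions for exposition, not the actual definitions
in the RTS.

    /* Illustrative sketch of a chunked-array mark queue carrying origin
     * references; names and the chunk size are assumptions rather than
     * the actual definitions in NonMovingMark.h. */
    #include <stddef.h>

    #define MARK_CHUNK_ENTRIES 254        /* assumed entries per chunk */

    typedef struct {
        void  *object;                    /* closure waiting to be marked        */
        void **origin;                    /* where this reference was found, so  */
                                          /* it can be updated later if the      */
                                          /* object turns out to be e.g. an      */
                                          /* indirection                         */
    } MarkEntry;

    typedef struct MarkChunk_ {
        struct MarkChunk_ *next;          /* previously filled chunks     */
        size_t    used;                   /* entries in use in this chunk */
        MarkEntry entries[MARK_CHUNK_ENTRIES];
    } MarkChunk;

    typedef struct {
        MarkChunk *top;                   /* chunk currently being filled */
        /* a small FIFO of recently pushed entries would also live here,
         * giving prefetches time to complete before the entries are
         * actually marked */
    } MarkQueue;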
Beyond this the mark/sweep is fairly run-of-the-mill.
[1] R. Garner, S.M. Blackburn, D. Frampton. "Effective Prefetch for
Mark-Sweep Garbage Collection." ISMM 2007.
Co-Authored-By: Ben Gamari <ben at well-typed.com>
- - - - -
9cd98caa by Ben Gamari at 2019-06-19T00:52:36Z
testsuite: Add nonmoving WAY
This simply runs the compile_and_run tests with `-xn`, enabling the
nonmoving oldest generation.
- - - - -
d475002b by Ben Gamari at 2019-06-19T00:53:58Z
rts: Implement concurrent collection in the nonmoving collector
This extends the non-moving collector to allow concurrent collection.
The full design of the collector implemented here is described in detail
in a technical note
B. Gamari. "A Concurrent Garbage Collector For the Glasgow Haskell
Compiler" (2018)
This extension involves the introduction of a capability-local
remembered set, known as the /update remembered set/, which tracks
objects which may no longer be visible to the collector due to mutation.
To maintain this remembered set we introduce a write barrier on
mutations which is enabled while a concurrent mark is underway.
The update remembered set representation is similar to that of the
nonmoving mark queue, being a chunked array of `MarkEntry`s. Each
`Capability` maintains a single accumulator chunk, which it flushes
when (a) the chunk is filled, or (b) the nonmoving collector enters its
post-mark synchronization phase.
While the write barrier touches a significant amount of code, it is
conceptually straightforward: the mutator must ensure that the referent
of any pointer it overwrites is added to the update remembered set.
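In outline, the barrier amounts to something like the following
self-contained C sketch; every name here is an illustrative assumption,
not the RTS's actual interface.

    /* Minimal sketch of the update remembered set write barrier; all
     * names are assumptions for illustration. */
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct Closure_ Closure;
    typedef struct { Closure **entries; size_t used, cap; } UpdRemSet;
    typedef struct { UpdRemSet upd_rem_set; } Capability;

    /* True only while a concurrent mark is underway. */
    extern bool nonmoving_write_barrier_enabled;

    /* Records a closure in the capability-local remembered set,
     * flushing the accumulator chunk when it fills up. */
    void upd_rem_set_push(UpdRemSet *rs, Closure *c);

    static inline void
    update_pointer(Capability *cap, Closure **slot, Closure *new_value)
    {
        if (nonmoving_write_barrier_enabled) {
            /* Remember the referent we are about to overwrite so the
             * concurrent mark can still reach it (preserving the
             * snapshot-at-the-beginning invariant). */
            upd_rem_set_push(&cap->upd_rem_set, *slot);
        }
        *slot = new_value;
    }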
However, there are a few details:
* In the case of objects with a dirty flag (e.g. `MVar`s) we can
exploit the fact that only the *first* mutation requires a write
barrier.
* Weak references, as usual, complicate things. In particular, we must
ensure that the referent of a weak object is marked if dereferenced by
the mutator. For this we (unfortunately) must introduce a read
barrier, as described in Note [Concurrent read barrier on deRefWeak#]
(in `NonMovingMark.c`).
* Stable names are also a bit tricky as described in Note [Sweeping
stable names in the concurrent collector] (`NonMovingSweep.c`).
We take quite some pains to ensure that the high thread count often seen
in parallel Haskell applications doesn't affect pause times. To this end
we allow thread stacks to be marked either by the thread itself (when it
runs or when its stack underflows) or by the concurrent mark thread (if the
thread owning the stack is never scheduled). There is a non-trivial
handshake to ensure that this happens without racing, which is described
in Note [StgStack dirtiness flags and concurrent marking].
Co-Authored-by: Ömer Sinan Ağacan <omer at well-typed.com>
- - - - -
fd17b200 by Ben Gamari at 2019-06-19T00:53:58Z
Nonmoving: Disable memory inventory with concurrent collection
- - - - -
dfd014a4 by Ben Gamari at 2019-06-19T00:55:36Z
Nonmoving: Allow aging and refactor static objects logic
This commit does two things:
* Allow aging of objects during the preparatory minor GC
* Refactor handling of static objects to avoid the use of a hashtable
- - - - -
ebff426c by Ben Gamari at 2019-06-19T00:55:36Z
Disable aging when doing deadlock detection GC
- - - - -
3dad5792 by Ben Gamari at 2019-06-19T00:55:36Z
More comments for aging
- - - - -
3af63ac7 by Ben Gamari at 2019-06-19T00:56:01Z
NonMoving: Eliminate integer division in nonmovingBlockCount
Perf showed that this single division was capturing up to 10% of samples
in nonmovingMark. However, the overwhelming majority of cases involve
small block sizes. These cases can easily be handled explicitly,
allowing the compiler to turn the division into a significantly more
efficient division-by-constant.
While the increase in source code looks scary, this all optimises down
to very nice looking assembler. At this point the only remaining
hotspots in nonmovingBlockCount are due to memory access.
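The idea is roughly the following; the particular block sizes and the
"one bitmap byte per block" formula are illustrative assumptions, not
the real nonmovingBlockCount.

    /* Sketch of the special-casing trick: handle the common small block
     * sizes explicitly so each division is by a compile-time constant. */
    #include <stdint.h>

    static uint32_t block_count(uint32_t payload_bytes, uint8_t blk_size_log2)
    {
        uint32_t blk_size = 1u << blk_size_log2;
        switch (blk_size) {
        /* Division by a constant is compiled to a multiply-and-shift. */
        case 16:  return payload_bytes / (16 + 1);
        case 32:  return payload_bytes / (32 + 1);
        case 64:  return payload_bytes / (64 + 1);
        case 128: return payload_bytes / (128 + 1);
        /* Rare large block sizes fall back to a true division. */
        default:  return payload_bytes / (blk_size + 1);
        }
    }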
- - - - -
aadf70d0 by Ben Gamari at 2019-06-19T00:56:01Z
Allocate mark queues in larger block groups
- - - - -
57ed3211 by Ben Gamari at 2019-06-19T00:56:01Z
NonMovingMark: Optimize representation of mark queue
This shortens MarkQueueEntry by 30% (one word)
- - - - -
05c68558 by Ben Gamari at 2019-06-19T00:56:01Z
NonMoving: Optimize bitmap search during allocation
Use memchr instead of an open-coded loop. This is nearly twice as fast in
a synthetic benchmark.
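Roughly, assuming a byte-per-block bitmap in which a zero byte marks a
free block (an assumption for illustration; names are not the actual
allocator's):

    /* Sketch of a memchr-based search for the next free block. */
    #include <stdint.h>
    #include <string.h>

    /* Returns the index of the first free block at or after `from`,
     * or -1 if no free block remains in the segment. */
    static int find_free_block(const uint8_t *bitmap, size_t n_blocks, size_t from)
    {
        const uint8_t *p = memchr(bitmap + from, 0, n_blocks - from);
        return p ? (int)(p - bitmap) : -1;
    }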
- - - - -
35ea3341 by Ben Gamari at 2019-06-19T00:56:01Z
rts: Add prefetch macros
- - - - -
92e76eba by Ben Gamari at 2019-06-19T00:56:01Z
NonMoving: Prefetch when clearing bitmaps
Ensure that the bitmap of the segment that we will clear next is in
cache by the time we reach it.
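For illustration, a sketch of the idea using the GCC/Clang prefetch
builtin; the segment layout and names are assumptions, not the RTS's
actual code.

    /* Sketch of prefetching the next segment's bitmap while clearing
     * the current one. */
    #include <string.h>

    #define PREFETCH_FOR_WRITE(p) __builtin_prefetch((p), 1)

    struct Segment {
        struct Segment *link;
        unsigned char   bitmap[512];      /* assumed bitmap size */
    };

    static void clear_all_bitmaps(struct Segment *seg)
    {
        for (; seg != NULL; seg = seg->link) {
            /* Request the next bitmap while we clear this one. */
            if (seg->link != NULL)
                PREFETCH_FOR_WRITE(seg->link->bitmap);
            memset(seg->bitmap, 0, sizeof seg->bitmap);
        }
    }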
- - - - -
f65d3e77 by Ben Gamari at 2019-06-19T00:56:01Z
NonMoving: Inline nonmovingClearAllBitmaps
- - - - -
b117a9e4 by Ben Gamari at 2019-06-19T00:56:01Z
NonMoving: Fuse sweep preparation into mark prep
- - - - -
73a58b2c by Ben Gamari at 2019-06-19T00:56:01Z
NonMoving: Pre-fetch during mark
This improved overall runtime on nofib's constraints test by nearly 10%.
- - - - -
4f7323a1 by Ben Gamari at 2019-06-19T00:56:01Z
NonMoving: Prefetch segment header
- - - - -
d0e4ca99 by Ben Gamari at 2019-06-19T00:56:01Z
NonMoving: Optimise allocator cache behavior
Previously we would look at the segment header to determine the block
size despite the fact that we already had the block size at hand.
- - - - -
b06d9731 by Ben Gamari at 2019-06-19T00:56:01Z
NonMovingMark: Eliminate redundant check_in_nonmoving_heaps
- - - - -
24b3946d by Ben Gamari at 2019-06-19T00:56:02Z
NonMoving: Don't do major GC if one is already running
Previously we would perform a preparatory moving collection, resulting
in many things being added to the mark queue. When we finished with this
we would realize in nonmovingCollect that there was already a collection
running, in which case we would simply not run the nonmoving collector.
However, it was very easy to end up in a "treadmilling" situation: all
subsequent GCs following the first failed major GC would be scheduled as
major GCs. Consequently we would continuously feed the concurrent
collector with more mark queue entries and it would never finish.
This patch aborts the major collection far earlier, meaning that we
avoid adding nonmoving objects to the mark queue and allow the
concurrent collector to finish.
- - - - -
b6e439b4 by Ben Gamari at 2019-06-19T00:56:02Z
Nonmoving: Ensure write barrier vanishes in non-threaded RTS
- - - - -
30 changed files:
- compiler/cmm/CLabel.hs
- compiler/codeGen/StgCmmBind.hs
- compiler/codeGen/StgCmmPrim.hs
- compiler/codeGen/StgCmmUtils.hs
- includes/Cmm.h
- includes/Rts.h
- + includes/rts/NonMoving.h
- includes/rts/storage/Block.h
- includes/rts/storage/ClosureMacros.h
- includes/rts/storage/GC.h
- includes/rts/storage/TSO.h
- includes/stg/MiscClosures.h
- rts/Apply.cmm
- rts/Capability.c
- rts/Capability.h
- rts/Exception.cmm
- rts/Messages.c
- rts/PrimOps.cmm
- rts/RaiseAsync.c
- rts/RtsStartup.c
- rts/RtsSymbols.c
- rts/STM.c
- rts/Schedule.c
- rts/StableName.c
- rts/ThreadPaused.c
- rts/Threads.c
- rts/Updates.h
- rts/Weak.c
- rts/sm/Evac.c
- rts/sm/GC.c
The diff was not included because it is too large.
View it on GitLab: https://gitlab.haskell.org/ghc/ghc/compare/87fb2d0cc1f36b6b9f1c2d4d0053898fd30f0f5f...b6e439b4e32d0ba78cf94a3701b11d5c66bec477