[Git][ghc/ghc][wip/T24052] 3 commits: testsuite: Run some tests unconditionally in nonmoving ways

Ben Gamari (@bgamari) gitlab at gitlab.haskell.org
Tue Oct 3 16:23:16 UTC 2023



Ben Gamari pushed to branch wip/T24052 at Glasgow Haskell Compiler / GHC


Commits:
14b1e783 by Ben Gamari at 2023-10-03T12:22:32-04:00
testsuite: Run some tests unconditionally in nonmoving ways

Improve test coverage of the nonmoving collector by running a small
subset of tests unconditionally with the nonmoving collector.

Fixes #24052.

- - - - -
95133e41 by Ben Gamari at 2023-10-03T12:23:09-04:00
testsuite: Fix invalid string literal

`"\+"` contains an invalid escape sequence, which newer Python versions warn
about and will eventually reject; it should have been the raw string `r"\+"`.
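
A minimal sketch (not part of the patch) of what the raw-string fix is about:

    # Both literals denote the two characters '\' and '+': Python leaves an
    # unrecognised escape sequence in place, but warns about it at compile time
    # and is expected to reject it eventually, so the raw string is the safer
    # spelling for regular expressions.
    assert "\+" == r"\+" == "\\+"

    import re
    assert re.fullmatch(r"\+", "+") is not None  # escaped '+' matches a literal plus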

- - - - -
04e32ed6 by Ben Gamari at 2023-10-03T12:23:09-04:00
rts/nonmoving: Reflow notes

- - - - -


3 changed files:

- rts/sm/NonMoving.c
- testsuite/driver/testlib.py
- testsuite/tests/concurrent/should_run/all.T


Changes:

=====================================
rts/sm/NonMoving.c
=====================================
@@ -151,9 +151,9 @@ static void nonmovingBumpEpoch(void) {
  *  3. [STW] Root collection: Here we walk over a variety of root sources
  *     and add them to the mark queue (see nonmovingCollect).
  *
- *  4. [CONC] Concurrent marking: Here we do the majority of marking concurrently
- *     with mutator execution (but with the write barrier enabled; see
- *     Note [Update remembered set]).
+ *  4. [CONC] Concurrent marking: Here we do the majority of marking
+ *     concurrently with mutator execution (but with the write barrier enabled;
+ *     see Note [Update remembered set]).
  *
  *  5. [STW] Final sync: Here we interrupt the mutators, ask them to
  *     flush their final update remembered sets, and mark any new references
@@ -218,9 +218,9 @@ static void nonmovingBumpEpoch(void) {
  *  - Note [Concurrent read barrier on deRefWeak#] (NonMovingMark.c) describes
  *    the read barrier on Weak# objects.
  *
- *  - Note [Unintentional marking in resurrectThreads] (NonMovingMark.c) describes
- *    a tricky interaction between the update remembered set flush and weak
- *    finalization.
+ *  - Note [Unintentional marking in resurrectThreads] (NonMovingMark.c)
+ *    describes a tricky interaction between the update remembered set flush and
+ *    weak finalization.
  *
  *  - Note [Origin references in the nonmoving collector] (NonMovingMark.h)
  *    describes how we implement indirection short-cutting and the selector
@@ -229,8 +229,8 @@ static void nonmovingBumpEpoch(void) {
  *  - Note [StgStack dirtiness flags and concurrent marking] (TSO.h) describes
  *    the protocol for concurrent marking of stacks.
  *
- *  - Note [Nonmoving write barrier in Perform{Put,Take}] (PrimOps.cmm) describes
- *    a tricky barrier necessary when resuming threads blocked on MVar
+ *  - Note [Nonmoving write barrier in Perform{Put,Take}] (PrimOps.cmm)
+ *    describes a tricky barrier necessary when resuming threads blocked on MVar
  *    operations.
  *
  *  - Note [Static objects under the nonmoving collector] (Storage.c) describes
@@ -240,13 +240,17 @@ static void nonmovingBumpEpoch(void) {
  *    how we use the DIRTY flags associated with MUT_VARs and TVARs to improve
  *    barrier efficiency.
  *
- *  - Note [Weak pointer processing and the non-moving GC] (MarkWeak.c) describes
- *    how weak pointers are handled when the non-moving GC is in use.
+ *  - Note [Weak pointer processing and the non-moving GC] (MarkWeak.c)
+ *    describes how weak pointers are handled when the non-moving GC is in use.
  *
  *  - Note [Sync phase marking budget] describes how we avoid long mutator
  *    pauses during the sync phase
  *
- *  - Note [Allocator sizes] goes into detail about our choice of allocator sizes.
+ *  - Note [Allocator sizes] goes into detail about our choice of allocator
+ *    sizes.
+ *
+ *  - Note [Testing the nonmoving collector] describes how we test the
+ *    collector.
  *
  * [ueno 2016]:
  *   Katsuhiro Ueno and Atsushi Ohori. 2016. A fully concurrent garbage
@@ -259,15 +263,15 @@ static void nonmovingBumpEpoch(void) {
  * Concurrency-control of non-moving garbage collection is a bit tricky. There
  * are a few things to keep in mind:
  *
- *  - Only one non-moving collection may be active at a time. This is enforced by the
- *    concurrent_coll_running flag, which is set when a collection is on-going. If
- *    we attempt to initiate a new collection while this is set we wait on the
- *    concurrent_coll_finished condition variable, which signals when the
- *    active collection finishes.
+ *  - Only one non-moving collection may be active at a time. This is enforced
+ *    by the concurrent_coll_running flag, which is set when a collection is
+ *    ongoing. If we attempt to initiate a new collection while this is set, we
+ *    wait on the concurrent_coll_finished condition variable, which signals
+ *    when the active collection finishes.
  *
- *  - In between the mark and sweep phases the non-moving collector must synchronize
- *    with mutator threads to collect and mark their final update remembered
- *    sets. This is accomplished using
+ *  - In between the mark and sweep phases the non-moving collector must
+ *    synchronize with mutator threads to collect and mark their final update
+ *    remembered sets. This is accomplished using
 *    stopAllCapabilitiesWith(SYNC_FLUSH_UPD_REM_SET). Capabilities are held
 *    until the final mark has concluded.
  *
@@ -361,9 +365,9 @@ static void nonmovingBumpEpoch(void) {
  *        ╰─────────────────╯
  *                    ┆
  *
- * In this case we have a TSO blocked on a dead MVar. Because the MVAR_TSO_QUEUE on
- * which it is blocked lives in the moving heap, the TSO is necessarily on the
- * oldest generation's mut_list. As in Note [Aging under the non-moving
+ * In this case we have a TSO blocked on a dead MVar. Because the MVAR_TSO_QUEUE
+ * on which it is blocked lives in the moving heap, the TSO is necessarily on
+ * the oldest generation's mut_list. As in Note [Aging under the non-moving
  * collector], the MVAR_TSO_QUEUE will be evacuated. If MVAR_TSO_QUEUE is aged
  * (e.g. evacuated to the young generation) then the MVAR will be added to the
  * mark queue. Consequently, we will falsely conclude that the MVAR is still
@@ -387,9 +391,9 @@ static void nonmovingBumpEpoch(void) {
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  * The nonmoving collector uses an approximate heuristic for reporting live
  * data quantity. Specifically, during mark we record how much live data we
- * find in nonmoving_segment_live_words. At the end of mark this is combined with nonmoving_large_words
- * and nonmoving_compact_words, and we declare this amount to
- * be how much live data we have on in the nonmoving heap (by setting
+ * find in nonmoving_segment_live_words. At the end of mark this is combined
+ * with nonmoving_large_words and nonmoving_compact_words, and we declare this
+ * amount to be how much live data we have in the nonmoving heap (by setting
  * oldest_gen->live_estimate).
  *
  * In addition, we update oldest_gen->live_estimate every time we fill a
@@ -413,10 +417,10 @@ static void nonmovingBumpEpoch(void) {
  *  - Minor collections assume that all sparks living in the non-moving heap
  *    are reachable.
  *
- *  - Major collections prune the spark queue during the final sync. This pruning
- *    assumes that all sparks in the young generations are reachable (since the
- *    BF_EVACUATED flag won't be set on the nursery blocks) and will consequently
- *    only prune dead sparks living in the non-moving heap.
+ *  - Major collections prune the spark queue during the final sync. This
+ *    pruning assumes that all sparks in the young generations are reachable
+ *    (since the BF_EVACUATED flag won't be set on the nursery blocks) and will
+ *    consequently only prune dead sparks living in the non-moving heap.
  *
  *
  * Note [Dirty flags in the non-moving collector]
@@ -439,8 +443,8 @@ static void nonmovingBumpEpoch(void) {
  * In the non-moving collector we use the same dirty flag to implement a
  * related optimisation on the non-moving write barrier: Specifically, the
  * snapshot invariant only requires that the non-moving write barrier applies
- * to the *first* mutation to an object after collection begins. To achieve this,
- * we impose the following invariant:
+ * to the *first* mutation to an object after collection begins. To achieve
+ * this, we impose the following invariant:
  *
  *     An object being marked as dirty implies that all of its fields are on
  *     the mark queue (or, equivalently, update remembered set).
@@ -492,8 +496,8 @@ static void nonmovingBumpEpoch(void) {
  *                        ┊
  *
  * This is bad. When we resume mutation a mutator may mutate MVAR A; since it's
- * already dirty we would fail to add Y to the update remembered set, breaking the
- * snapshot invariant and potentially losing track of the liveness of Z.
+ * already dirty we would fail to add Y to the update remembered set, breaking
+ * the snapshot invariant and potentially losing track of the liveness of Z.
  *
 * To avoid this, nonmovingScavengeOne eagerly pushes the values of the
  * fields of all objects which it fails to evacuate (e.g. MVAR A) to the update
@@ -535,8 +539,9 @@ static void nonmovingBumpEpoch(void) {
  * Note [Allocator sizes]
  * ~~~~~~~~~~~~~~~~~~~~~~
  * Our choice of allocator sizes has to balance several considerations:
- * - Allocator sizes should be available for the most commonly request block sizes,
- *   in order to avoid excessive waste from rounding up to the next size (internal fragmentation).
+ * - Allocator sizes should be available for the most commonly requested block
+ *   sizes, in order to avoid excessive waste from rounding up to the next size
+ *   (internal fragmentation).
  * - It should be possible to efficiently determine which allocator services
  *   a certain block size.
 * - The number of allocators should be kept down to avoid overheads
@@ -548,18 +553,37 @@ static void nonmovingBumpEpoch(void) {
  *   arbitrary allocator sizes, we need to do some precomputation and make
  *   use of the integer division by constants optimisation.
  *
- * We currenlty try to balance these considerations by adopting the following scheme.
- * We have nonmoving_alloca_dense_cnt "dense" allocators starting with size
- * NONMOVING_ALLOCA0, and incrementing by NONMOVING_ALLOCA_DENSE_INCREMENT.
+ * We currently try to balance these considerations by adopting the following
+ * scheme. We have nonmoving_alloca_dense_cnt "dense" allocators starting with
+ * size NONMOVING_ALLOCA0, and incrementing by NONMOVING_ALLOCA_DENSE_INCREMENT.
  * These service the vast majority of allocations.
  * In practice, Haskell programs tend to allocate a lot of small objects.
  *
- * Other allocations are handled by a family of "sparse" allocators, each providing
- * blocks up to a power of 2. This places an upper bound on the waste at half the
- * required block size.
+ * Other allocations are handled by a family of "sparse" allocators, each
+ * providing blocks up to a power of 2. This places an upper bound on the waste
+ * at half the required block size.
  *
  * See #23340
  *
+ *
+ * Note [Testing the nonmoving collector]
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ * The testsuite has four "ways" which test the nonmoving collector:
+ *
+ *  - nonmoving: runs tests under the nonmoving collector running in
+ *    non-concurrent mode (i.e. using the non-threaded runtime)
+ *  - nonmoving_thr: runs tests under the collector running in concurrent
+ *    mode (with the threaded runtime)
+ *  - nonmoving_thr_sanity: runs tests with concurrent collection and
+ *    sanity checking (i.e. `+RTS -DS`)
+ *  - nonmoving_thr_ghc: compiles tests with `ghc +RTS --nonmoving-gc -RTS`
+ *    as GHC itself tends to be a good smoke test of the collector.
+ *
+ * To avoid blowing up validation times, we do not run any of these ways in the
+ * default "normal" test speed. To ensure that we catch regressions during
+ * normal validation, we do run a small number of tests in these ways. These
+ * tests are identified by the `nonmoving_test` test modifier.
+ *
  */
 
 memcount nonmoving_segment_live_words = 0;
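
As an aside on the reflowed Note [Allocator sizes] above, here is a tiny Python
sketch (illustrative only; the real allocators are implemented in C inside the
RTS) of the power-of-two sizing used by the "sparse" allocators and the
resulting bound on waste:

    def sparse_block_size(words):
        # Hypothetical helper: round a request up to the next power of two,
        # the way the "sparse" allocators described above size their blocks.
        size = 1
        while size < words:
            size *= 2
        return size

    # Worst case: a request just over a power of two wastes a little under
    # half of the block that services it.
    req = 65
    blk = sparse_block_size(req)   # 128
    waste = blk - req              # 63, just under blk / 2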


=====================================
testsuite/driver/testlib.py
=====================================
@@ -149,6 +149,10 @@ def js_broken( bug: IssueNumber ):
     else:
         return normal;
 
+def nonmoving_test( name, opts ):
+    """ See Note [Testing the nonmoving collector] in rts/sm/NonMoving.c """
+    return extra_ways(['nonmoving', 'nonmoving_thr_sanity', 'nonmoving_thr_ghc'])(name, opts)
+
 def expect_fail( name, opts ):
     # The compiler, testdriver, OS or platform is missing a certain
     # feature, and we don't plan to or can't fix it now or in the
@@ -1023,8 +1027,8 @@ def normalise_win32_io_errors(name, opts):
 def normalise_version_( *pkgs ):
     def normalise_version__( str ):
         # (name)(-version)(-hash)(-components)
-        return re.sub('(' + '|'.join(map(re.escape,pkgs)) + ')-[0-9.]+(-[0-9a-zA-Z\+]+)?(-[0-9a-zA-Z]+)?',
-                      '\\1-<VERSION>-<HASH>', str)
+        return re.sub('(' + '|'.join(map(re.escape,pkgs)) + r')-[0-9.]+(-[0-9a-zA-Z\+]+)?(-[0-9a-zA-Z]+)?',
+                      r'\1-<VERSION>-<HASH>', str)
     return normalise_version__
 
 def normalise_version( *pkgs ):
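
For context, the corrected substitution now behaves as follows; the package
name and hash below are made up for illustration:

    import re
    # The same pattern testlib.py builds for pkgs=('base',), with the raw-string fix:
    pattern = '(' + '|'.join(map(re.escape, ('base',))) + r')-[0-9.]+(-[0-9a-zA-Z\+]+)?(-[0-9a-zA-Z]+)?'
    print(re.sub(pattern, r'\1-<VERSION>-<HASH>', 'base-4.18.0.0-abcd1234'))
    # prints: base-<VERSION>-<HASH>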


=====================================
testsuite/tests/concurrent/should_run/all.T
=====================================
@@ -1,3 +1,5 @@
+setTestOpts(nonmoving_test)
+
 # -----------------------------------------------------------------------------
 # These tests we do even for 'make fast'
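
For reference, `setTestOpts(nonmoving_test)` at the top of the file applies the
modifier to every test in all.T. Assuming the usual testsuite modifier
conventions, the same modifier could also be attached to an individual test;
the test name below is hypothetical:

    # Run one specific test in the extra nonmoving ways as well:
    test('someConcurrentTest', [nonmoving_test], compile_and_run, [''])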
 



View it on GitLab: https://gitlab.haskell.org/ghc/ghc/-/compare/5cc3de7ba741055eec293cf790b68de52631636e...04e32ed6bc1c4c1dbe5b3dcf4ac6294d2a836dd9
