[Git][ghc/ghc][ghc-8.10] 13 commits: rts/Sanity: Avoid nasty race in weak pointer sanity-checking
Ben Gamari
gitlab at gitlab.haskell.org
Mon Dec 7 20:34:55 UTC 2020
Ben Gamari pushed to branch ghc-8.10 at Glasgow Haskell Compiler / GHC
Commits:
d7e4813c by Ben Gamari at 2020-12-07T15:10:41+00:00
rts/Sanity: Avoid nasty race in weak pointer sanity-checking
See Note [Racing weak pointer evacuation] for all of the gory details.
- - - - -
ef241371 by GHC GitLab CI at 2020-12-07T15:11:57+00:00
nonmoving: Fix regression from TSAN work
The TSAN rework (specifically aad1f803) introduced a subtle regression
in GC.c, swapping `g0` in place of `gen`. Whoops!
Fixes #18997.
(cherry picked from commit 21c807df67afe1aee7bf4a964a00cc78ef19e00f)
- - - - -
38f2f627 by GHC GitLab CI at 2020-12-07T15:12:10+00:00
rts/Messages: Add missing write barrier in THROWTO message update
After a THROWTO message has been handled, the message closure is
overwritten by a NULL message. We must ensure that the original
closure's pointers continue to be visible to the nonmoving GC.
(cherry picked from commit 6c2faf158fd26fc06b03c9bd11b6d2cf8e8db572)
- - - - -
813c5219 by GHC GitLab CI at 2020-12-07T15:12:51+00:00
nonmoving: Add missing write barrier in shrinkSmallByteArray
(cherry picked from commit 35c22991ae5c22b10ca1a81f0aa888d1939f0b3f)
- - - - -
de7f2203 by GHC GitLab CI at 2020-12-07T15:13:01+00:00
Updates: Don't zero slop until closure has been pushed
Ensure that the free variables have been pushed to the update
remembered set before we zero the slop.
(cherry picked from commit 134f759926bb4163d7ab97e72ce7209ed42f98b9)
- - - - -
2ba6b268 by GHC GitLab CI at 2020-12-07T15:13:15+00:00
OSThreads: Fix error code checking
pthread_join returns its error code and apparently doesn't set errno.
(cherry picked from commit c488ac737e8ca3813fe6db069cbeb7abba00cfb9)
- - - - -
7122ff03 by GHC GitLab CI at 2020-12-07T15:13:31+00:00
nonmoving: Don't join to mark_thread on shutdown
The mark thread is not joinable as we detach from it on creation.
(cherry picked from commit ca1ef0e758a3fb787691529a0f8149e9d10b1d00)
- - - - -
ada68f55 by Ben Gamari at 2020-12-07T15:13:46+00:00
nonmoving: Add reference to Ueno 2016
(cherry picked from commit a3b8375eeb2ce9d2e30f8269f5b489c5bcacc69f)
- - - - -
4f01c8b4 by GHC GitLab CI at 2020-12-07T15:14:20+00:00
nonmoving: Ensure that evacuated large objects are marked
See Note [Non-moving GC: Marking evacuated objects].
(cherry picked from commit b416189e4004506b89f06f147be37e76f4cd507f)
- - - - -
658b7fc9 by Ben Gamari at 2020-12-07T15:14:40+00:00
rts/linker: Align bssSize to page size when mapping symbol extras
We place symbol_extras right after bss. We also need
to ensure that symbol_extras can be mprotect'd independently from the
rest of the image. To ensure this we round up the size of bss to a page
boundary, thus ensuring that symbol_extras is also page-aligned.
(cherry picked from commit 9f40cf6ca9fb24dbc55f7eae43e2b89aa12bf251)
(cherry picked from commit 4b83b6a8f8ac08e81b6e75c47f133e3ed6bdea95)
- - - - -
abab9157 by Ben Gamari at 2020-12-07T15:15:20+00:00
rts: Allocate MBlocks with MEM_TOP_DOWN on Windows
As noted in #18991, we would previously allocate heap in low memory.
Due to this the linker, which typically *needs* low memory, would end up
competing with the heap. In longer builds we end up running out of
low memory entirely, leading to linking failures.
(cherry picked from commit a1a75aa9be2c133dd1372a08eeb6a92c31688df7)
- - - - -
b160abf5 by Ben Gamari at 2020-12-07T15:16:23+00:00
rts/linker: Ensure that .rodata is aligned to 16 bytes
Pulled out of !4310.
(cherry picked from commit be408b86c9125dedd2f83e9701ea9f2e499c8dd4)
- - - - -
e7dbdbfa by GHC GitLab CI at 2020-12-07T15:19:45+00:00
Bump text submodule to 1.2.4.1-rc1
Per request of @phadej.
- - - - -
14 changed files:
- libraries/text
- rts/LinkerInternals.h
- rts/Messages.c
- rts/Messages.h
- rts/PrimOps.cmm
- rts/RaiseAsync.c
- rts/Updates.h
- rts/linker/SymbolExtras.c
- rts/posix/OSThreads.c
- rts/sm/Evac.c
- rts/sm/GC.c
- rts/sm/NonMoving.c
- rts/sm/Sanity.c
- rts/win32/OSMem.c
Changes:
=====================================
libraries/text
=====================================
@@ -1 +1 @@
-Subproject commit e07c14940c25f33fe5b282912d745d3a79dd4ade
+Subproject commit be54b46175db603aafea3e3f19a6a75e87a29828
=====================================
rts/LinkerInternals.h
=====================================
@@ -135,7 +135,7 @@ typedef struct _Segment {
int n_sections;
} Segment;
-#if defined(powerpc_HOST_ARCH) || defined(x86_64_HOST_ARCH)
+#if defined(powerpc_HOST_ARCH) || defined(x86_64_HOST_ARCH) || defined(aarch64_HOST_ARCH)
#define NEED_SYMBOL_EXTRAS 1
#endif
=====================================
rts/Messages.c
=====================================
@@ -100,7 +100,7 @@ loop:
case THROWTO_SUCCESS: {
// this message is done
StgTSO *source = t->source;
- doneWithMsgThrowTo(t);
+ doneWithMsgThrowTo(cap, t);
tryWakeupThread(cap, source);
break;
}
=====================================
rts/Messages.h
=====================================
@@ -23,8 +23,16 @@ void sendMessage (Capability *from_cap, Capability *to_cap, Message *msg);
#include "SMPClosureOps.h"
INLINE_HEADER void
-doneWithMsgThrowTo (MessageThrowTo *m)
+doneWithMsgThrowTo (Capability *cap, MessageThrowTo *m)
{
+ // The message better be locked
+ ASSERT(m->header.info == &stg_WHITEHOLE_info);
+ IF_NONMOVING_WRITE_BARRIER_ENABLED {
+ updateRemembSetPushClosure(cap, (StgClosure *) m->link);
+ updateRemembSetPushClosure(cap, (StgClosure *) m->source);
+ updateRemembSetPushClosure(cap, (StgClosure *) m->target);
+ updateRemembSetPushClosure(cap, (StgClosure *) m->exception);
+ }
OVERWRITING_CLOSURE((StgClosure*)m);
unlockClosure((StgClosure*)m, &stg_MSG_NULL_info);
LDV_RECORD_CREATE(m);
=====================================
rts/PrimOps.cmm
=====================================
@@ -233,6 +233,22 @@ stg_shrinkSmallMutableArrayzh ( gcptr mba, W_ new_size )
OVERWRITING_CLOSURE_OFS(mba, (BYTES_TO_WDS(SIZEOF_StgSmallMutArrPtrs) +
new_size));
+
+ IF_NONMOVING_WRITE_BARRIER_ENABLED {
+ // Ensure that the elements we are about to shrink out of existence
+ // remain visible to the non-moving collector.
+ W_ p, end;
+ p = mba + SIZEOF_StgSmallMutArrPtrs + WDS(new_size);
+ end = mba + SIZEOF_StgSmallMutArrPtrs + WDS(StgSmallMutArrPtrs_ptrs(mba));
+again:
+ ccall updateRemembSetPushClosure_(BaseReg "ptr",
+ W_[p] "ptr");
+ if (p < end) {
+ p = p + SIZEOF_W;
+ goto again;
+ }
+ }
+
StgSmallMutArrPtrs_ptrs(mba) = new_size;
// See the comments in overwritingClosureOfs for an explanation
// of the interaction with LDV profiling.
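The intent of the C-- loop above, restated as a standalone C sketch (the
push callback stands in for the RTS's updateRemembSetPushClosure_; the
types are illustrative, not the RTS API):

#include <stddef.h>

/* Push every pointer in the tail that the shrink is about to cut off,
 * so that the concurrent mark can still reach those objects. */
static void push_shrunk_elements(void **payload, size_t new_size,
                                 size_t old_size, void (*push)(void *))
{
    for (size_t i = new_size; i < old_size; i++) {
        push(payload[i]);
    }
}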
=====================================
rts/RaiseAsync.c
=====================================
@@ -336,7 +336,7 @@ check_target:
}
// nobody else can wake up this TSO after we claim the message
- doneWithMsgThrowTo(m);
+ doneWithMsgThrowTo(cap, m);
raiseAsync(cap, target, msg->exception, false, NULL);
return THROWTO_SUCCESS;
@@ -577,7 +577,7 @@ maybePerformBlockedException (Capability *cap, StgTSO *tso)
throwToSingleThreaded(cap, msg->target, msg->exception);
source = msg->source;
- doneWithMsgThrowTo(msg);
+ doneWithMsgThrowTo(cap, msg);
tryWakeupThread(cap, source);
return 1;
}
@@ -599,7 +599,7 @@ awakenBlockedExceptionQueue (Capability *cap, StgTSO *tso)
i = lockClosure((StgClosure *)msg);
if (i != &stg_MSG_NULL_info) {
source = msg->source;
- doneWithMsgThrowTo(msg);
+ doneWithMsgThrowTo(cap, msg);
tryWakeupThread(cap, source);
} else {
unlockClosure((StgClosure *)msg,i);
@@ -696,7 +696,7 @@ removeFromQueues(Capability *cap, StgTSO *tso)
// ASSERT(m->header.info == &stg_WHITEHOLE_info);
// unlock and revoke it at the same time
- doneWithMsgThrowTo(m);
+ doneWithMsgThrowTo(cap, m);
break;
}
=====================================
rts/Updates.h
=====================================
@@ -49,7 +49,6 @@
W_ bd; \
\
prim_write_barrier; \
- OVERWRITING_CLOSURE(p1); \
bd = Bdescr(p1); \
if (bdescr_gen_no(bd) != 0 :: bits16) { \
IF_NONMOVING_WRITE_BARRIER_ENABLED { \
@@ -60,6 +59,7 @@
} else { \
TICK_UPD_NEW_IND(); \
} \
+ OVERWRITING_CLOSURE(p1); \
StgInd_indirectee(p1) = p2; \
prim_write_barrier; \
SET_INFO(p1, stg_BLACKHOLE_info); \
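A minimal sketch of the ordering this change enforces, using hypothetical
types and helpers rather than the real RTS macros: the free variables must
reach the update remembered set before the slop-zeroing destroys them.

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for a thunk whose payload (free variables)
 * becomes slop once the thunk is updated. */
typedef struct {
    const void *info;        /* info pointer                       */
    void       *indirectee;  /* will point at the update's result  */
    void       *payload[4];  /* free variables, slop after update  */
} Thunk;

extern bool nonmoving_write_barrier_enabled;
extern const void *BLACKHOLE_info;
void push_to_upd_rem_set(void *p);   /* hypothetical write barrier */

void update_thunk(Thunk *t, void *result)
{
    if (nonmoving_write_barrier_enabled) {
        /* 1. push the free variables first... */
        for (size_t i = 0; i < 4; i++) {
            push_to_upd_rem_set(t->payload[i]);
        }
    }
    /* 2. ...and only then zero the slop, which destroys them */
    memset(t->payload, 0, sizeof t->payload);
    /* 3. finally write the indirection and publish the new info
     *    pointer (the real code issues a write barrier here) */
    t->indirectee = result;
    t->info = BLACKHOLE_info;
}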
=====================================
rts/linker/SymbolExtras.c
=====================================
@@ -77,7 +77,9 @@ int ocAllocateExtras(ObjectCode* oc, int count, int first, int bssSize)
/* N.B. We currently can't mark symbol extras as non-executable in this
* case. */
size_t n = roundUpToPage(oc->fileSize);
- bssSize = roundUpToAlign(bssSize, 8);
+ // round bssSize up to the nearest page size since we need to ensure that
+ // symbol_extras is aligned to a page boundary so it can be mprotect'd.
+ bssSize = roundUpToPage(bssSize);
size_t allocated_size = n + bssSize + extras_size;
void *new = mmapForLinker(allocated_size, PROT_READ | PROT_WRITE, MAP_ANONYMOUS, -1, 0);
if (new) {
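A generic equivalent of the roundUpToPage helper used above might look like
this (a sketch assuming POSIX sysconf and a power-of-two page size):

#include <stddef.h>
#include <unistd.h>

/* Round sz up to the next page boundary so that the region can be
 * mprotect'd on its own; mprotect works at page granularity. */
static size_t round_up_to_page(size_t sz)
{
    size_t page = (size_t) sysconf(_SC_PAGESIZE);
    return (sz + page - 1) & ~(page - 1);
}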
=====================================
rts/posix/OSThreads.c
=====================================
@@ -380,8 +380,9 @@ interruptOSThread (OSThreadId id)
void
joinOSThread (OSThreadId id)
{
- if (pthread_join(id, NULL) != 0) {
- sysErrorBelch("joinOSThread: error %d", errno);
+ int ret = pthread_join(id, NULL);
+ if (ret != 0) {
+ sysErrorBelch("joinOSThread: error %d", ret);
}
}
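For reference, pthread functions return their error code directly and leave
errno untouched, so it is the return value that should be decoded. A minimal
standalone sketch:

#include <pthread.h>
#include <stdio.h>
#include <string.h>

/* Report pthread_join failures using the returned error code. */
static void join_and_report(pthread_t tid)
{
    int ret = pthread_join(tid, NULL);
    if (ret != 0) {
        fprintf(stderr, "pthread_join: %s (error %d)\n", strerror(ret), ret);
    }
}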
=====================================
rts/sm/Evac.c
=====================================
@@ -109,6 +109,8 @@ alloc_for_copy (uint32_t size, uint32_t gen_no)
//
// However, if we are in a deadlock detection GC then we disable aging
// so there is no need.
+ //
+ // See Note [Non-moving GC: Marking evacuated objects].
if (major_gc && !deadlock_detect_gc)
markQueuePushClosureGC(&gct->cap->upd_rem_set.queue, (StgClosure *) to);
return to;
@@ -134,7 +136,58 @@ alloc_for_copy (uint32_t size, uint32_t gen_no)
The evacuate() code
-------------------------------------------------------------------------- */
-/* size is in words */
+/*
+ * Note [Non-moving GC: Marking evacuated objects]
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ *
+ * When the non-moving collector is in use we must be careful to ensure that any
+ * references to objects in the non-moving generation from younger generations
+ * are pushed to the mark queue.
+ *
+ * In particular we need to ensure that newly-promoted objects are correctly
+ * marked. For instance, consider this case:
+ *
+ * generation 0 generation 1
+ * ────────────── ──────────────
+ *
+ * ┌───────┐
+ * ┌───────┐ │ A │
+ * │ B │ ◁────────────────────────── │ │
+ * │ │ ──┬─────────────────┐ └───────┘
+ * └───────┘ ┆ after GC │
+ * ┆ │
+ * ┌───────┐ ┆ before GC │ ┌───────┐
+ * │ C │ ◁┄┘ └─────▷ │ C' │
+ * │ │ │ │
+ * └───────┘ └───────┘
+ *
+ *
+ * In this case object C started off in generation 0 and was evacuated into
+ * generation 1 during the preparatory GC. However, the only reference to C'
+ * is from B, which lives in generation 0 (via aging); this reference will
+ * not be visible to the concurrent non-moving collector (which can only
+ * traverse the generation 1 heap). Consequently, upon evacuating C we need to
+ * ensure that C' is added to the update remembered set as we know that it will
+ * continue to be reachable via B (which is assumed to be reachable as it lives
+ * in a younger generation).
+ *
+ * Where this happens depends upon the type of the object (e.g. C'):
+ *
+ * - In the case of "normal" small heap-allocated objects this happens in
+ * alloc_for_copy.
+ * - In the case of compact regions this happens in evacuate_compact.
+ * - In the case of large objects this happens in evacuate_large.
+ *
+ * See also Note [Aging under the non-moving collector] in NonMoving.c.
+ *
+ */
+
+/* size is in words
+
+ We want to *always* inline this: the size of the closure is often
+ statically known, which allows the copy loop to be unrolled.
+
+ */
STATIC_INLINE GNUC_ATTR_HOT void
copy_tag(StgClosure **p, const StgInfoTable *info,
StgClosure *src, uint32_t size, uint32_t gen_no, StgWord tag)
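To illustrate the inlining remark: once a compile-time-constant size is
visible at the call site, a word-copy loop like the one below reduces to
straight-line stores (a standalone sketch, not the RTS copy loop itself):

#include <stdint.h>

/* After inlining with a constant 'size', a compiler can fully unroll
 * this loop into a fixed sequence of loads and stores. */
static inline void copy_words(uintptr_t *to, const uintptr_t *from,
                              uint32_t size)
{
    for (uint32_t i = 0; i < size; i++) {
        to[i] = from[i];
    }
}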
@@ -351,6 +404,9 @@ evacuate_large(StgPtr p)
__atomic_fetch_or(&bd->flags, BF_EVACUATED, __ATOMIC_ACQ_REL);
if (RTS_UNLIKELY(RtsFlags.GcFlags.useNonmoving && new_gen == oldest_gen)) {
__atomic_fetch_or(&bd->flags, BF_NONMOVING, __ATOMIC_ACQ_REL);
+
+ // See Note [Non-moving GC: Marking evacuated objects].
+ markQueuePushClosureGC(&gct->cap->upd_rem_set.queue, (StgClosure *) p);
}
initBdescr(bd, new_gen, new_gen->to);
@@ -505,6 +561,9 @@ evacuate_compact (StgPtr p)
bd->flags |= BF_EVACUATED;
if (RTS_UNLIKELY(RtsFlags.GcFlags.useNonmoving && new_gen == oldest_gen)) {
__atomic_fetch_or(&bd->flags, BF_NONMOVING, __ATOMIC_RELAXED);
+
+ // See Note [Non-moving GC: Marking evacuated objects].
+ markQueuePushClosureGC(&gct->cap->upd_rem_set.queue, (StgClosure *) str);
}
initBdescr(bd, new_gen, new_gen->to);
@@ -690,13 +749,6 @@ loop:
*/
if (flags & BF_LARGE) {
evacuate_large((P_)q);
-
- // We may have evacuated the block to the nonmoving generation. If so
- // we need to make sure it is added to the mark queue since the only
- // reference to it may be from the moving heap.
- if (major_gc && flags & BF_NONMOVING && !deadlock_detect_gc) {
- markQueuePushClosureGC(&gct->cap->upd_rem_set.queue, q);
- }
return;
}
=====================================
rts/sm/GC.c
=====================================
@@ -1689,13 +1689,8 @@ collect_gct_blocks (void)
static void
collect_pinned_object_blocks (void)
{
- generation *gen;
const bool use_nonmoving = RtsFlags.GcFlags.useNonmoving;
- if (use_nonmoving && major_gc) {
- gen = oldest_gen;
- } else {
- gen = g0;
- }
+ generation *const gen = (use_nonmoving && major_gc) ? oldest_gen : g0;
for (uint32_t n = 0; n < n_capabilities; n++) {
bdescr *last = NULL;
@@ -1720,7 +1715,7 @@ collect_pinned_object_blocks (void)
if (gen->large_objects != NULL) {
gen->large_objects->u.back = last;
}
- g0->large_objects = RELAXED_LOAD(&capabilities[n]->pinned_object_blocks);
+ gen->large_objects = RELAXED_LOAD(&capabilities[n]->pinned_object_blocks);
RELAXED_STORE(&capabilities[n]->pinned_object_blocks, NULL);
}
}
=====================================
rts/sm/NonMoving.c
=====================================
@@ -191,8 +191,8 @@ Mutex concurrent_coll_finished_lock;
* === Other references ===
*
* Apart from the design document in docs/storage/nonmoving-gc and the Ueno
- * 2016 paper (TODO citation) from which it drew inspiration, there are a
- * variety of other relevant Notes scattered throughout the tree:
+ * 2016 paper [ueno 2016] from which it drew inspiration, there are a variety
+ * of other relevant Notes scattered throughout the tree:
*
* - Note [Concurrent non-moving collection] (NonMoving.c) describes
* concurrency control of the nonmoving collector
@@ -204,6 +204,10 @@ Mutex concurrent_coll_finished_lock;
* - Note [Aging under the non-moving collector] (NonMoving.c) describes how
* we accommodate aging
*
+ * - Note [Non-moving GC: Marking evacuated objects] (Evac.c) describes how
+ * non-moving objects reached by evacuate() are marked, which is necessary
+ * due to aging.
+ *
* - Note [Large objects in the non-moving collector] (NonMovingMark.c)
* describes how we track large objects.
*
@@ -232,6 +236,11 @@ Mutex concurrent_coll_finished_lock;
* how we use the DIRTY flags associated with MUT_VARs and TVARs to improve
* barrier efficiency.
*
+ * [ueno 2016]:
+ * Katsuhiro Ueno and Atsushi Ohori. 2016. A fully concurrent garbage
+ * collector for functional programs on multicore processors. SIGPLAN Not. 51,
+ * 9 (September 2016), 421–433. DOI:https://doi.org/10.1145/3022670.2951944
+ *
*
* Note [Concurrent non-moving collection]
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -312,6 +321,8 @@ Mutex concurrent_coll_finished_lock;
*
* The non-moving collector will come to C in the mark queue and mark it.
*
+ * The implementation details of this are described in Note [Non-moving GC:
+ * Marking evacuated objects] in Evac.c.
*
* Note [Deadlock detection under the non-moving collector]
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -726,7 +737,6 @@ void nonmovingStop(void)
"waiting for nonmoving collector thread to terminate");
ACQUIRE_LOCK(&concurrent_coll_finished_lock);
waitCondition(&concurrent_coll_finished, &concurrent_coll_finished_lock);
- joinOSThread(mark_thread);
}
#endif
}
=====================================
rts/sm/Sanity.c
=====================================
@@ -224,6 +224,111 @@ checkClosureProfSanity(const StgClosure *p)
}
#endif
+/* Note [Racing weak pointer evacuation]
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ * While debugging a GC crash (#18919) I noticed a spurious crash due to the
+ * end-of-GC sanity check stumbling across a weak pointer with unevacuated key.
+ * This can happen when two GC threads race to evacuate a weak pointer.
+ * Specifically, we start out with a heap with a weak pointer reachable
+ * from both a generation's weak pointer list and some other root-reachable
+ * closure (e.g. a Just constructor):
+ *
+ * O W
+ * ┌──────────┐ ┌──────────┐
+ * Root ────→ │ Just │ ╭───→ │ Weak# │ ←─────── weak_ptr_list
+ * Set ├──────────┤ │ ├──────────┤
+ * │ │ ────╯ │ value │ ─→ ...
+ * └──────────┘ │ key │ ───╮ K
+ * │ ... │ │ ┌──────────┐
+ * └──────────┘ ╰──→ │ ... │
+ * ├──────────┤
+ *
+ * The situation proceeds as follows:
+ *
+ * 1. Thread A initiates a GC, wakes up the GC worker threads, and starts
+ * evacuating roots.
+ * 2. Thread A evacuates a weak pointer object O to location O'.
+ * 3. Thread A fills the block where O' lives and pushes it to its
+ * work-stealing queue.
+ * 4. Thread B steals the O' block and starts scavenging it.
+ * 5. Thread A enters markWeakPtrList.
+ * 6. Thread A starts evacuating W, resulting in Wb'.
+ * 7. Thread B scavenges O', evacuating W, resulting in Wa'.
+ * 8. Threads A and B are now racing to evacuate W. Only one will win the race
+ *    (due to the CAS in copy_tag; see the sketch after this Note). Let the
+ *    winning copy be called W'.
+ * 9. W will be replaced by a forwarding pointer to the winning copy, W'.
+ * 10. Whichever thread loses the race will retry evacuation, see
+ *     that W has already been evacuated, and proceed as usual.
+ * 11. W' will get added to weak_ptr_list by markWeakPtrList.
+ * 12. Eventually W' will be scavenged.
+ * 13. traverseWeakPtrList will see that W' has been scavenged and evacuate
+ *     its key.
+ * 14. However, the copy that lost the race is not on `weak_ptr_list`
+ *     and will therefore never get its `key` field scavenged (since
+ *     `traverseWeakPtrList` will never see it).
+ *
+ * Now the heap looks like:
+ *
+ * O' W (from-space)
+ * ┌──────────┐ ┌──────────┐
+ * Root ────→ │ Just │ │ Fwd-ptr │ ───────────╮
+ * Set ├──────────┤ ├──────────┤ │
+ * │ │ ────╮ │ value │ ─→ ... │
+ * └──────────┘ │ │ key │ ────────────────────────╮
+ * │ │ ... │ │ │
+ * │ └──────────┘ │ │
+ * │ │ │
+ * │ Wa' │ │
+ * │ ┌──────────┐ ╭────╯ │
+ * ╰───→ │ Weak# │ ←─────┤ │
+ * ├──────────┤ ╰─ weak_ptr_list │
+ * │ value │ ─→ ... │
+ * │ key │ ───╮ K' │
+ * │ ... │ │ ┌──────────┐ │
+ * └──────────┘ ╰──→ │ ... │ │
+ * ├──────────┤ │
+ * Wb' │
+ * ┌──────────┐ │
+ * │ Weak# │ │
+ * ├──────────┤ │
+ * │ value │ ─→ ... │
+ * │ key │ ───╮ K (from-space) │
+ * │ ... │ │ ┌──────────┐ │
+ * └──────────┘ ╰──→ │ 0xaaaaa │ ←──╯
+ * ├──────────┤
+ *
+ *
+ * Without sanity checking this is fine; we have introduced a spurious copy of
+ * W, Wb' into the heap but it is unreachable and therefore won't cause any
+ * trouble. However, with sanity checking we may encounter this spurious copy
+ * when walking the heap. Moreover, this copy was never added to weak_ptr_list,
+ * meaning that its key field (along with the other fields marked as
+ * non-pointers) will not get scavenged and will therefore point into
+ * from-space.
+ *
+ * To avoid this, checkClosure skips over the key field when it sees a weak
+ * pointer. Note that all fields of Wb' *other* than the key field should be
+ * valid, so we don't skip the closure entirely.
+ *
+ * We then do additional checking of all closures on the weak_ptr_lists, where
+ * we *do* check `key`.
+ */
+
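To make step 8 of the Note concrete, here is a minimal model of the
forwarding-pointer race using C11 atomics; the types and names are
hypothetical, not the real copy_tag from Evac.c:

#include <stdatomic.h>
#include <stdint.h>

/* A closure header is either an info pointer or, once evacuated, a
 * forwarding pointer to the to-space copy. */
typedef struct {
    _Atomic(uintptr_t) header;
} Closure;

/* Each thread first makes its own to-space copy, then races to install
 * a forwarding pointer; exactly one CAS succeeds. Returns the address
 * of the winning copy. */
uintptr_t install_forwarding(Closure *from, uintptr_t old_info,
                             uintptr_t my_copy)
{
    uintptr_t expected = old_info;
    if (atomic_compare_exchange_strong(&from->header, &expected, my_copy)) {
        return my_copy;   /* we won: our copy is the canonical W' */
    }
    return expected;      /* we lost: 'expected' now holds the winner */
}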
+// Check validity of objects on weak_ptr_list.
+// See Note [Racing weak pointer evacuation].
+static void
+checkGenWeakPtrList( uint32_t g )
+{
+ for (StgWeak *w = generations[g].weak_ptr_list; w != NULL; w = w->link) {
+ ASSERT(LOOKS_LIKE_CLOSURE_PTR(w));
+ ASSERT(w->header.info == &stg_WEAK_info);
+ ASSERT(LOOKS_LIKE_CLOSURE_PTR(w->key));
+ ASSERT(LOOKS_LIKE_CLOSURE_PTR(w->value));
+ ASSERT(LOOKS_LIKE_CLOSURE_PTR(w->finalizer));
+ ASSERT(LOOKS_LIKE_CLOSURE_PTR(w->cfinalizers));
+ }
+}
+
// Returns closure size in words
StgOffset
checkClosure( const StgClosure* p )
@@ -343,12 +448,9 @@ checkClosure( const StgClosure* p )
* representative of the actual layout.
*/
{ StgWeak *w = (StgWeak *)p;
- ASSERT(LOOKS_LIKE_CLOSURE_PTR(w->key));
- ASSERT(LOOKS_LIKE_CLOSURE_PTR(w->value));
- ASSERT(LOOKS_LIKE_CLOSURE_PTR(w->finalizer));
- if (w->link) {
- ASSERT(LOOKS_LIKE_CLOSURE_PTR(w->link));
- }
+ // N.B. Checking most of the fields here is not safe.
+ // See Note [Racing weak pointer evacuation] for why.
+ ASSERT(LOOKS_LIKE_CLOSURE_PTR(w->cfinalizers));
return sizeW_fromITBL(info);
}
@@ -851,6 +953,12 @@ static void checkGeneration (generation *gen,
checkHeapChain(ws->scavd_list);
}
+ // Check weak pointer lists
+ // See Note [Racing weak pointer evacuation].
+ for (uint32_t g = 0; g < RtsFlags.GcFlags.generations; g++) {
+ checkGenWeakPtrList(g);
+ }
+
checkLargeObjects(gen->large_objects);
checkCompactObjects(gen->compact_objects);
}
=====================================
rts/win32/OSMem.c
=====================================
@@ -67,8 +67,11 @@ allocNew(uint32_t n) {
alloc_rec* rec;
rec = (alloc_rec*)stgMallocBytes(sizeof(alloc_rec),"getMBlocks: allocNew");
rec->size = ((W_)n+1)*MBLOCK_SIZE;
+ // N.B. We use MEM_TOP_DOWN here to ensure that we leave the bottom of the
+ // address space available for the linker and libraries, which in general
+ // want to live in low memory. See #18991.
rec->base =
- VirtualAlloc(NULL, rec->size, MEM_RESERVE, PAGE_READWRITE);
+ VirtualAlloc(NULL, rec->size, MEM_RESERVE | MEM_TOP_DOWN, PAGE_READWRITE);
if(rec->base==0) {
stgFree((void*)rec);
rec=0;
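A standalone illustration of the flag's effect (illustrative only, not RTS
code): MEM_TOP_DOWN asks VirtualAlloc for the highest available address,
keeping low memory free for code that needs it.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Reserve 1 MiB from the top of the address space down. */
    void *p = VirtualAlloc(NULL, 1024 * 1024,
                           MEM_RESERVE | MEM_TOP_DOWN, PAGE_READWRITE);
    if (p == NULL) {
        fprintf(stderr, "VirtualAlloc failed: %lu\n", GetLastError());
        return 1;
    }
    printf("reserved at %p\n", p);
    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}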
View it on GitLab: https://gitlab.haskell.org/ghc/ghc/-/compare/391a3f6d3776a069534a91e34e5915a0c8d0391b...e7dbdbfadf897b246a0b7d669bb559d18030c3c6