Cmm Memory Model (Understanding #15449)

Travis Whitaker pi.boy.travis at
Thu Nov 29 04:44:17 UTC 2018

Hello GHC Devs,

I'm trying to get my head around ticket #15449. The gist of it is that
GHC generates incorrect AArch64 code that causes memory corruption in
multithreaded programs run on out-of-order machines. User trommler
discovered that similar issues are present on PowerPC, and indeed ARMv7 and
PowerPC permit the same kinds of load/store reordering. The LLVM code
emitted by GHC may be incorrect with respect to LLVM's memory model, but
this isn't a problem on architectures with minimal reordering, like x86.

I had initially thought that GHC simply wasn't emitting the appropriate
LLVM fences; there's an elephant-gun approach here that guards each atomic
operation with a full barrier. I still believe that GHC is omitting
necessary LLVM fences, but this change is insufficient to fix the behavior
of the test case (which is simply GHC itself compiling a test package with
'-jN', N > 1).

It seems there's a long and foggy history to the Cmm memory model. Edward
Yang discusses this a bit in his blog post, and issues similar to #15449
have plagued GHC in the past, like #12469. Worryingly, GHC only has
MO_WriteBarrier, whereas PowerPC and ARMv7 really need read, write, and
full memory barriers. On ARM an instruction memory barrier might be
required as well, but I don't know enough about STG/Cmm to say for sure,
and it'd likely be LLVM's responsibility to emit that anyway.

I'm hoping that someone with more tribal knowledge than I might be able to
give me some pointers with regards to the following areas:

   - Does STG itself have anything like a memory model? My intuition says
   'definitely not', but given that STG expressions may contain Cmm operations
   (via StgCmmPrim), there may be STG-to-STG transformations that need to care
   about the target machine's memory model.
   - With respect to Cmm, what reorderings does GHC perform? What are the
   relevant parts of the compiler to begin studying?
   - Are the LLVM atomics that GHC emits correct with respect to the LLVM
   memory model? As it stands now LLVM fences are only emitted for
   MO_WriteBarrier. Without fences accompanying the atomics, it seems the LLVM
   compiler could float dependent loads/stores past atomic operations.
   - Why is MO_WriteBarrier the only provided memory barrier? My hunch is
   that it's because this is the only sort of barrier required on x86, which
   only allows loads to be reordered with older stores, but perhaps I'm
   missing something? Is it plausible that Cmm simply needs additional barrier
   primitives to target these weaker memory models? Conversely, is there some
   property of Cmm that lets us get away without read barriers at all?

Naturally, if I've got any of this wrong or am otherwise barking up the
wrong tree, please let me know.

Thanks for all your efforts!

Travis Whitaker

More information about the ghc-devs mailing list