Design discussion for atomic primops to land in 7.8

Ryan Newton rrnewton at
Thu Aug 22 19:52:40 CEST 2013

Well, what's the long-term plan?  Is the LLVM backend going to become the
only backend at some point?

On Thu, Aug 22, 2013 at 1:43 PM, Carter Schonwald <
carter.schonwald at> wrote:

> Hey Ryan,
> You raise some very good points.
> The most important point you raise (I think) is this:
> it would be very nice (where feasible) to add analogous machinery
> to the native code gen, so that it's not falling behind the LLVM one quite
> as much.
> At least for these atomic operations (unlike the SIMD ones),
> it may be worth investigating what's needed to add them to the native code
> gen as well.
> (Adding SIMD support to the native codegen would be nice too, but probably
> *substantially* more work.)
> On Thu, Aug 22, 2013 at 11:40 AM, Ryan Newton <rrnewton at> wrote:
>> There's a ticket that describes the design here:
>> It is a fairly simple extension of the casMutVar# that has been in since
>> 7.2.  The implementation is on the `atomics` branch currently.
>> Feel free to add your views either here or on that task's comments.
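[As a sketch of the semantics being extended here: `casMutVar#` is a
compare-and-swap on a `MutVar#`.  The hypothetical `casIORef` helper below
models that behaviour on an `IORef` using `atomicModifyIORef'`; note the
real primop compares by pointer equality rather than `Eq`, so this is an
illustration of the semantics, not the primop itself.]

```haskell
import Data.IORef (IORef, newIORef, atomicModifyIORef')

-- Hypothetical helper modelling casMutVar#-style compare-and-swap:
-- swap in the new value only if the current contents match the
-- expected value, returning a success flag and the value seen.
-- (The real primop uses pointer equality, not an Eq instance.)
casIORef :: Eq a => IORef a -> a -> a -> IO (Bool, a)
casIORef ref expected new = atomicModifyIORef' ref $ \cur ->
  if cur == expected then (new, (True, cur)) else (cur, (False, cur))

main :: IO ()
main = do
  ref <- newIORef (0 :: Int)
  r1 <- casIORef ref 0 1   -- succeeds: contents were 0
  r2 <- casIORef ref 0 2   -- fails: contents are now 1
  print (r1, r2)           -- ((True,0),(False,1))
```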
>> One example of an alternative design would be Carter's proposal to expose
>> something closer to the full LLVM concurrency ops <>:
>> Carter Schonwald <carter.schonwald at> wrote:
>>> I'm kinda thinking that we should do the analogue of exposing all the
>>> different memory-model-level choices (because it's not that hard to add
>>> that), and when the person building it has an old version of GCC, it falls
>>> back to the legacy atomic operations.
>>> This also gives a nice path for how to upgrade to the inline-asm approach.
>> These LLVM ops include many parameterized configurations of loads,
>> stores, cmpxchg, atomicrmw and barriers.  In fact, it implements much more
>> than is natively supported in most hardware, but it provides a uniform
>> abstraction.
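[To give a rough sense of the scale of that parameterization, here is a
back-of-the-envelope count in Haskell.  The two lists are illustrative
subsets of LLVM's `atomicrmw` operations and atomic orderings, not a
complete or official enumeration; the point is only that exposing every
combination as a separate primop multiplies quickly.]

```haskell
-- Illustrative subset of LLVM's atomicrmw operations and orderings.
data RMWOp = Xchg | Add | Sub | And | Or | Xor
data AtomicOrdering = Monotonic | Acquire | Release | AcqRel | SeqCst

rmwOps :: [RMWOp]
rmwOps = [Xchg, Add, Sub, And, Or, Xor]

orderings :: [AtomicOrdering]
orderings = [Monotonic, Acquire, Release, AcqRel, SeqCst]

-- Even this subset yields 6 * 5 = 30 distinct configurations.
main :: IO ()
main = print (length rmwOps * length orderings)  -- 30
```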
>> My original thought was that any kind of abstraction like that would be
>> built and maintained as a Haskell library, and only the most rudimentary
>> operations (required to get access to processor features) would be exposed
>> as primops.  Let's call this the "small" set of concurrency ops.
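[The "small set plus library" approach could be sketched as below.  This
is a hypothetical layer, not a proposed GHC API: the names `MemoryOrder`,
`atomicReadAt`, and `atomicWriteAt` are invented for illustration, and the
small set is modelled with `atomicModifyIORef'`.  The library "rounds up"
every requested ordering to the strongest operation it has, which is
exactly the mapping work the text says would otherwise land in a backend.]

```haskell
import Data.IORef (IORef, newIORef, atomicModifyIORef')

-- Hypothetical ordering parameter, mirroring LLVM's choices.
data MemoryOrder = Monotonic | Acquire | Release | AcqRel | SeqCst
  deriving (Eq, Show)

-- The library "rounds up": whatever ordering the caller asks for, the
-- implementation uses the strongest operation the small set offers
-- (modelled here with atomicModifyIORef').  A smarter library could
-- select cheaper code paths where the target supports them.
atomicReadAt :: MemoryOrder -> IORef a -> IO a
atomicReadAt _order ref = atomicModifyIORef' ref (\x -> (x, x))

atomicWriteAt :: MemoryOrder -> IORef a -> a -> IO ()
atomicWriteAt _order ref new = atomicModifyIORef' ref (\_ -> (new, ()))

main :: IO ()
main = do
  ref <- newIORef (0 :: Int)
  atomicWriteAt Release ref 42
  v <- atomicReadAt Acquire ref
  print v  -- 42
```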
>> If we want the "big set" I think we're doomed to *reproduce* the logic
>> that maps LLVM concurrency abstractions onto machine ops irrespective of
>> whether those abstractions are implemented as Haskell functions or as
>> primops:
>>    - If the former, then the Haskell library must map the full set of
>>    ops to the reduced small set (just like LLVM does internally)
>>    - If we instead have a large set of LLVM-isomorphic primops... then
>>    to support the same primops *in the native code backend* will, again,
>>    require reimplementing all configurations of all operations.
>> Unless... we want to make concurrency ops something that require the LLVM
>> backend?
>> Right now there is not a *performance* disadvantage to supporting a
>> smaller rather than a larger set of concurrency ops (LLVM has to emulate
>> these things anyway, or "round up" to more expensive ops).  The scenario
>> where it would be good to target ALL of LLVM's interface would be if
>> processors and LLVM improved in the future, and we automatically got the
>> benefit of better HW support for some op on some arch.
>> I'm a bit skeptical of that proposition itself, however.  I personally
>> don't really like a world where we program with "virtual operations" that
>> don't really exist (and thus can't be *tested* against properly).
>> Absent formal verification, it seems hard to get this code right anyway.
>> Errors will be undetectable on existing architectures.
>>   -Ryan
>> _______________________________________________
>> ghc-devs mailing list
>> ghc-devs at
