Design discussion for atomic primops to land in 7.8

Ben Lippmeier benl at
Mon Aug 26 09:17:59 CEST 2013

> > Well, what's the long term plan?  Is the LLVM backend going to become the only backend at some point?
> I wouldn't argue against ditching the NCG entirely. It's hard to justify fixing NCG performance problems when fixing them won't make the NCG faster than LLVM, and everyone uses LLVM anyway.
> We're going to need more and more SIMD support when processors supporting the Larrabee New Instructions (LRBni) appear on people's desks. At that time there still won't be a good enough reason to implement those instructions in the NCG.
> I hope to implement SIMD support for the native code gen soon. It's not a huge task and having feature parity between LLVM and NCG would be good. 

Will you also update the SIMD support, register allocators, and calling conventions in 2015 when AVX-512 lands on the desktop? On all supported platforms? What about support for the x86 vcompress and vexpand instructions with mask registers? What about when someone finally asks for packed conversions between 16xWord8s and 16xFloat32s where you need to split the result into four separate registers? LLVM does that automatically.
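To make the point concrete, here is a sketch of what that packed conversion looks like in LLVM IR (the function name is hypothetical). The front end emits a single `uitofp` on an illegally wide vector type, and on an SSE-class target LLVM's type legalizer splits the `<16 x float>` result into four `<4 x float>` values, each in its own XMM register, with no work required from the code generator's client:

```llvm
; Hypothetical example: convert 16 unsigned bytes to 16 floats.
; The <16 x float> result is wider than any SSE register, so the
; backend's type legalizer automatically splits it into four
; <4 x float> pieces during instruction selection.
define <16 x float> @bytes_to_floats(<16 x i8> %v) {
  %w = uitofp <16 x i8> %v to <16 x float>
  ret <16 x float> %w
}
```

An NCG doing the same thing would have to implement this splitting, and the accompanying calling-convention rules for multi-register vector results, by hand.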

I've been down this path before. In 2007 I implemented a separate graph-colouring register allocator in the NCG, ostensibly to improve GHC's numeric performance, but the LLVM backend subsumed that work, and now having two separate register allocators is more of a maintenance burden than a help to anyone. At the time, LLVM was just becoming well known, so it wasn't obvious that implementing a new register allocator was largely a redundant piece of work -- but I think it's clear now. I was happy to work on the project at the time, and I learned a lot from it, but when starting new projects now I also try to imagine the system that will replace the one I'm dreaming of.

Of course, you should do what interests you -- I'm just pointing out a strategic consideration.



More information about the ghc-devs mailing list