[Haskell-cafe] Re: Windows vs. Linux x64
marlowsd at gmail.com
Fri Dec 12 08:57:44 EST 2008
John Meacham wrote:
> On Tue, Nov 25, 2008 at 09:39:35PM +0100, Ketil Malde wrote:
>> This corresponds to my experiences - 64 bits is slower, something I've
>> ascribed to the cost of increased pointer size.
> GHC unfortunately also uses 64-bit integers when in 64-bit mode, so the
> cost paid is increased due to that as well. Also, since each math
> instruction needs an extra byte telling it to work on 64-bit data, the
> code is less dense.
Right - in the Java world they use tricks to keep pointers down to 32 bits
on a 64-bit platform, e.g. by shifting pointers by a couple of bits (giving
you access to 16Gb). There are a number of problems with doing this in GHC:
- we already use those low pointer bits for encoding tag information.
So perhaps we could take only one bit, giving you access to 8Gb,
and lose one tag bit.
- it means recompiling *everything*. It's a completely new way, so you
have to make the decision to do this once and for all, or build all
your libraries + RTS twice. In JITed languages they can make the
choice at runtime, which makes it much easier.
- it tends to be a bit platform-specific, because you need a base
address in the address space for your 16Gb of memory, and different
platforms lay out the address space differently. The nice thing about
GHC's memory manager is that it currently has *no* dependencies on
address-space layout (except in the ELF64 dynamic linker... sigh).
- as usual with code-generation knobs, it multiplies the testing
surface, which is something we're quite sensitive to (our surface is
already on the verge of being larger than we can cope with given our
resources).
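The arithmetic of the one-bit variant above can be sketched as follows. This is a hypothetical illustration, not GHC's actual representation: heap objects on x86-64 are 8-byte aligned, so a pointer's low three bits are free, and GHC repurposes them as constructor-tag bits. Spending one of those bits as a shift lets a 32-bit word span 2^33 bytes = 8Gb, at the cost of one tag bit. The function names here are my own:

```haskell
import Data.Bits (shiftL, shiftR)
import Data.Word (Word32, Word64)

-- Compress a 64-bit heap pointer to 32 bits by shifting out one low
-- bit.  The bit dropped is the lowest tag bit (the one "lost" in the
-- scheme above); the other two tag bits ride along in the compressed
-- form, and the addressable heap becomes 2^(32+1) bytes = 8Gb.
compress :: Word64 -> Word32
compress p = fromIntegral (p `shiftR` 1)

-- Decompress by shifting back.  This round-trips any pointer whose
-- lowest bit is clear and whose address lies below the 8Gb boundary.
decompress :: Word32 -> Word64
decompress c = fromIntegral c `shiftL` 1

main :: IO ()
main = do
  let p = 0x123456780 :: Word64   -- an 8-byte-aligned address below 8Gb
  print (decompress (compress p) == p)   -- prints True
```

Note that the compressed form is only 32 bits wide, which is where the heap-size savings come from; every load and store of a heap pointer would have to go through this shift, which is why the scheme has a (small) runtime cost on top of the recompilation issues listed above.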
So my current take on this is that it isn't worth it just to get access to
more memory and slightly improved performance. However, perhaps we should
work on making it easier to use the 32-bit GHC on 64-bit platforms - IIRC
right now you have to use something like -opta-m32 -optc-m32.