memory fragmentation with ghc-7.6.1

Simon Marlow marlowsd at gmail.com
Wed Sep 26 10:28:39 CEST 2012


On 26/09/2012 05:42, Ben Gamari wrote:
> Simon Marlow <marlowsd at gmail.com> writes:
>
>> On 21/09/2012 04:07, John Lato wrote:
>>> Yes, that's my current understanding.  I see this with ByteString and
>>> Data.Vector.Storable, but not
>>> Data.Vector/Data.Vector.Unboxed/Data.Text.  As ByteStrings are pretty
>>> widely used for IO, I expected that somebody else would have
>>> experienced this too.
>>>
>>> I would expect some memory fragmentation with pinned memory, but the
>>> change from ghc-7.4 to ghc-7.6 is rather extreme (no fragmentation to
>>> several GB).
>>
>> This was a side-effect of the improvements we made to the allocation of
>> pinned objects, which ironically was made to avoid fragmentation of a
>> different kind.  What is happening is that the memory for the pinned
>> objects is now taken from the nursery, and so the nursery has to be
>> replenished after GC.  When we allocate memory for the nursery we like
>> to allocate it in big contiguous chunks, because that works better with
automatic prefetching, but the memory is horribly fragmented due to all
>> the pinned objects, so the large allocation has to be satisfied from the OS.
>>
> It seems that I was bitten badly by this bug, with productivity reduced to
> 30% with 8 threads. While the fix on HEAD has brought productivity back up to the
> mid-90% mark, runtime for my program has regressed by nearly 40%
> compared to 7.4.1. It's been suggested that this is the result of the
> new code generator. How should I proceed from here? It would be nice to
> test with the old code generator to verify that the new codegen is in
> fact the culprit, yet there doesn't seem to be a flag to accomplish
> this. Ideas?

I removed the flag yesterday, so as long as you have a GHC built from 
before yesterday you can use -fno-new-codegen to get the old codegen.  You 
might need to compile libraries with the flag too, depending on where 
the problem is.
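
For example, something like this (a sketch only; -fforce-recomp just 
makes sure everything is actually rebuilt with the flag):

   ghc -O2 -fforce-recomp -fno-new-codegen Main.hs

Any libraries you suspect (bytestring, vector, etc.) would need to be 
rebuilt with the flag as well, for instance by passing 
--ghc-options=-fno-new-codegen to cabal install.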

I'd be very interested to find out whether the regression really is due 
to the new code generator, because in all the benchmarking I've done the 
worst case I found was a program that goes 4% slower, and on average 
performance is the same as with the old codegen.  It's likely that with 
some tweaking we'll be beating the old codegen consistently by 7.8.1.
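
As an aside on the fragmentation issue quoted above, here is a minimal 
sketch (not from this thread; the sizes and counts are made up) of the 
kind of workload involved: lots of small pinned buffers, which is what 
ByteString allocates under the hood.  Building it with 7.6.1 and running 
with +RTS -s lets you compare total memory in use against the live data:

   import qualified Data.ByteString as BS
   import Control.Monad (forM)

   -- Each 64-byte ByteString is a separate pinned buffer; with 7.6.1
   -- these are carved out of nursery blocks, which can be left
   -- fragmented when the ByteStrings are kept alive across GCs.
   main :: IO ()
   main = do
     chunks <- forM [1 .. 200000 :: Int] $ \i ->
       return $! BS.replicate 64 (fromIntegral i)
     print (sum (map BS.length chunks))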

Cheers,
	Simon



