storing highly shared data structures
Simon Marlow
simonmar at microsoft.com
Wed Jan 11 06:54:37 EST 2006
Bulat Ziganshin wrote:
> Hello Simon,
>
> Tuesday, January 10, 2006, 12:26:30 PM, you wrote:
>
>
>>>> CM> My old version is faster, because the version with
>>>> CM> makeStableName does a lot of GC.
>>>>
>>>> CM> MUT time 27.28s ( 28.91s elapsed)
>>>> CM> GC  time 133.98s (140.08s elapsed)
>>>>
>>>> try adding the infamous "+RTS -A10m" switch ;)
>
>
> SM> The real problem seems to be that minor GCs are taking too long. Having
> SM> said that, you can usually reduce GC overhead with large -A or -H options.
>
> It is the same problem as with scanning large IOArrays on each GC.
Actually I think it's a different problem, with the same workaround.
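For concreteness, the command-line form of that workaround looks like this (the binary name is hypothetical; -sstderr just prints a GC summary so you can see the effect of the larger allocation area):

```shell
# Hypothetical program name; only the RTS flags matter here.
./myprog +RTS -A10m -sstderr    # 10MB allocation area (nursery)
./myprog +RTS -H64m -sstderr    # or suggest a 64MB overall heap instead
```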
> In this case, I think, the stable name table is scanned on every GC,
> which is why the program runs so slowly. If GHC dynamically increased
> the "+RTS -A" area when there is a large IOArray or stable name
> table, that would improve speed significantly.
I'm not keen on this, because it's a hack and doesn't address the real
cause of the problem. We're working on addressing the array issue; see
this ticket I created today:
http://cvs.haskell.org/trac/ghc/ticket/650
> Also, I want to repeat my old suggestion: temporarily disable all GC,
> or better, temporarily change the "-A" area at the program's request.
> That would help programs like Joel's and my own, where large data
> structures (say, 5 or 20 MB) are created, used, and then completely
> discarded.
You can change the allocation area size from within a program quite
easily. Write a little C function to assign to
RtsFlags.GcFlags.minAllocAreaSize (#include "RtsFlags.h" first), and
call it from Haskell; the next time GC runs it will allocate the larger
nursery. Please try this and let me know if it works.
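An untested sketch of what that might look like. Note that, as I read the RTS source, minAllocAreaSize is measured in blocks of BLOCK_SIZE bytes rather than in bytes, so the division below is an assumption to check against your copy of RtsFlags.h:

```c
/* setAllocArea.c -- sketch only; field and macro names are taken from
   the GHC RTS headers, and the units (blocks, not bytes) are my
   reading of the source. */
#include "Rts.h"
#include "RtsFlags.h"

void setMinAllocAreaSize(HsInt bytes)
{
    /* Takes effect the next time the GC allocates a new nursery. */
    RtsFlags.GcFlags.minAllocAreaSize = bytes / BLOCK_SIZE;
}
```

```haskell
-- Main.hs (sketch): call the C helper before building the big structure.
foreign import ccall unsafe "setMinAllocAreaSize"
  setMinAllocAreaSize :: Int -> IO ()

main :: IO ()
main = do
  setMinAllocAreaSize (10 * 1024 * 1024)  -- roughly equivalent to -A10m
  -- ... build, use, and discard the large data structure here;
  -- the next GC will use the larger nursery.
  return ()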
Cheers,
Simon
More information about the Glasgow-haskell-users
mailing list