[Haskell-cafe] Haskell maximum stack depth
ahey at iee.org
Tue Feb 5 04:50:09 EST 2008
Simon Peyton-Jones wrote:
> | First bad thing:
> | Stack size (memory consumed) doubles each time it overflows.
> | Second bad thing:
> | Arbitrary limit on stack size unrelated to overall (heap) memory
> | available.
> | Third bad thing (the really bad thing):
> | If a stack has temporarily grown (to 64M say), it will never shrink
> | back down again to something more typical (< 4K say). If I understand
> | correctly, it will continue to take 64M from the heap regardless.
> | What I would like is to be able to set an upper limit on total memory
> | usage and allow the program to freely use this memory as either stack
> | or heap. At least that should be the default behaviour, but maybe
> | also allow +RTS restrictions for "debugging" (though I don't think this
> | is a very good way of investigating a program's stack use).
> | I would also like stack memory allocation to increase (and decrease :-)
> | in some sane sized linear increment, not double each time. With the
> | current scheme, as I understand it, if 65M is needed then 128M will be
> | allocated.
> Would you like to create a ticket for this?
> I don't know how many people it bites, and how often,
The problem is that the mere fact that it *might* bite affects your
whole coding style (well, mine actually :-) for some problems. It also
seems to have given rise to the POV that ghc's current behaviour is good
because stack use is bad. MHO is that it's only ghc's current behaviour
that *makes* stack use bad.
I think it bites a lot less often than it otherwise would because most
people will deliberately choose to use heap in preference to stack (at
least when writing "eager" code) just to avoid the problem. But it still
bites pretty often anyway with "lazy" code for unexpected reasons.
Arguably such situations are indeed a "bug" more often than not, but
I still think terminating the program unnecessarily (at an 8M stack) is
a bad default policy.
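The canonical way this bites "lazy" code: foldl builds its chain of
additions as heap thunks, and forcing the final result then consumes
stack proportional to the list length. A small illustration (the
function names here are mine, not from the thread):

```haskell
import Data.List (foldl')

-- Lazy foldl builds the thunk (((0+1)+2)+...) on the heap; forcing it
-- then needs stack proportional to the list length, and can hit the
-- RTS stack limit for large inputs even though the code looks "eager".
lazySum :: [Int] -> Int
lazySum = foldl (+) 0

-- The strict variant forces the accumulator at each step and runs in
-- constant stack space.
strictSum :: [Int] -> Int
strictSum = foldl' (+) 0

main :: IO ()
main = print (strictSum [1 .. 1000000])  -- prints 500000500000
```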
> Yes, this is the standard solution, and it's a good one because it has
> a robust cost model (no quadratic costs). However, it's tricky to get
> right; copying is simpler. If a significant fraction of runtime (for
> some interesting program(s)) turned out to be consumed by copying
> stacks then we could consider this.
Do you really need such evidence? If we agree that allowing stack to
grow to arbitrary (limited only by memory availability) size is
reasonable then surely we already know that there will be some stack
size for which quadratic copying cost is going to get "stupid" :-)
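For a rough sense of the numbers: if the whole live stack is copied on
each overflow, doubling gives a geometric series of copy costs (linear
in the final size overall), while a fixed linear increment gives an
arithmetic series (quadratic overall). A toy cost model, with made-up
sizes, just to show the asymptotics being argued about:

```haskell
-- Total cells copied while growing a stack from `start` up to at least
-- `target`, copying the entire live stack on every overflow.
-- Hypothetical model for illustration, not GHC's actual accounting.

-- Doubling: copies cost start + 2*start + 4*start + ... = O(target).
copiedDoubling :: Integer -> Integer -> Integer
copiedDoubling start target = go start 0
  where
    go sz acc
      | sz >= target = acc
      | otherwise    = go (2 * sz) (acc + sz)

-- Fixed increment k: copies cost start + (start+k) + (start+2k) + ...
-- an arithmetic series, so O(target^2 / k) in total.
copiedLinear :: Integer -> Integer -> Integer -> Integer
copiedLinear start k target = go start 0
  where
    go sz acc
      | sz >= target = acc
      | otherwise    = go (sz + k) (acc + sz)
```

So the quadratic blow-up the quoted paragraph worries about belongs to
linear-increment growth with copying; doubling keeps copying linear but
at the price of the over-allocation complained about above.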
Of course there are other, more radical, possible solutions that come
to mind, like not using a (C-style) "stack" at all. But I guess we'd
be talking about a complete rewrite of pretty much all of the RTS and
much of the compiler to do this :-(
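One way to read "not using a (C style) stack" at the source level: keep
the pending frames as an ordinary heap data structure, so recursion
depth is bounded only by total memory. A hand-rolled sketch (my own toy
example, nothing like the actual RTS machinery):

```haskell
-- A binary tree with values at the leaves.
data Tree = Leaf Int | Node Tree Tree

-- Naive recursion: depth is bounded by the machine/RTS stack.
sumTree :: Tree -> Int
sumTree (Leaf n)   = n
sumTree (Node l r) = sumTree l + sumTree r

-- Explicit stack of frames, held as a heap-allocated list: a GoRight
-- frame remembers a right subtree still to visit, an AddLeft frame
-- remembers a left result waiting to be combined.
data Frame = GoRight Tree | AddLeft Int

sumTree' :: Tree -> Int
sumTree' t0 = go t0 []
  where
    go (Leaf n)   fs = unwind n fs
    go (Node l r) fs = go l (GoRight r : fs)

    unwind n []                 = n
    unwind n (GoRight r  : fs)  = go r (AddLeft n : fs)
    unwind n (AddLeft m  : fs)  = let s = m + n in s `seq` unwind s fs
```

Both `go` and `unwind` are tail calls, so the only thing that grows with
tree depth is the `[Frame]` list on the heap.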