[Haskell-cafe] GHC 7.0.1 developer challenges
John D. Ramsdell
ramsdell0 at gmail.com
Fri Dec 17 23:26:26 CET 2010
You might like to read about free and reclaimable memory on Linux
systems. I recommend that you go to
http://linuxdevcenter.com/pub/a/linux/2006/11/30/linux-out-of-memory.html
and run the C programs included in the article. Another good way to
learn about Linux memory is to Google "linux free and reclaimable
memory /proc/meminfo"; the results will contain many URLs of
interest.
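
As a concrete starting point, here is a rough Haskell sketch (mine,
untested against the article) that reads /proc/meminfo and adds up
free plus reclaimable memory. Exactly which fields count as
reclaimable is a judgement call; Buffers and Cached are the usual
candidates:

import Data.Maybe (mapMaybe)

-- One line of /proc/meminfo, e.g. "MemFree:  285044 kB", as (field, kB).
parseLine :: String -> Maybe (String, Integer)
parseLine l = case words l of
  (name : val : _) | last name == ':' -> Just (init name, read val)
  _                                   -> Nothing

main :: IO ()
main = do
  s <- readFile "/proc/meminfo"
  let fields = mapMaybe parseLine (lines s)
      get k  = maybe 0 id (lookup k fields)
      -- Buffers and Cached are the usual "reclaimable" fields; add
      -- SReclaimable if you want to count reclaimable slab as well.
      reclaimable = get "Buffers" + get "Cached"
  putStrLn ("free:               " ++ show (get "MemFree") ++ " kB")
  putStrLn ("free + reclaimable: " ++
            show (get "MemFree" + reclaimable) ++ " kB")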
John
On Fri, Dec 17, 2010 at 3:03 AM, Ketil Malde <ketil at malde.org> wrote:
> "John D. Ramsdell" <ramsdell0 at gmail.com> writes:
>
>>> In the absence of any explicit limits, I think a sensible default is to set
>>> maximum total memory use to something like 80%-90% of physical RAM.
>
>> This would be a poor choice on Linux systems. As I've argued
>> previously in this thread, the best choice is to limit the GHC runtime
>> to the free memory and the reclaimable memory of the machine.
>
> Well - it depends, I think. In principle, I would like to err on
> the high side (i.e. set the limit as high as possible), since too
> low a limit could make my program fail.
>
>> On the laptop I'm using right now, physical memory is 1G. Free memory
>> is 278M, and free plus reclaimable memory is 590M. I'm just running
> Firefox and X, so the OS has allocated a lot of memory to caches.
>
> But lots of the memory in use is likely to be inactive (not in the
> current working set of any application), and will be pushed to swap
> if you start asking for more - which is often what you want.
>
> If I interpret these numbers correctly, my laptop is using 1.5G on stuff
> that is basically idle - word processor documents, PDF viewers, a ton
> of web pages (with all the flash carefully filtered out), emacs buffers,
> a half-finished inkscape graphic, and so on. Most of this could easily
> go to swap.
>
>> Note that if you limit the GHC runtime to free plus reclaimable
>> memory, and some other process is chewing up memory, the GHC limit
>> would be small.
>
> Or if you run two copies of your program - then one would get all the
> memory, and the other none.
>
>> But this would ensure that neither of them thrashes, a good thing, right?
>
> Unless the second program actually *needs* the memory.
>
> So I still think the 80% rule is pretty good - it's simple, and
> although it isn't optimal in all cases, it's conservative in that any
> larger bound is almost certainly going to thrash.
>
> You could probably invent more advanced memory behavior on top of that,
> say switching to compacting GC if you detect thrashing.
>
> -k
> --
> If I haven't seen further, it is by standing in the footprints of giants
>
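
For what it's worth, either policy comes down to handing the chosen
figure to the GHC runtime as a -M heap limit. A hypothetical wrapper
along these lines might work; the program name "myprog" is made up,
and the real binary must be linked with -rtsopts so that it accepts
+RTS options:

import System.Exit (exitWith)
import System.Process (rawSystem)

-- Look up one field of /proc/meminfo, in kB.
meminfo :: String -> IO Integer
meminfo key = do
  s <- readFile "/proc/meminfo"
  case [read v | l <- lines s, (k : v : _) <- [words l], k == key ++ ":"] of
    (kb : _) -> return kb
    []       -> return 0

main :: IO ()
main = do
  -- Free plus reclaimable, as argued above; for the 80% rule, use
  -- (\t -> t * 8 `div` 10) `fmap` meminfo "MemTotal" instead.
  free   <- meminfo "MemFree"
  buffs  <- meminfo "Buffers"
  cached <- meminfo "Cached"
  let limit = show (free + buffs + cached) ++ "k"
  -- "myprog" is a placeholder for the real program.
  exitWith =<< rawSystem "myprog" ["+RTS", "-M" ++ limit, "-RTS"]

Substituting 80% of MemTotal for the free-plus-reclaimable sum gives
the rule Ketil favors instead.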