[Haskell-cafe] Lazy IO and closing of file handles

Donn Cave donn at drizzle.com
Wed Mar 14 23:54:32 EDT 2007


> When using readFile to process a large number of files, I am exceeding
> the resource limits for the maximum number of open file descriptors on
> my system.  How can I enhance my program to deal with this situation
> without making significant changes?

I note that if you use mmap(2) to map a disk file into virtual memory,
you may close the file descriptor immediately afterwards and still access
the data.  That might also be relatively economical in other respects,
since the pages are shared with the OS page cache rather than copied into
user-space buffers.  Pardon me if this has already been suggested.
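
Something along these lines ought to do it, assuming the mmap package
and its System.IO.MMap.mmapFileByteString (just an untested sketch):

    import qualified Data.ByteString as B
    import System.IO.MMap (mmapFileByteString)

    -- Map each file in full (Nothing = whole file) and sum the lengths,
    -- as a stand-in for whatever per-file processing is wanted.  The
    -- mapping, not a held-open descriptor, is what keeps the data
    -- accessible.
    totalBytes :: [FilePath] -> IO Int
    totalBytes paths = do
        sizes <- mapM (\p -> fmap B.length (mmapFileByteString p Nothing)) paths
        return (sum sizes)

As far as I know, the ByteString returned there is backed by the mapping
itself, so the file contents are never copied onto the Haskell heap.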

I don't have the code for making ByteStrings from offsets into a mapped
region at hand, but I'm pretty sure it is no problem.  The one thing to
remember is: don't give them any finalizer.
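
Roughly like this, say, given a base pointer into the mapped region (the
pointer itself would come from mmap(2) over the FFI, or from a binding
such as the mmap package's mmapFilePtr; this is only a sketch):

    import qualified Data.ByteString as B
    import Data.ByteString.Internal (fromForeignPtr)
    import Data.Word (Word8)
    import Foreign.ForeignPtr (newForeignPtr_)
    import Foreign.Ptr (Ptr, plusPtr)

    -- Wrap (base + off, len) as a ByteString.  newForeignPtr_ attaches
    -- no finalizer, so nothing will ever try to free or unmap the
    -- region, and no bytes are copied.
    sliceMapped :: Ptr Word8 -> Int -> Int -> IO B.ByteString
    sliceMapped base off len = do
        fp <- newForeignPtr_ (base `plusPtr` off)
        return (fromForeignPtr fp 0 len)

Data.ByteString.Unsafe.unsafePackCStringLen would do much the same job.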

If you eventually need to unmap the files, that would be a problem, but
I think that if they're mapped right you won't need to.  The mapped pages
have the files themselves as backing store, so if you simply leave them
mapped and stop touching them, the host's virtual memory management ought
to reclaim the physical pages on its own.  And the file data will be
contiguous and sequential in memory, which in principle ought to be
optimal for memory resources.

	Donn Cave, donn at drizzle.com

