Lazy streams and unsafeInterleaveIO

Jyrinx jyrinx_list@mindspring.com
Tue, 24 Dec 2002 01:25:14 -0800


Glynn Clements wrote:

>Jyrinx wrote:
>
>>So is this lazy-stream-via-unsafeInterleaveIO not so nasty, then, so 
>>long as a few precautions (not reading too far into the stream, 
>>accounting for buffering, etc.) are taken? I like the idiom Hudak uses 
>>(passing a stream of I/O results to the purely functional part of the 
>>program), so if it's kosher enough I'd like to get hacking elsewhere ...
>
>It depends upon the amount and the complexity of the program's I/O,
>and the degree of control which you require. For a simple stream
>filter (read stdin, write stdout), lazy I/O is fine; for a program
>which has more complex I/O behaviour, lazy I/O may become a nuisance
>as the program grows more complex or as you need finer control.
>
>If you just wanted a getContents replacement with a prompt, the
>obvious solution would be to use unsafeInterleaveIO just to implement
>that specific function.
>
Well, yeah - but I don't want to get into the habit of using the 
unsafe*IO stuff when it just seems convenient. This way, I know 
specifically why I need it, and can encapsulate its use in a small 
library with predictable results (i.e. I can separate concerns).
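For concreteness, here's a minimal sketch of the kind of small library I mean. The name `lazyRepeat` is mine, not from any standard library; `unsafeInterleaveIO` lives in System.IO.Unsafe in current GHC. It turns a repeatable action into a lazy list of its results, so the unsafety is confined to one definition:

```haskell
import System.IO.Unsafe (unsafeInterleaveIO)
import Data.IORef

-- Hypothetical helper: a lazy, infinite list of an action's results.
-- Each element's action runs only when that element is demanded.
lazyRepeat :: IO a -> IO [a]
lazyRepeat act = unsafeInterleaveIO $ do
  x  <- act
  xs <- lazyRepeat act
  return (x : xs)

main :: IO ()
main = do
  counter <- newIORef (0 :: Int)
  xs <- lazyRepeat (modifyIORef counter (+ 1) >> readIORef counter)
  print (take 3 xs)          -- demands (and runs) exactly three actions
  final <- readIORef counter
  print final                -- shows that no further actions ran
```

A prompting input stream is then just `lazyRepeat (putStr "? " >> getLine)`, handed to the pure part of the program as in Hudak's idiom.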

>The main problems with lazy I/O are the lack of control over ordering
>(e.g. you can't delete the file until a stream has been closed, but
>you may not be able to control how long the stream remains open) [...]
>
Wait ... but the Library Report (11.2.1) says that, after a call to 
hGetContents (which I assume getContents is based on), the file is 
"semi-closed," and a call to hClose will indeed then close it ...
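That semi-closed behaviour can be seen directly. A minimal sketch (the file name `demo.txt` is just a scratch file for the example): data forced before `hClose` survives, while the unread tail of the lazy string is lost once the handle is closed.

```haskell
import System.IO
import Control.Exception (evaluate)

main :: IO ()
main = do
  writeFile "demo.txt" "hello\nworld\n"  -- scratch file for the demo
  h <- openFile "demo.txt" ReadMode
  s <- hGetContents h          -- h is now semi-closed (Library Report 11.2.1)
  let firstLine = head (lines s)
  _ <- evaluate (length firstLine)  -- demand the first line before closing
  hClose h                     -- fully closes h; the unread tail of s is lost
  putStrLn firstLine           -- already-forced data is still available
```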

>[...] and
>the inability to handle exceptions (the actual exception won't occur
>until after e.g. getContents has returned).
>
But how does this differ from strict I/O? I mean, say there's a disk 
error in the middle of some big file I want to crunch. Under traditional 
I/O, I open the file and proceed to read each piece of data, process it, 
and continue to the next one, reading the raw data only as I need it. 
When I hit the error, an exception will be thrown in the middle of the 
operation. In lazy I/O, I might use getContents to get all the 
characters lazily; the getContents call will read each piece of data as 
it's needed in the operation - in other words, the data is read as the 
program uses it, just like with traditional I/O. And when the error 
occurs, the operation will be unceremoniously interrupted, again the 
same as by strict I/O. I mean, if an exception is thrown because of a
file error, I can't hope to catch it in the data-crunching part of the 
program anyway ...
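The point can be simulated without an actual disk error. In this sketch, a `userError` stands in for the I/O failure, buried in a lazily produced "file": the exception doesn't appear when the stream is built, only when the consumer demands the broken element, where `try`/`evaluate` can still catch it:

```haskell
import Control.Exception (IOException, evaluate, throwIO, try)
import System.IO.Unsafe (unsafeInterleaveIO)

main :: IO ()
main = do
  -- A lazy element that fails only when demanded; userError stands in
  -- for a disk error hit partway through reading a real file.
  bad <- unsafeInterleaveIO (throwIO (userError "disk error") :: IO Char)
  let s = "ab" ++ [bad] ++ "rest"   -- a "file" whose third byte is broken
  r <- try (evaluate (s !! 2)) :: IO (Either IOException Char)
  case r of
    Left e  -> putStrLn ("exception surfaced at the consumer: " ++ show e)
    Right c -> putStrLn ("read: " ++ [c])
```

Building `s` succeeds; only forcing `s !! 2` raises the error, which is the deferral being discussed.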

Luke Maurer
jyrinx_list@mindspring.com