[Haskell-cafe] Re: Difficult memory leak in array processing
apfelmus at quantentunnel.de
Thu Nov 30 06:17:06 EST 2006
Duncan Coutts wrote:
> On Wed, 2006-11-29 at 20:27 +0100, apfelmus at quantentunnel.de wrote:
>
>> On the implementation level, lazy evaluation is in the way when
>> crunching bytes.
>
> Something I rather enjoyed when hacking on the ByteString lib is finding
> that actually lazy evaluation is great when crunching bytes, though you
> do need to know exactly when to use it.
>
> Lazy ByteStrings rely on lazy evaluation of course. Demanding a lazy
> ByteString alternates between strictly filling in big chunks of data in
> memory with lazily suspending before producing the next chunk.
>
> As many people have observed before, FP optimisation is to a great
> extent about thinking more carefully about a better evaluation order for
> a computation and making some bits stricter and some bits lazier to get
> that better evaluation order.
I completely agree. My statement was not well formulated; what I actually
meant is that the overhead of lazy evaluation occurring at every single
byte to be crunched is in the way. In that case, the cost is too high to
pay off, as the bytes are most likely consumed anyway. The detailed
bookkeeping about every byte ("is it _|_ or not?") is unnecessary for a
(map), which invariably looks at every byte. The situation is already
different for a (fold), though:
any p = foldr (\x b -> p x || b) False
Here, the computation may stop at any position in the list.
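To make that early exit concrete, here is a small self-contained sketch
(just the definition above under an assumed name any', applied to an
infinite list; laziness lets the fold stop at the first match):

    -- Sketch: the lazy foldr-based any stops as soon as the predicate
    -- holds, so it even works on an infinite list.
    any' :: (a -> Bool) -> [a] -> Bool
    any' p = foldr (\x b -> p x || b) False

    main :: IO ()
    main = print (any' (> 3) [1 ..])  -- True; the list is never fully traversed

A strict fold would instead try to traverse the whole list before
returning anything.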
In a sense, lazy ByteStrings just reduce the "cost of lazy evaluation" /
byte ratio by grouping bytes into strict chunks. Bookkeeping becomes
cheaper because one checks only once per chunk rather than once per byte.
Of course, with a stricter fold, (any) gets more costly. The aim is to
make the former ratio smaller while not raising the latter cost too much.
One may say that ByteString makes explicit what the "Optimistic Haskell
Compiler" aimed to make implicit.
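A rough sketch of the grouping idea (not the library's actual
constructors, just the shape I mean): a lazy ByteString is essentially a
lazily produced list of strict chunks, so the "is it _|_ or not?" check
happens once per chunk instead of once per byte:

    import qualified Data.ByteString as S  -- strict ByteStrings

    -- Sketch only: a lazy ByteString as a lazy list of strict chunks.
    -- The real Data.ByteString.Lazy type differs in detail, but has this shape.
    data LazyBS = Empty | Chunk S.ByteString LazyBS

    -- Consuming it alternates between strict work inside a chunk and a
    -- lazy suspension before the next chunk is demanded.
    lazyLength :: LazyBS -> Int
    lazyLength Empty        = 0
    lazyLength (Chunk c cs) = S.length c + lazyLength cs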
IMHO, lazy evaluation is always the better choice (in theory). In
practice, the only problem with lazy evaluation is the overhead (which
hurts mostly for functions of type (large -> small)), and that overhead
is *not* a consequence of "no free lunch" but stems from the fact that
current machine architectures are not very friendly to purely functional
things. In a sense, the natural complexity measure in Haskell is the
number of reductions reported by "hugs +s", whereas the natural
complexity measure on RAM machines is the number of operations in
0xDEADBEEF-arithmetic. Unfortunately, it's the latter that is inside
Intel.
Regards,
apfelmus