[Haskell-cafe] uvector package appendU: memory leak?

Claus Reinke claus.reinke at talk21.com
Tue Mar 31 07:24:46 EDT 2009

>> appendU is strict, insertWith just doesn't force it (follow the source link
>> in the haddocks to see why).
> Ok, I see.
> But, IMHO, this should be clearly documented.

There seems to be some agreement that strict variants of these operations
should also be provided, but that needs some more thought. Whenever I look
at the code, I get the feeling that there are just too many operations
defined in it, so I look away again :-(
> However, reading the code now, I prefer my version using alter.

Yes, it is a more direct way to enforce strict evaluation. Btw, if your
update function 'f' is strict in 'old' and 'new', you might just want to 
use 'Just $! f old new' and save all those 'seq's.
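In code, the pattern looks something like this (a sketch against Data.IntMap;
the helper name insertWith' is mine, not part of the library, and '$!' only
forces to WHNF, which is enough when the stored values are flat, like Int):

```haskell
import qualified Data.IntMap as IM

-- A hypothetical strict variant of insertWith, built on 'alter'.
-- 'Just $!' forces 'f new old' before it is stored in the map, so no
-- chain of suspended updates builds up behind the key.
insertWith' :: (a -> a -> a) -> IM.Key -> a -> IM.IntMap a -> IM.IntMap a
insertWith' f k new = IM.alter upd k
  where
    upd Nothing    = Just new
    upd (Just old) = Just $! f new old
```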
> By the way, about insertWith/alter; from IntMap documentation:
> insertWithKey: O(min(n,W))
> alter: O(log n)
> So, alter is more efficient than insertWithKey?
> And what is that `W` ?

'W' is a maximum bound -- the number of bits in an Int (see the comment at
the top of that haddock page). 'alter' has some shortcuts if the update
function doesn't actually do anything, but for reimplementing
'insertWithKey', the complexity should be the same, as the code would be
essentially the same.
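For instance, 'insertWithKey' can be re-expressed via 'alter' (an
illustrative sketch, not the library's actual definition); the traversal
down to the key is the same either way, hence the same complexity:

```haskell
import qualified Data.IntMap as IM

-- insertWithKey rebuilt on top of 'alter'. Both walk the same path to
-- the key, so both are O(min(n,W)).
insertWithKey' :: (IM.Key -> a -> a -> a)
               -> IM.Key -> a -> IM.IntMap a -> IM.IntMap a
insertWithKey' f k new = IM.alter upd k
  where
    upd Nothing    = Just new
    upd (Just old) = Just (f k new old)
```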
>> But piling up appendUs is still not a good idea. For a moment,
>> I thought that the stream representation's (+++) was handling
>> runtime fusion gracefully, but a simple test case suggests otherwise
>> at least for the simpler case of consU (the attached appendU.hs
>> doesn't do any appendUs, as the consU case already demonstrates
>> the issue; be careful with large numbers here, it'll quickly eat your ram):
> I'm not sure to fully understand the code.

Writing ':<' for 'snocU', the code for 'singletonU a1:<a2:<a3:< .. :< an'
makes a full copy of 'singletonU a1:<a2:<a3:< .. :< a(j-1)' every time it 
adds another 'aj' to the end. That can't be good for performance. The
code I provided delayed the 'consU' operations (essentially keeping a 
list of things that need to be 'consU'ed), then did them all at once at
the end (needing only one copy and allocation, for the final array).
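A toy version of that idea, with a plain list standing in for uvector's
UArr (the names here are mine; with the real library, the final pass is
where the single 'toU'-style allocation would happen):

```haskell
-- Delayed snocs: O(1) per element instead of a full array copy per
-- 'snocU'.
newtype Delayed a = Delayed [a]        -- pending elements, newest first

emptyD :: Delayed a
emptyD = Delayed []

snocD :: Delayed a -> a -> Delayed a   -- just remember the element
snocD (Delayed xs) x = Delayed (x : xs)

-- One pass at the end; with UArr this is the single
-- allocation-and-copy for the final array.
runD :: Delayed a -> [a]
runD (Delayed xs) = reverse xs
```

Building n elements this way is O(n) overall, instead of the O(n^2)
copying that repeated 'snocU' does.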

> But, again, IMHO it does not apply to my original problem.

It is a question of resource constraints, probably. Instead of maintaining
an 'IntMap (UArr Int)' all the way through buildup, you could use an
'IntMap [Int]' while parsing, and then compress into 'IntMap (UArr Int)'
when parsing is finished. If you can afford the larger memory profile
during parsing, this should give better overall performance: the same
memory needs for the final state, but larger memory needs on the way
there. I'd be interested in the differences, if you try it.
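A sketch of that two-phase shape, again with a plain list where the final
UArr would go (with uvector, 'compress' would map something like
'toU . reverse' over the map; the function names are illustrative):

```haskell
import qualified Data.IntMap as IM

-- Phase 1, during parsing: O(1) cons per sample, values accumulate
-- newest-first under each key.
addSample :: IM.Key -> Int -> IM.IntMap [Int] -> IM.IntMap [Int]
addSample k v = IM.insertWith (++) k [v]

-- Phase 2, after parsing: one compression pass per key. 'reverse'
-- restores parse order; with uvector this is where the single
-- 'toU'-style array build per key would happen.
compress :: IM.IntMap [Int] -> IM.IntMap [Int]
compress = IM.map reverse
```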

