[Haskell-cafe] uvector package appendU: memory leak?
manlio_perillo at libero.it
Wed Apr 1 06:16:10 EDT 2009
wren ng thornton wrote:
> Manlio Perillo wrote:
>> Since ratings for each customers are parsed "at the same time", using
>> a plain list would consume a lot of memory, since stream fusion can
>> only be executed at the end of the parsing.
>> On the other hand, when I want to group ratings by movies, stream
>> fusion seems to work fine.
> For the problem as you've discussed it, I'd suggest a different
> approach: You can't fit all the data into memory at once, so you
> shouldn't try to. You should write one program that takes in the
> per-movie grouping of data and produces a per-user file as output.
Well, creating 480189 files in a directory is not a very nice thing to
do to a normal file system.
I could arrange the files into subdirectories, but then this starts to
become too complex.
The solution I'm using now just works.
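For context, the usual way to make a per-user-files layout bearable is to shard the files across subdirectories keyed on the user id. A minimal sketch of such a path scheme (the `users` directory name and the modulus are my own assumptions, not anything from this thread):

```haskell
-- Spread ~480k per-user files over 1000 subdirectories, so that no
-- single directory holds more than a few hundred entries.
userPath :: Int -> FilePath
userPath uid = "users/" ++ show (uid `mod` 1000) ++ "/" ++ show uid ++ ".txt"
```

Even with sharding, the extra directory creation and bookkeeping is presumably the complexity being avoided here.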
It takes about 950 MB of memory and 35 minutes, but it's not a big
problem, since:
1) Once loaded, I can serialize the data in binary format
2) I think that the program can be parallelized, parsing
subsets of the files in N threads, and then merging the maps.
Using this method should also reduce array copying.
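The merge step in point 2 might be sketched as follows, assuming each worker thread builds an IntMap from user id to the rating chunks it has parsed (the list-of-chunks representation and the `Int` element type are my assumptions). Keeping chunks in a list and flattening once at the end avoids the repeated whole-array copying that per-element `appendU`/`snocU` incurs:

```haskell
import qualified Data.IntMap as IM
import Data.List (foldl')

-- Each worker parses its subset of files into a map from user id to
-- the list of rating chunks it saw.
type Chunks = IM.IntMap [[Int]]

-- Merge the per-thread maps; (++) on short chunk lists is cheap
-- compared with copying a whole unboxed array on every append.
mergeMaps :: [Chunks] -> Chunks
mergeMaps = foldl' (IM.unionWith (++)) IM.empty
```

Only after the merge would each user's `[[Int]]` be concatenated and frozen into a single unboxed array, giving one copy per user instead of one per append.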
The problem is that unionWith seems to be lazy, and there is no
strict variant; I'm not sure, though.
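On the unionWith point: in the lazy Data.IntMap API the combining function's results are stored as thunks, so repeated unions can pile up unevaluated chains. Later versions of containers (after this post was written) added Data.IntMap.Strict, whose unionWith evaluates each combined value before storing it. A sketch, using (+) as a stand-in accumulator:

```haskell
import qualified Data.IntMap.Strict as IMS

-- unionWith from Data.IntMap.Strict forces each combined value to
-- WHNF as it is inserted, so no thunk chain builds up across
-- repeated unions.
strictMerge :: IMS.IntMap Int -> IMS.IntMap Int -> IMS.IntMap Int
strictMerge = IMS.unionWith (+)
```

With an older containers, the same effect can be approximated by folding one map into the other and forcing each combined value with `seq` before insertion.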
> have your second program read in the reorganized data and do fusion et al.
> This reduces the problem to just writing the PerMovie -> PerUser
> program. Since you still can't fit all the data into memory, that means
> you can't hope to write the per-user file in one go.
The data *do* fit into memory, fortunately.
> Best of luck.