How to make reading an array from disk more efficient

Hal Daume III hdaume at ISI.EDU
Wed Dec 24 13:42:59 EST 2003


(1) Use unboxed arrays, otherwise you're wasting too much space on 
pointers.  That is, unless you need laziness on the elements, which I 
don't think you do, based on your list.

(2) (Maybe) use imperative arrays; this helps you make sure everything 
is actually evaluated as it is read, rather than piling up thunks.  See 
the sketch below, which combines both points.
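
Concretely, a rough sketch of both points together might look like the 
following (imageBytes and readPixels are names I've made up for 
illustration; it assumes the PPM header has already been parsed and the 
Handle is a binary handle positioned at the start of the pixel data):

import Data.Array.IO (IOUArray, hGetArray)
import Data.Array.MArray (freeze, newArray)
import Data.Array.Unboxed (UArray)
import Data.Word (Word8)
import System.IO (Handle)

-- Illustrative size for the 5000x5000 RGB image described below.
imageBytes :: Int
imageBytes = 5000 * 5000 * 3

-- Slurp the raw pixel bytes into an unboxed, mutable array with a
-- single bulk read: one byte per element, no pointers, no per-pixel
-- thunks.  Header parsing is omitted; the handle should already be
-- positioned just past the PPM header.
readPixels :: Handle -> IO (UArray Int Word8)
readPixels h = do
  arr <- newArray (0, imageBytes - 1) 0 :: IO (IOUArray Int Word8)
  _   <- hGetArray h arr imageBytes
  freeze arr   -- hand back a pure unboxed array for later indexing

With Word8 elements the whole 5000*5000*3 image is roughly 75Mb of array 
data, hGetArray fills it in one bulk read, and the final freeze hands 
back a pure UArray that your subsampling/filtering code can index.  (The 
freeze does copy the array; if that matters, just keep working on the 
IOUArray in IO.)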

On Wed, 24 Dec 2003, andrew cooke wrote:

> 
> Hi,
> 
> I have some code (http://www.acooke.org/andrew/ReadTest.hs) that reads
> data from a file (an image in ppm format; example data (256*256 pixels) at
> http://www.acooke.org/andrew/test.ppm) and stores it in an array of Word8
> values.  The aim is to read a file that contains 5000 * 5000 * 3 Word8
> values.  I think this should take under 100Mb of memory, if the Array
> implementation is efficient.  However, when I run the code on a file of
> that size it looks like it will need several days to complete.  This seems
> rather slow - the GIMP can read the same file in maybe 30 seconds.
> 
> How do I make the Haskell code faster while keeping it flexible?  My
> requirements are:
> 
> - machine limited to 1Gb memory
> - display "status bar"
> - the possibility to filter the pixel stream so that the image is
> subsampled (see "everyN" in the code)
> - the possibility to filter the pixel stream so that a subsection of the
> image is selected.
> 
> All that is possible in the code I have now, but it's slow.
> 
> Thanks,
> Andrew
> 
> 

-- 
 Hal Daume III                                   | hdaume at isi.edu
 "Arrest this man, he talks in maths."           | www.isi.edu/~hdaume


