[Haskell-cafe] Memory Leak - Artificial Neural Network
wren ng thornton
wren at freegeek.org
Thu Nov 5 22:00:44 EST 2009
Hector Guilarte wrote:
> Hi Luke,
> The code is mainly in Spanish with some parts in English...
> Thanks for the explanation, I got the idea very well, but now I got some questions about that.
> How do the Prelude functions for managing lists work? I mean, what do zip, unzip, foldl, foldr, map, and zipWith do? Tail recursion or corecursion? I know, thanks to the profiling I did, that my main memory leak is in the function "entrenamiento" (Spanish for "training"), and I strongly suspect it is where I use those functions I mentioned before. So if they are tail recursive and I change them to my own corecursive versions, maybe I'll get rid of the problem, won't I?
Don't worry about the Prelude definitions, by and large they're the
"right" definitions. If you're curious then just search for Prelude.hs
on your system, or check it out online.
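For what it's worth: map, zip, zipWith, and unzip are corecursive (they produce their output lazily as it is consumed), while foldl is tail recursive but lazy in its accumulator, which is the classic source of space leaks in code like this. A minimal sketch of the difference (the function names here are mine, not from your code):

```haskell
import Data.List (foldl')

-- foldl is tail recursive, but it builds an unevaluated thunk
-- ((0 + 1) + 2) + ... in the accumulator, using O(n) memory.
sumLeaky :: [Int] -> Int
sumLeaky = foldl (+) 0

-- foldl' forces the accumulator at every step, so it runs in
-- constant space. This is usually what you want for strict folds.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0

main :: IO ()
main = print (sumStrict [1 .. 1000000])
```

So before rewriting everything corecursively, check whether the leak is just a lazy accumulator: replacing foldl with Data.List.foldl' (or adding strictness to the accumulated state) is often the whole fix.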
As a more general high-level suggestion, the most efficient way to
implement feedforward ANNs is to treat them as matrix multiplication
problems and use matrices/arrays rather than lists. For a three layer
network with N, M, and O nodes, we proceed as follows:
* start with an N-wide vector of inputs
* multiply by the N*M matrix of weights, to get an M-vector
* map sigmoid or other activation function
* multiply by the M*O matrix of weights for the next layer, to get an O-vector
* apply some interpretation (e.g. winner-take-all) to the output
There are various libraries for optimized matrix multiplication, but
even just using an unboxed array for the matrices will make traversal
much faster.
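The steps above can be sketched with Data.Array.Unboxed. This is a minimal illustration under my own conventions (1-based indices, weights indexed by (input, output); the names multVM, feedforward, wIH, wHO are mine), not a drop-in for your code:

```haskell
import Data.Array.Unboxed

type Vec = UArray Int Double           -- activation vector, 1-based
type Mat = UArray (Int, Int) Double    -- weights indexed by (input, output)

-- Multiply an N-vector by an N*M weight matrix, yielding an M-vector.
multVM :: Vec -> Mat -> Vec
multVM v w =
  let (_, (n, m)) = bounds w
  in listArray (1, m)
       [ sum [ (v ! i) * (w ! (i, j)) | i <- [1 .. n] ] | j <- [1 .. m] ]

sigmoid :: Double -> Double
sigmoid x = 1 / (1 + exp (negate x))

-- One feedforward pass through a three-layer (N -> M -> O) network:
-- multiply, map the activation, multiply, map the activation.
feedforward :: Mat -> Mat -> Vec -> Vec
feedforward wIH wHO input =
  amap sigmoid (amap sigmoid (input `multVM` wIH) `multVM` wHO)
```

Because UArray is unboxed and strict in its elements, each layer's activations are fully evaluated when the array is built, so no thunks accumulate across training iterations the way they can with lists.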