[Haskell-cafe] Memory Leak - Artificial Neural Network

Hector Guilarte hectorg87 at gmail.com
Thu Nov 5 08:29:27 EST 2009


Hi Luke,

The code is mainly in Spanish with some parts in English...

Thanks for the explanation. I got the idea very well, but now I have some questions about it.

How do the Prelude functions for managing lists work? I mean, what do zip, unzip, foldl, foldr, map and zipWith do: tail recursion or corecursion? I know, thanks to the profiling I did, that my main memory leak is in the function "entrenamiento" (which means training in Spanish), and I strongly suspect it comes from where I use the functions I mentioned before, so if they are tail recursive and I change them to my own corecursive versions, maybe I'll get rid of the problem, won't I?


Thanks,

Hector Guilarte
-----Original Message-----
From: Luke Palmer <lrpalmer at gmail.com>
Date: Thu, 5 Nov 2009 04:24:44 
To: Hector Guilarte<hectorg87 at gmail.com>
Cc: <haskell-cafe at haskell.org>
Subject: Re: [Haskell-cafe] Memory Leak - Artificial Neural Network

On Thu, Nov 5, 2009 at 3:54 AM, Hector Guilarte <hectorg87 at gmail.com> wrote:
> entrenamiento:: ANN -> [[Float]] -> [Float] -> [Float] -> Int -> (ANN,Float)
> entrenamiento red _ _ accum 0 =
>     let squaredErrors = foldl' (+) 0 (map (**2) accum)
>     in (red,squaredErrors)
> entrenamiento red ejemplos esperados accum epoch =
>     let redInicializada = iniciarXsRed red (head ejemplos)
>         redEvaluada = evaluarRed redInicializada
>         redAjustada = ajustarPesos redEvaluada (head esperados)
>         error = (head esperados) - (resultadoRed1Output redAjustada)
>     in entrenamiento redAjustada (tail ejemplos) (tail esperados) (accum ++ [error]) (epoch-1)

Well, I don't speak Spanish (Portuguese?), which makes this especially
hard to read.  But just from your introductory paragraph, maybe I can
give you a few hints.  They probably won't be able to fix your
program, just treat them as things to keep in mind in the future.

When I write in Haskell, my functions are usually *not* tail
recursive.  Tail recursion is good when you are reducing to a flat,
strict domain (Int, Bool, ...), but when you are building up
inductive, lazy structures, the relevant term is *corecursion* (IIRC),
which is a whole different thing.
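
(To illustrate the flat, strict case with a sketch that is not in your
code: a sum collapses the whole list into a single number, so there a
strict, tail-recursive accumulator is exactly the right shape.)

{-# LANGUAGE BangPatterns #-}

-- Reducing to a flat, strict result (one Int): tail recursion with a
-- strict accumulator works well here, because there is no lazy
-- structure we want to consume incrementally.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []     = acc
    go !acc (x:xs) = go (acc + x) xs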

Take, e.g., a tail-recursive map function on lists:

map f = go []
    where
    go accum [] = accum
    go accum (x:xs) = go (accum ++ [f x]) xs

map f [1,2,3] will reduce like this before anything else:

map f [1,2,3]
go [] [1,2,3]
go ([] ++ [f 1]) [2,3]
go (([] ++ [f 1]) ++ [f 2]) [3]
go ((([] ++ [f 1]) ++ [f 2]) ++ [f 3]) []
(([] ++ [f 1]) ++ [f 2]) ++ [f 3]

If the tail recursion has saved any stack space, it has paid for it in
heap space.  (Additionally, the way ++ is used here and in the code I
quoted causes quadratic time behavior, because ++ is linear in the
length of its left argument).
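
(A sketch with a made-up name, not a drop-in fix for the quoted
entrenamiento: since that code only ever sums the squares of accum at
the end, one way around both the ++ and the growing list is to keep a
strict running total instead of building the list at all.)

import Data.List (foldl')

-- Hypothetical sketch: sum the squared errors as you go, so the
-- accumulating list and the (accum ++ [error]) append disappear.
-- foldl' forces the accumulator at every step, so no thunks pile up.
sumSquares :: [Float] -> Float
sumSquares = foldl' (\acc e -> acc + e * e) 0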

The corecursive way to write map is the canonical example:

map f []     = []
map f (x:xs) = f x : map f xs

Notice how the recursive call to map is "under" the (:) constructor?
The new structure goes on the outside of the recursive call, not
passed as an argument.  IOW, we can generate some of the output
without looking at all of the input.  And this has very good behavior:

map f [1,2,3]
f 1 : map f [2,3]   -- and when you get around to evaluating the tail....
... : f 2 : map f [3]  -- ditto
... : f 3 : map f []  -- ditto
... : []

I used ...s to emphasize that we could have forgotten about the head
of the list by now, only processing its tail, so it can be garbage
collected.  map has constant space complexity, in some sense.
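
(One concrete way to see this, again only as an illustration and not
from your code: because the corecursive map yields output before it
has consumed all its input, it even works on an infinite list, whereas
the accumulator version above never returns.)

-- Prelude's map is corecursive, so only the demanded prefix is built:
firstThreeDoubled :: [Int]
firstThreeDoubled = take 3 (map (*2) [1..])   -- [2,4,6]
-- The tail-recursive 'go' version would diverge here, since it must
-- traverse the entire (infinite) input before producing anything.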

The art of programming corecursively is one of the joys of Haskell,
but if you are used to either imperative programming or strict
functional programming (basically... any other language at all), it
takes time to get the hang of.

Luke

