[Haskell-cafe] Backpropagation implementation for a neural net
alp at mestan.fr
Mon Jun 15 11:40:59 EDT 2009
On Mon, Jun 15, 2009 at 5:00 PM, Trin Trin <trin.cz at gmail.com> wrote:
> Hi Alp,
> - even with correctly programmed back-propagation, it is usually hard to
> make the net converge.
Yeah, I know; that's why we're training it until the quadratic error goes
below a given threshold.
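For reference, the quadratic error I'm stopping on can be computed like this (a sketch; `quadError` and its signature are my own naming, not taken from the actual code):

```haskell
-- Hypothetical helper: half the summed squared difference between
-- the target vector and the net's actual output vector.
quadError :: [Double] -> [Double] -> Double
quadError target output =
  0.5 * sum [ (t - o) ^ (2 :: Int) | (t, o) <- zip target output ]
```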
> - usually you initialize neuron weights with somewhat random values, when
> working with back-propagation.
Yeah, that'll be done too, once the algorithm is ready. I'll provide
convenient functions to create a neural net just by giving the number of
layers and their sizes.
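Something along these lines is what I have in mind, assuming a list-of-lists weight representation (all names here, `mkNet` and `Layer` included, are hypothetical, not from the actual library):

```haskell
import System.Random (mkStdGen, randomRs)

-- One weight row per neuron; the extra entry is the bias weight.
type Layer = [[Double]]

chunksOf :: Int -> [a] -> [[a]]
chunksOf _ [] = []
chunksOf n xs = let (a, b) = splitAt n xs in a : chunksOf n b

-- Cut one layer of shape (nNeurons x (nIn + 1)) off an infinite
-- stream of random weights, returning the layer and the rest.
takeLayer :: Int -> Int -> [Double] -> (Layer, [Double])
takeLayer nIn nNeurons ws =
  let rowLen = nIn + 1
      (layerWs, rest) = splitAt (rowLen * nNeurons) ws
  in (chunksOf rowLen layerWs, rest)

-- Build a net from layer sizes, e.g. mkNet 42 [2, 2, 1] for a net
-- with 2 inputs, one hidden layer of 2 neurons and 1 output neuron,
-- with weights drawn uniformly from (-0.5, 0.5).
mkNet :: Int -> [Int] -> [Layer]
mkNet seed sizes =
  go (randomRs (-0.5, 0.5) (mkStdGen seed)) (zip sizes (tail sizes))
  where
    go _  []                    = []
    go ws ((nIn, nOut) : rest) =
      let (layer, ws') = takeLayer nIn nOut ws
      in layer : go ws' rest
```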
> - do some debug prints of the net error while training to see how it is
> converging.
Good idea, yeah.
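Probably something like the following for the log lines (a toy sketch: the error sequence here is fake, just halving each epoch, since the real training code is exactly what's under discussion):

```haskell
-- Build per-epoch debug lines from a sequence of error values.
errorLog :: [Double] -> [String]
errorLog errs =
  [ "epoch " ++ show n ++ ": error = " ++ show e
  | (n, e) <- zip [1 :: Int ..] errs ]

main :: IO ()
main = mapM_ putStrLn (errorLog (take 3 (iterate (* 0.5) 1.0)))
</test>
```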
> - xor function cannot be trained with a single layer neural net !!!
That's why there are two layers there, one hidden and one output. I
consider the "inputs" as ... inputs, not as a first layer of the NN.
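A minimal sketch of that point: with one hidden layer, a sigmoid net can represent XOR. The weights below are hand-picked for illustration (OR and NAND feeding an AND), not learned by backpropagation:

```haskell
sigmoid :: Double -> Double
sigmoid x = 1 / (1 + exp (negate x))

-- One neuron: weighted sum of inputs plus bias, through the sigmoid.
neuron :: ([Double], Double) -> [Double] -> Double
neuron (ws, b) xs = sigmoid (sum (zipWith (*) ws xs) + b)

-- Hidden layer of two neurons, then one output neuron.
xorNet :: [Double] -> Double
xorNet inputs =
  let hidden = [ neuron ([ 20,  20], -10) inputs   -- approximately OR
               , neuron ([-20, -20],  30) inputs ] -- approximately NAND
  in neuron ([20, 20], -30) hidden                 -- approximately AND
```

Reading outputs above 0.5 as 1 and below as 0 gives the XOR truth table.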
Thanks for your time. If you spot anything when reading the code, don't
hesitate to tell me, of course.