[Haskell-beginners] I need advice on design

Hector Guilarte hectorg87 at gmail.com
Tue Dec 29 22:58:08 EST 2009


Sorry for my late answer... Check out this e-mail that wren ng thornton sent to me (and the list) on Nov 5, 2009. Sorry I'm not giving you a direct link to the whole discussion, but I'm sending you this (and the e-mail) from my cellphone... The original discussion was on the haskell-cafe mailing list, under the title: Memory Leak - Artificial Neural Network

Also, there's a package on ANNs that somebody uploaded to Hackage a few days ago. It is called hnn-0.1, a Haskell neural network library.
 
I hope this is useful to you.

Hector Guilarte
Here's wren ng thornton's e-mail:
As a more general high-level suggestion, the most efficient way to 
implement feedforward ANNs is to treat them as matrix multiplication 
problems and use matrices/arrays rather than lists. For a three-layer 
network of N, M, and O nodes we would:
     * start with an N-wide vector of inputs
     * multiply by the N*M matrix of weights, to get an M-vector
     * map sigmoid or other activation function
     * multiply by the M*O matrix of weights for the next layer to get 
an O-vector
     * apply some interpretation (e.g. winner-take-all) to the output

There are various libraries for optimized matrix multiplication, but 
even just using an unboxed array for the matrices will make it much 
faster to traverse through things.
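
For concreteness, here is a minimal sketch of that pipeline using
unboxed arrays. The sigmoid and the winner-take-all step are just
example choices, and the index conventions (rows of a weight matrix
correspond to the layer's output nodes, and an input vector's index
range must match the matrix's column range) are assumptions of the
sketch, not the only way to lay things out:

import Data.Array.Unboxed

type Vec    = UArray Int Double
type Matrix = UArray (Int, Int) Double  -- indexed by (row, column)

-- Standard logistic sigmoid, used here as the activation function.
sigmoid :: Double -> Double
sigmoid x = 1 / (1 + exp (negate x))

-- Matrix-vector product: each output component is the dot product of
-- one matrix row with the input vector.
matVec :: Matrix -> Vec -> Vec
matVec m v = listArray (r0, r1)
    [ sum [ m ! (r, c) * v ! c | c <- range (c0, c1) ]
    | r <- range (r0, r1) ]
  where
    ((r0, c0), (r1, c1)) = bounds m

-- N-vector of inputs -> M-vector of hidden activations -> O-vector of
-- outputs, applying the sigmoid after each weight layer.
feedForward :: Matrix -> Matrix -> Vec -> Vec
feedForward w1 w2 = amap sigmoid . matVec w2 . amap sigmoid . matVec w1

-- Winner-take-all interpretation: index of the strongest output node.
winnerTakeAll :: Vec -> Int
winnerTakeAll v = snd (maximum [ (v ! i, i) | i <- indices v ])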

-----Original Message-----
From: Amy de Buitléir <amy at nualeargais.ie>
Date: Mon, 28 Dec 2009 17:14:53 
To: <beginners at haskell.org>
Subject: [Haskell-beginners] I need advice on design

I'm building a library of components for artificial neural networks.
I'm used to object-oriented languages, so I'm struggling a bit to
figure out how to do a good design in a functional programming
language like Haskell.

Q1: I've come up with two designs, and would appreciate any advice on
improvements and what approach to take.

===== Design #1 =====
class Neuron n where
  activate :: [Double] -> n -> Double      -- inputs -> neuron -> output
  train :: [Double] -> Double -> n -> n    -- inputs -> target -> neuron -> trained neuron

...and then I would have instances of this typeclass. For example:

data Perceptron = Perceptron {
        weights :: [Double],
        threshold :: Double,
        learningRate :: Double
      } deriving (Show)

instance Neuron Perceptron where
  activate inputs perceptron = ...
  train inputs target perceptron = ...
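
For concreteness, the elided bodies might be filled in with a
step-threshold activation and the classic delta rule; this is only a
hypothetical choice, not necessarily the learning rule I would use:

-- Hypothetical bodies for the "..." above.
instance Neuron Perceptron where
  activate inputs p
      | sum (zipWith (*) (weights p) inputs) > threshold p = 1
      | otherwise                                          = 0
  train inputs target p =
      p { weights = zipWith adjust (weights p) inputs }
    where
      err        = target - activate inputs p
      adjust w x = w + learningRate p * err * x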

The disadvantage of this approach is that I need to define and name a
new type (and instance) for each kind of neuron before I can use it.
I'd rather create a neuron on-the-fly by calling a general-purpose
constructor and telling it what functions to use for activation and
training. I think that would make it easier to re-use activation and
training functions in all sorts of different combinations. So I came
up with...

===== Design #2 =====
data Neuron =
    Neuron {
        weights :: [Double],
        activate :: [Double] -> Double,        -- inputs -> output
        train :: [Double] -> Double -> Neuron  -- inputs -> target -> trained neuron
      }
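
For example, a general-purpose constructor along these lines would let
me plug the functions in; the name and the shape of the update-rule
argument here are just hypothetical:

-- Hypothetical smart constructor: the caller supplies an activation
-- function and a weight-update rule, and training produces a new
-- Neuron built the same way from the updated weights.
mkNeuron :: ([Double] -> [Double] -> Double)             -- weights -> inputs -> output
         -> ([Double] -> [Double] -> Double -> [Double]) -- weights -> inputs -> target -> new weights
         -> [Double]                                      -- initial weights
         -> Neuron
mkNeuron act update = go
  where
    go ws = Neuron
      { weights  = ws
      , activate = act ws
      , train    = \inputs target -> go (update ws inputs target)
      }

Different activation and update functions could then be combined
freely without defining a new named type for each combination.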

Q2: I thought there might be some way to define a function type, but
the following doesn't work. Is there something along these lines that
would work?

type activationFunction = [Double] -> Double
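
For what it's worth, a type synonym's name has to start with a capital
letter, so this spelling compiles:

type ActivationFunction = [Double] -> Double

It can then be used in signatures, e.g. activate :: ActivationFunction
in Design #2.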

