NEURAL NETWORKS
- INTRODUCTION -
Guedri Sofiene
sofiene.guedri@gmail.com
Overview
1. Biological inspiration
2. Artificial neurons and neural networks
3. Learning processes
4. Learning with artificial neural networks
Biological inspiration
Animals are able to react adaptively to changes in their
external and internal environment, and they use their nervous
system to perform these behaviours.
[Figure: the biological neuron, with dendrites, axon, and synapses labelled]
The McCulloch-Pitts model
[Figure: a model neuron with inputs $x_1, \dots, x_n$, synaptic weights $w_1, \dots, w_n$, and output $y$]
Artificial neurons
$y = f(x, w)$
$y$ is the neuron's output, $x$ is the vector of inputs, and $w$ is the vector of synaptic weights.
Examples:
sigmoidal neuron: $y = \dfrac{1}{1 + e^{-w^T x - a}}$
Gaussian neuron: $y = e^{-\frac{\|x - w\|^2}{2a^2}}$
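The two example neurons can be sketched directly in code; this is a minimal pure-Python illustration of the formulas above (the function names and the use of plain lists are my own, not from the slides):

```python
import math

def sigmoidal_neuron(x, w, a):
    # y = 1 / (1 + exp(-(w.x + a))) -- a smooth threshold response
    z = sum(wi * xi for wi, xi in zip(w, x)) + a
    return 1.0 / (1.0 + math.exp(-z))

def gaussian_neuron(x, w, a):
    # y = exp(-||x - w||^2 / (2 a^2)) -- response peaks when x equals w
    sq = sum((xi - wi) ** 2 for xi, wi in zip(x, w))
    return math.exp(-sq / (2.0 * a * a))
```

The sigmoidal neuron responds to the projection of $x$ onto $w$, while the Gaussian neuron responds to the distance between $x$ and $w$.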
Artificial neural networks
[Figure: a network of interconnected artificial neurons mapping inputs to output]
The young animal learns that the green fruits are sour,
while the yellowish/reddish ones are sweet. The learning
happens by adapting the fruit picking behavior.
ENERGY MINIMIZATION
[Figure: a neural network with inputs $x$ and output $y_{out}$]
$y_{out} = F(x, W)$
A radial basis function depends only on the distance from a centre $c$:
$r(x) = r(\|x - c\|)$
Example: $f(x) = e^{-\frac{\|x - w\|^2}{2a^2}}$ (Gaussian RBF)
[Figure: an RBF network with input $x$, four Gaussian hidden units, and output $y_{out}$]
$y_{out} = \sum_{k=1}^{4} w_k^2 \, e^{-\frac{\|x - w^{1,k}\|^2}{2(a_k)^2}}$
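The RBF network output above can be computed with a short loop over the hidden units; a pure-Python sketch (names are illustrative), with `centers` playing the role of the $w^{1,k}$ and `widths` the $a_k$:

```python
import math

def rbf_output(x, centers, widths, w2):
    # y_out = sum_k w2[k] * exp(-||x - centers[k]||^2 / (2 * widths[k]^2))
    total = 0.0
    for c, a, wk in zip(centers, widths, w2):
        sq = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        total += wk * math.exp(-sq / (2.0 * a * a))
    return total
```

When $x$ sits exactly on a centre, that unit contributes its full output weight; far from all centres, the output decays towards zero.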
Neural network tasks
• control
• classification
• prediction
• approximation
These can be reformulated in general as FUNCTION APPROXIMATION tasks.
Task specification:
Error measure: $E = \frac{1}{N} \sum_{t=1}^{N} \left( F(x^t; W) - y^t \right)^2$
Weight update rule: $w^j_{i,new} = w^j_i - c \, \frac{\partial E(W)}{\partial w^j_i}$
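This error measure and update rule can be demonstrated generically; the sketch below (my own helper names, not from the slides) estimates the gradient by finite differences, which works for any model $F$ at the cost of extra evaluations:

```python
def mse(F, W, data):
    # E = (1/N) * sum_t (F(x^t; W) - y^t)^2
    return sum((F(x, W) - y) ** 2 for x, y in data) / len(data)

def gradient_step(F, W, data, c=0.1, h=1e-6):
    # w_new = w - c * dE/dw, with dE/dw estimated by finite differences
    base = mse(F, W, data)
    new_W = []
    for i, wi in enumerate(W):
        Wp = list(W)
        Wp[i] += h
        grad = (mse(F, Wp, data) - base) / h
        new_W.append(wi - c * grad)
    return new_W
```

The analytic gradients derived in the following slides replace the finite-difference estimate with exact, cheaper expressions.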
Learning:
$E(t) = \left( w(t)^T x^t - y_t \right)^2$, where $w(t)^T x^t = \sum_{j=1}^{m} w_j(t) \, x^t_j$
$w_i(t+1) = w_i(t) - c \, \frac{\partial E(t)}{\partial w_i}$
$\frac{\partial E(t)}{\partial w_i} = 2 \left( w(t)^T x^t - y_t \right) x^t_i$
$w_i(t+1) = w_i(t) - c \cdot 2 \left( w(t)^T x^t - y_t \right) x^t_i$
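The online update rule for a linear neuron is one line of arithmetic per weight; a minimal sketch of the rule above (illustrative names):

```python
def linear_step(w, x, y, c):
    # w_i(t+1) = w_i(t) - c * 2 * (w(t).x^t - y_t) * x_i^t
    err = sum(wj * xj for wj, xj in zip(w, x)) - y
    return [wi - c * 2.0 * err * xi for wi, xi in zip(w, x)]
```

Repeating the step on the same example drives $w^T x$ towards the target $y_t$, provided the learning rate $c$ is small enough.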
Data: $(x^1, y_1), (x^2, y_2), \dots, (x^N, y_N)$
Error: $E(t) = (y(t)_{out} - y_t)^2 = \left( \sum_{k=1}^{M} w_k^2(t) \, e^{-\frac{\|x^t - w^{1,k}\|^2}{2(a_k)^2}} - y_t \right)^2$
Learning:
$w_i^2(t+1) = w_i^2(t) - c \, \frac{\partial E(t)}{\partial w_i^2}$
$\frac{\partial E(t)}{\partial w_i^2} = 2 \left( F(x^t, W(t)) - y_t \right) e^{-\frac{\|x^t - w^{1,i}\|^2}{2(a_i)^2}}$
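The output-weight update for the RBF network follows directly: the derivative with respect to $w_i^2$ is just the $i$-th Gaussian activation scaled by the error. A pure-Python sketch (names are my own):

```python
import math

def rbf_w2_step(x, y, centers, widths, w2, c):
    # phi_i = exp(-||x - w^{1,i}||^2 / (2 a_i^2)); F = sum_i w2_i * phi_i
    phi = []
    for ctr, a in zip(centers, widths):
        sq = sum((xi - ci) ** 2 for xi, ci in zip(x, ctr))
        phi.append(math.exp(-sq / (2.0 * a * a)))
    F = sum(wk * p for wk, p in zip(w2, phi))
    # w2_i(t+1) = w2_i(t) - c * 2 * (F - y_t) * phi_i
    return [wk - c * 2.0 * (F - y) * p for wk, p in zip(w2, phi)]
```

Because the output is linear in the $w_k^2$, this second-layer learning problem is exactly the linear-neuron case with the Gaussian activations as inputs.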
With p layers:
[Figure: a multi-layer perceptron with input $x$, layers $1, 2, \dots, p-1, p$, and output $y_{out}$]
$y_k^2 = \dfrac{1}{1 + e^{-w^{2,kT} y^1 - a_k^2}}, \quad k = 1, \dots, M^2$
$y^2 = (y_1^2, \dots, y_{M^2}^2)^T$
$y_{out} = F(x; W) = w^{pT} y^{p-1}$
Data: $(x^1, y_1), (x^2, y_2), \dots, (x^N, y_N)$
Error: $E(t) = (y(t)_{out} - y_t)^2 = (F(x^t; W) - y_t)^2$
$y_{out} = F(x; W) = \sum_{k=1}^{M} w_k^2 \, \dfrac{1}{1 + e^{-w^{1,kT} x - a_k}}$
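The forward pass of this one-hidden-layer sigmoidal network is a direct transcription of the formula; a minimal sketch (illustrative names, plain lists):

```python
import math

def mlp_forward(x, W1, a1, w2):
    # F(x; W) = sum_k w2[k] * 1 / (1 + exp(-(w^{1,k}.x + a_k)))
    out = 0.0
    for wk, ak, w2k in zip(W1, a1, w2):
        z = sum(wj * xj for wj, xj in zip(wk, x)) + ak
        out += w2k / (1.0 + math.exp(-z))
    return out
```

With zero first-layer weights and biases, every hidden unit outputs 0.5, so the network output is half the sum of the output weights.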
Learning with general optimisation
Synaptic weight change rule for the output neuron:
$w_i^2(t+1) = w_i^2(t) - c \, \frac{\partial E(t)}{\partial w_i^2}$
$\frac{\partial E(t)}{\partial w_i^2} = 2 \left( F(x^t, W(t)) - y_t \right) \dfrac{1}{1 + e^{-w^{1,iT} x^t - a_i}}$
Synaptic weight change rule for the hidden-neuron weights:
$\frac{\partial E(t)}{\partial w_j^{1,i}} = 2 \left( F(x^t, W(t)) - y_t \right) w_i^2 \, \frac{\partial}{\partial w_j^{1,i}} \left( \dfrac{1}{1 + e^{-w^{1,iT} x^t - a_i}} \right)$
$\frac{\partial}{\partial w_j^{1,i}} \left( \dfrac{1}{1 + e^{-w^{1,iT} x^t - a_i}} \right) = \dfrac{e^{-w^{1,iT} x^t - a_i}}{\left( 1 + e^{-w^{1,iT} x^t - a_i} \right)^2} \, x_j^t$
$w_j^{1,i}(t+1) = w_j^{1,i}(t) - c \cdot 2 \left( F(x^t, W(t)) - y_t \right) w_i^2 \, \dfrac{e^{-w^{1,iT} x^t - a_i}}{\left( 1 + e^{-w^{1,iT} x^t - a_i} \right)^2} \, x_j^t$
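Both update rules can be combined into one online training step. The sketch below (my own names) uses the identity $\frac{e^{-z}}{(1+e^{-z})^2} = h(1-h)$ for a sigmoid output $h$, which is exactly the derivative fraction in the hidden-weight rule:

```python
import math

def backprop_step(x, y, W1, a1, w2, c):
    # forward pass: h_i = 1/(1+exp(-(w^{1,i}.x + a_i))), F = sum_i w2_i * h_i
    z = [sum(wj * xj for wj, xj in zip(wk, x)) + ak for wk, ak in zip(W1, a1)]
    h = [1.0 / (1.0 + math.exp(-zi)) for zi in z]
    F = sum(wi * hi for wi, hi in zip(w2, h))
    err2 = 2.0 * (F - y)
    # output weights: dE/dw2_i = 2 (F - y) * h_i
    w2_new = [wi - c * err2 * hi for wi, hi in zip(w2, h)]
    # hidden weights: dE/dW1[i][j] = 2 (F - y) * w2_i * h_i * (1 - h_i) * x_j
    W1_new = [[W1[i][j] - c * err2 * w2[i] * h[i] * (1.0 - h[i]) * x[j]
               for j in range(len(x))] for i in range(len(W1))]
    return W1_new, w2_new
```

Note that the hidden-weight update uses the old output weights $w_i^2(t)$, so both layers take a simultaneous gradient step.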
New methods for learning with neural networks
Bayesian learning: the distribution of the neural network parameters is learnt.