ψ_{a,b}(x) = (1/√a) ψ((x − b)/a) (1)
In this equation, a and b define the degree of scaling and shifting of the mother wavelet ψ, respectively. The scaling and shifting values a and b are chosen as powers of 2, which is known as dyadic analysis for the DWT. The coefficients of expansion C(a, b) for a particular signal can be expressed as
C(a, b) = ∫ I(x) ψ_{a,b}(x) dx (2)
These coefficients give a measure of how closely correlated the modified mother wavelet is with the input signal. In equation (2), I(x) represents the impulse signal of the CUT, and the integration is performed over all possible values of x, where x is time. Wavelet analysis in its discrete (dyadic) form assumes a = 2^j and b = k·2^j, where j and k are integers.
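As an illustration of this dyadic decomposition, the short sketch below computes multi-level DWT coefficients of an impulse signal. The PyWavelets package, the 'db4' mother wavelet, the signal length, and the decomposition level are assumptions made here for illustration; the text itself does not specify them.

import numpy as np
import pywt

# Hypothetical impulse signal of the CUT, sampled at 1024 points.
impulse_signal = np.random.randn(1024)

# Three-level dyadic decomposition: scales a = 2^j, shifts b = k * 2^j.
coeffs = pywt.wavedec(impulse_signal, wavelet="db4", level=3)

# coeffs = [cA3, cD3, cD2, cD1]: the approximation and detail coefficients
# at each dyadic scale; these C(j, k) values form the feature set.
for j, c in enumerate(coeffs):
    print(f"level {j}: {len(c)} coefficients")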
The wavelet coefficients obtained in this way are used as the input features of a neural network classifier, which is trained with the back propagation algorithm. The training error E between the desired output t and the actual network output y is
E = (1/2) Σ_k (t_k − y_k)² (3)
The error value is calculated from this equation, and the weights of the neural network are adjusted to minimize it. This is done by adjusting the weights in the direction of the negative gradient (slope) of E.
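As a purely illustrative numerical example of this rule: for a single weight w with gradient ∂E/∂w = +0.4 and learning rate η = 0.5, the update is Δw = −0.5 × 0.4 = −0.2, so the weight moves opposite to the gradient and the error is reduced.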
The steps involved in training the neural network using the back propagation algorithm are:
1) The features from the training set are propagated through the neural network.
2) The output is calculated by passing the sum of the weighted inputs through the sigmoid function of the hidden layer, which acts as the activation function. The activation function defines the output for a given input.
y = 1 / (1 + e^(−net)) (4)
net = Σ_i w_i x_i (5)
3) The error E between the desired output and the actual network output is calculated from equation (3).
4) The gradient of the error with respect to each weight is obtained by the chain rule,
∂E/∂w = (∂E/∂y) (∂y/∂net) (∂net/∂w) (6)
and, from (5), the derivative of the weighted sum with respect to a weight is simply its input,
∂net/∂w = x (7)
From (4),
∂y/∂net = y (1 − y) (8)
From (3),
∂E/∂y = −(t − y) (9)
From (3), (4) and (5),
∂E/∂w = −(t − y) · (y (1 − y)) · x
5) The change in each weight is calculated from the following equation,
Δw = −η (∂E/∂w) (10)
Here η is the learning rate, which has a value between 0 and 1. The learning rate is, roughly, the fraction of the error that is removed in each update. Choosing the value of the learning rate plays an important role: if too low a value is chosen, the time taken to learn the weights will be too long; if too high a value is chosen, the algorithm tends to oscillate.
6) The output is calculated using the updated weights, the error is calculated again, and the weights are adjusted once more to minimize it.
7) This process is continued until the output fault class of the neural network is equal to the desired fault class, so that the network is considered trained. A minimal sketch of these steps is given below.
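The following sketch walks through steps 1)-7) for a single-hidden-layer network with sigmoid activations and squared error. It is only an illustration of the procedure described above: the layer sizes, learning rate, random training data, and the use of NumPy are assumptions made here, not details taken from the text.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # equation (4)

# Illustrative dimensions: 8 wavelet features, 6 hidden units, 4 fault classes.
n_in, n_hid, n_out = 8, 6, 4
W1 = rng.normal(scale=0.5, size=(n_in, n_hid))   # input-to-hidden weights
W2 = rng.normal(scale=0.5, size=(n_hid, n_out))  # hidden-to-output weights
eta = 0.1   # learning rate, chosen between 0 and 1

# Hypothetical training set: feature vectors X and one-hot fault classes T.
X = rng.normal(size=(20, n_in))
T = np.eye(n_out)[rng.integers(0, n_out, size=20)]

for epoch in range(1000):
    # 1) propagate the features through the network
    net_h = X @ W1                 # weighted sum, equation (5)
    h = sigmoid(net_h)             # hidden-layer output
    net_o = h @ W2
    y = sigmoid(net_o)             # 2) network output

    # 3) squared error between desired and actual output, equation (3)
    E = 0.5 * np.sum((T - y) ** 2)
    if epoch % 100 == 0:
        print(f"epoch {epoch}: E = {E:.4f}")

    # 4) gradients via the chain rule, equations (6)-(9)
    delta_o = -(T - y) * y * (1 - y)          # dE/dnet at the output layer
    delta_h = (delta_o @ W2.T) * h * (1 - h)  # error propagated to the hidden layer

    # 5) weight changes in the direction of the negative gradient, equation (10)
    W2 -= eta * (h.T @ delta_o)
    W1 -= eta * (X.T @ delta_h)

    # 6)-7) repeat until the predicted fault class equals the desired fault class
    if np.all(y.argmax(axis=1) == T.argmax(axis=1)):
        break

In practice the stopping condition of step 7) is usually combined with a limit on the number of epochs, as done here, so that training terminates even if some patterns remain misclassified.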