
ARTIFICIAL INTELLIGENCE AND NEURAL NETWORK APPLICATIONS IN POWER SYSTEMS

Document By
SANTOSH BHARADWAJ REDDY
Email: help@matlabcodes.com
Engineeringpapers.blogspot.com
More Papers and Presentations available on above site

ABSTRACT:

The electric power industry is currently undergoing an unprecedented reform, attributable to one of the most exciting and potentially profitable recent developments: the increasing use of artificial intelligence techniques. The artificial neural network approach has attracted a number of applications, especially in the field of power systems, since it is a model-free estimator. Neural networks provide solutions to very complex and nonlinear problems. Nonlinear problems, like load forecasting, cannot be solved with standard algorithms but can be solved with a neural network with remarkable accuracy. Modern interconnected power systems often consist of thousands of pieces of equipment, each of which may have an effect on the security of the system. Neural networks have shown great promise for their ability to quickly and accurately predict system security when trained with data collected from a small subset of system variables.

The intention of this paper is to give an overview of the application of artificial intelligence and neural network (NN) techniques in power systems, to forecast the load on a power plant and to analyse contingencies in case of any unexpected outage. In this paper we present the key concepts of artificial neural networks, their history, the imitation of the brain's neuron architecture and, finally, the applications (load forecasting and contingency analysis). The applications of artificial intelligence in the areas of load forecasting, using the error backpropagation learning algorithm, and contingency analysis, based on a quality index, are explained in detail.

INTRODUCTION:

Modern power systems are required to generate and supply high-quality electric energy to customers. To achieve this requirement, computers have been applied to power system planning, monitoring and control. Power system application programs for analysing system behaviour are stored in computers. In the planning stage of a power system, system analysis programs are executed repeatedly. Engineers adjust and modify the input data to these programs according to their experience and heuristic knowledge about the system until satisfactory plans are determined. For sophisticated approaches to system planning, methodologies and techniques are needed that incorporate the practical knowledge of planning engineers into programs which also include the numerical analysis programs.

In the area of power system monitoring and control, computer-based Energy Management Systems are now widely used in energy control centers. Abnormal modes of system operation may be caused by network faults, active and reactive power imbalances, or frequency deviations. An unplanned operation may lead to a partial or a complete system blackout. Under these emergency situations, power systems are restored back to the normal state according to decisions made by experienced operation engineers. There is also a need to develop fast and efficient methods for the prediction of abnormal system behaviour.

Artificial intelligence (AI) has provided techniques for encoding and reasoning with declarative knowledge. The advent of neural networks (NNs), in addition, provides neural network modules which can be executed in an online environment. These new techniques supplement conventional computing techniques and methods for solving problems of power system planning, operation and control.
Areas of Applications:
Possible applications of artificial intelligence in power system planning and operation have been investigated by power utilities and researchers. In the last decade, many artificial intelligence systems and expert systems have been built for solving problems in different areas within the field of power systems. These areas are summarized below.

System planning
Transmission planning and design, Generation expansion, Distribution planning.

System Analysis
Load flow analysis, Transient stability.

System Operation & Monitoring
Alarm processing, Fault diagnosis, Substation monitoring, System and network restoration, Load shedding, Voltage / reactive power control, Contingency selection, Network switching, Voltage collapse.

Operational Planning
Unit commitment, Maintenance scheduling, Load forecasting.

1. Artificial Neural Networks

1.1 What is a Neural Network?

An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well.
The major breakthrough in the field of ANNs occurred with the invention of the backpropagation algorithm, which enabled the design and training of multilayered neural networks. Since then, the development of ANNs and the range of areas in which they are applied have been thriving.

1.3 Biological inspiration:

The brain is principally composed of a very large number (circa 10,000,000,000) of neurons, massively interconnected (with an average of several thousand interconnects per neuron, although this varies enormously). Each neuron is a specialized cell which can propagate an electrochemical signal. The neuron has a branching input structure (the dendrites), a cell body, and a branching output structure (the axon). The axons of one cell connect to the dendrites of another via a synapse. When a neuron is activated, it fires an electrochemical signal along the axon. This signal crosses the synapses to other neurons, which may in turn fire. A neuron fires only if the total signal received at the cell body from the dendrites exceeds a certain level (the firing threshold).

To capture the essence of biological neural systems, an artificial neuron is defined as follows: It receives a number of inputs (either from original data, or from the output of other neurons in the neural network). Each input comes via a connection that has a strength (or weight); these weights correspond to synaptic efficacy in a biological neuron. Each neuron also has a single threshold value.
The weighted sum of the inputs is formed, and the threshold subtracted, to compose the activation of the neuron (also known as the post-synaptic potential, or PSP, of the neuron). The activation signal is then passed through an activation function (also known as a transfer function) to produce the output of the neuron.
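As a minimal sketch of the artificial neuron just described, the Python fragment below forms the weighted sum of the inputs, subtracts the threshold and passes the result through a sigmoid transfer function; the input values and weights are arbitrary illustrations, not figures taken from this paper.

import math

def neuron_output(inputs, weights, threshold):
    # Weighted sum of the inputs minus the threshold (the activation / PSP),
    # squashed by a sigmoid transfer function to give the neuron's output.
    activation = sum(w * x for w, x in zip(weights, inputs)) - threshold
    return 1.0 / (1.0 + math.exp(-activation))

# Example: a single neuron with three inputs.
print(neuron_output([0.5, 1.0, 0.2], [0.4, -0.1, 0.7], threshold=0.3))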
If a network is to be of any use, there must be inputs (which carry the values of variables of interest in the outside world) and outputs (which form predictions, or control signals). The input, hidden and output neurons need to be connected together.

1.4 Neural networks versus conventional computers

Neural networks take a different approach to problem solving than that of conventional computers. Conventional computers use an algorithmic approach, i.e. the computer follows a set of instructions in order to solve a problem. Neural networks process information in a similar way the human brain does. Neural networks learn by example. They cannot be programmed to perform a specific task. On the other hand, conventional computers use a cognitive approach to problem solving; the way the problem is to be solved must be known and stated in small unambiguous instructions. These instructions are then converted to a high level language program and then into machine code that the computer can understand.

Neural networks and conventional algorithmic computers are not in competition but complement each other. Moreover, a large number of tasks require systems that use a combination of the two approaches (normally a conventional computer is used to supervise the neural network) in order to perform at maximum efficiency.
1.5 Features
1. High computational rates due to the massive parallelism.
2. Fault tolerance.
3. Training: the network adapts itself based on the information received from the environment.
4. Programmed rules are not necessary.
5. Primitive computational elements.

1.6 The Learning Process
Supervised learning incorporates an external teacher, so that each output unit is told what its desired response to input signals ought to be. During the learning process global information may be required.
Unsupervised learning uses no external teacher and is based upon only local information. It is also referred to as self-organisation.
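The two learning modes can be contrasted with a short sketch: a supervised delta-rule update uses the error between the teacher-supplied target and the actual output, while an unsupervised Hebbian-style update uses only locally available information (the input and the unit's own output). This is an illustrative comparison under those assumptions, not the specific rules used later in this paper.

def supervised_update(weights, inputs, target, output, rate=0.1):
    # Delta rule: an external teacher supplies the desired response (target).
    return [w + rate * (target - output) * x for w, x in zip(weights, inputs)]

def unsupervised_update(weights, inputs, output, rate=0.1):
    # Hebbian-style rule: only local information (input and output) is used.
    return [w + rate * output * x for w, x in zip(weights, inputs)]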
1.7 Transfer Function
The behaviour of an ANN depends on both the weights and the input-output function (transfer function) that is specified for the units. This function typically falls into one of three categories:
Linear (or ramp)
Threshold
Sigmoid
For linear units, the output activity is proportional to the total weighted output. For threshold units, the output is set at one of two levels, depending on whether the total input is greater than or less than some threshold value. For sigmoid units, the output varies continuously but not linearly as the input changes. Sigmoid units bear a greater resemblance to real neurons than do linear or threshold units, but all three must be considered rough approximations.
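The three categories of transfer function can be written down directly; a minimal Python sketch (with the threshold level and the sigmoid shape chosen for illustration):

import math

def linear(x):
    # Linear (ramp) unit: output proportional to the total weighted input.
    return x

def threshold(x, theta=0.0):
    # Threshold unit: one of two output levels, depending on the total input.
    return 1.0 if x > theta else 0.0

def sigmoid(x):
    # Sigmoid unit: output varies continuously but not linearly with the input.
    return 1.0 / (1.0 + math.exp(-x))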
2. APPLICATIONS

App1: Power Systems Load Forecasting

Load forecasting of power systems is a common and popular problem that plays an important role in economics, finance, development, expansion and planning. Generally, most of the papers and projects in this area are categorized into three groups:

• Short-term load forecasting (STLF), over an interval ranging from an hour to a week, is important for various applications such as unit commitment, economic dispatch, energy transfer scheduling and real time control. A lot of studies have been done on short-term load forecasting with different methods. These methods may be classified as follows: regression models, Kalman filtering, Box & Jenkins models, expert systems, fuzzy inference, neuro-fuzzy models and chaos time series analysis. Some of these methods have major limitations, such as neglecting some forecasting attribute conditions, difficulty in finding a functional relationship between all attribute variables and the instantaneous load demand, difficulty in updating the set of rules that govern an expert system, and an inability to adjust to rapid nonlinear system-load changes. NNs can be used to solve these problems. Most of the projects using NNs have successfully considered many factors, such as weather conditions, holidays, weekends and days with special sport matches, in the forecasting model. This is because of the learning ability of NNs with many input factors.

• Mid-term load forecasting (MTLF), ranging from one month to five years, is used to purchase enough fuel for power plants after electricity tariffs are calculated.

• Long-term load forecasting (LTLF), covering from 5 to 20 years or more, is used by planning engineers and economists to determine the type and the size of generating plants that minimize both fixed and variable costs.
2.1.1 Overview of STLF Techniques:-
A wide variety of techniques and algorithms for STLF have been reported in the literature. These procedures typically make use of two basic models: peak load models and load shape models.

Standard Load Concept (Load Shape Model):-
Load forecasting is divided into two general parts: the peak load model and the load shape model. The former deals with daily or weekly peak load modeling, while the latter describes the load as a discrete time series over the forecasting intervals.

Standard Load:-
The standard load curve is produced once a day. It needs rescaling over time. The standard load characterizes the base load. It is calculated by using historical load data. The standard load calculation can be divided into two parts. The first one makes an average using all common days in the same period; the holidays are included with Saturdays and Mondays. The second part investigates the particular characteristic of each day of the week separately. For this, a simple weighted moving average is used.

Residual/Deviation Load:-
The residual load is used to represent the most recent variation of the load. This value contains information for the last 3 hours. Autoregressive and exponential smoothing are the most common methods used to calculate the deviation of the load value.
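A simple way to picture the standard-load and residual-load calculations described above is sketched below; the weighting scheme and smoothing factor are illustrative assumptions, not the exact values used in the paper.

def standard_load(history, weights=(0.5, 0.3, 0.2)):
    # Standard load curve: weighted moving average of the most recent
    # 24-hour load curves recorded for the same day type (newest first).
    return [sum(w * day[h] for w, day in zip(weights, history))
            for h in range(24)]

def residual_load(recent_deviations, alpha=0.4):
    # Residual/deviation load: exponential smoothing of the deviations
    # from the standard load observed over the last few hours.
    smoothed = recent_deviations[0]
    for value in recent_deviations[1:]:
        smoothed = alpha * value + (1 - alpha) * smoothed
    return smoothed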
2.1.2 Artificial Neural Network based short term load forecasting:
The development of an ANN based STLF model is divided into two processes, the "learning phase" and the "recall phase". In the learning phase, the neurons are trained using historical input and output data, and the adjustable weights are gradually optimized to minimize the difference between the computed and desired output. The ANN allows outputs to be calculated based on some form of experience, rather than on understanding the connection between input and output (or cause and effect). In the recall phase, new input data is applied to the network and its outputs are computed and evaluated for testing purposes. In the ANN based STLF model, a layered ANN structure (input layer, hidden layer, output layer) is used. In this method the weights are calculated by a learning process using error propagation in parallel distributed processing. The STLF problem is formulated with the past data as the input data and the latest data as the desired output for training the network.

An initial input data set is presented to ANNSTLF, which adjusts the weight values for a minimum error. Following this, a new input data set is presented and the weight values are adjusted accordingly. The process finishes when the difference between the target output and the computed output for all the input sets is close to zero. The feed forward Multilayer Perceptron (MLP) neural network model is used for implementing the STLF model (ANNSTLF). Fig. 5 shows an MLP with a single hidden layer. The advantage of this model is that it is able to learn highly non-linear mappings. The MLP model is trained by the standard backpropagation training algorithm developed by Rumelhart.
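As a sketch of how the learning and recall phases frame the STLF data, the fragment below builds training patterns in which one day's 24 hourly loads form the input and the next day's 24 hourly loads form the desired output; the model object and its fit/predict interface are hypothetical placeholders, not the authors' software.

import numpy as np

def make_training_set(hourly_load):
    # Learning-phase data: map each day's 24 hourly loads (input)
    # to the following day's 24 hourly loads (desired output).
    X = np.array(hourly_load[:-1])
    y = np.array(hourly_load[1:])
    return X, y

hourly_load = np.random.rand(35 * 7, 24)   # placeholder for historical load data
X, y = make_training_set(hourly_load)
# model.fit(X, y)                  -> learning phase: weights adjusted to minimise error
# forecast = model.predict(X_new)  -> recall phase: new input data applied to the network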
2.1.3 Multilayer Perceptron and its application in load forecasting:-
The multilayer perceptron and the associated backpropagation algorithm provided a sound method to train networks having more than two layers of neurons. The learning rule is known as backpropagation, which is a gradient descent technique with backward error (gradient) propagation, as depicted in Fig. 6. The backpropagation network in essence learns a mapping from a set of input patterns (e.g. extracted features) to a set of output patterns (e.g. class information). This network can be designed and trained to accomplish a wide variety of mappings. This ability comes from the nodes in the hidden layer or layers of the network, which learn to respond to features found in the input pattern. The features recognized or extracted by the hidden units (nodes) correspond to the correlation of activity among different input units. As the network is trained with different examples, the network gains the ability to generalize over similar features found in different patterns. The hidden units (nodes) must be trained to extract a sufficient set of general features applicable to unseen instances; this is achieved, while avoiding overloading of the network, by terminating learning once a desired level of performance has been reached. The backpropagation network is capable of approximating arbitrary mappings given a suitable set of training examples.

The name backpropagation comes from the fact that the error (gradient) of the hidden units is derived by propagating backwards the errors associated with the output units, since the target values for the hidden units are not given, i.e. the desired output at the hidden layer is not defined.
Error Backpropagation:-
The backpropagation algorithm is a generalization of the Widrow-Hoff error correction rule. In the Widrow-Hoff technique, an error, which is the difference between what the output is and what it is supposed to be, is formed, and the synaptic strength is changed in proportion to the error times the input signal, in a direction which reduces the error. The direction of change in the weights is such that the error will reduce in the direction of the gradient (the direction of most rapid change of the error). This type of learning is also called gradient search. In the case of multilayer networks, the problem is much more difficult.

Choice of activation function:
The most common activation function used in the multilayer perceptron is the sigmoid, whose equation is written as f(x) = 1 / (1 + e^(-x)). The backpropagation algorithm for a network using the sigmoid activation function is described below.
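The sigmoid and its derivative, which backpropagation needs when the output error is propagated backwards, can be written as a short sketch; the delta term shown for an output unit follows the standard gradient-descent form and is given here as an illustration rather than as this paper's exact derivation.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(output):
    # For f(x) = 1 / (1 + e^(-x)), f'(x) = f(x) * (1 - f(x)),
    # so the derivative can be computed from the unit's output alone.
    return output * (1.0 - output)

def output_delta(target, output):
    # Error (gradient) term propagated backwards from an output unit.
    return (target - output) * sigmoid_derivative(output)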
2.1.4 The Application of ANN to STLF & Results:-
The ANNSTLF implements a multilayer feed forward neural network which was trained by using the backpropagation training algorithm. Naturally, 24 hourly data points lead to 24 input nodes in the MLP model. Here 2 hidden layers are considered. MSEB data for the period October 1994 to June 1995, i.e. 35 weeks, was utilized for the development and implementation of the software.

The backpropagation algorithm with the MLP model of an Artificial Neural Network (ANN) was developed for the problem of Short Term Load Forecasting (STLF) with a lead time of at least 24 hours. The best performance was obtained for the load forecast for Tuesday, which gives a maximum and average percentage error of 2.00% and 0.20% respectively. This comes very close to the precision obtained by the human forecaster. The tuning of the gain and momentum terms and the selection of the weights and threshold values play a key role in the convergence of the network. High values of the weights lead to divergence, and generally small values of the order of 10^-2 yield better results.
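The convergence observations above (tuning of the gain and momentum terms, and small initial weights of the order of 10^-2) can be illustrated with a hedged sketch of the weight update; the numerical settings are of the kind the text suggests, not the authors' exact values.

import random

def init_weights(n, scale=1e-2):
    # Small random initial weights (order of 10^-2) help convergence.
    return [random.uniform(-scale, scale) for _ in range(n)]

def update_weight(w, grad, prev_step, gain=0.1, momentum=0.8):
    # Gradient-descent step with a momentum term added to the update;
    # the gain (learning rate) and momentum must be tuned for convergence.
    step = gain * grad + momentum * prev_step
    return w + step, step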
App2: Power System Contingency Analysis

Contingency analysis and risk assessment are important tasks for the safe operation of electrical energy networks. During the steady state study of an electrical network, any one of the possible contingencies can have either no effect, a serious effect, or even fatal results for the safety of the system, depending on the given network operating state. Load flow analysis can be used as a crisp technique for contingency risk assessment. However, performing the necessary load flow studies at run time is a tedious and time consuming operation. An alternative solution is the off-line training and the run-time application of artificial neural networks. This article aims at describing how artificial neural networks can be used to bypass the traditional load flow cycle, resulting in significantly faster computation times for online contingency analysis. A discussion of the efficiency of the proposed techniques is also included.

2.2.1 What is a contingency in a power system?
A system contingency is defined as a disturbance that can occur in the network and can result in the possible loss of parts of the network, such as buses, lines, transformers, or power units, in any of the network areas. Load flow analysis is an adequate means for studying the effect of a possible contingency on a given operating point of the network. It is often the case that experienced engineers, involved in the operation of a given system, can effectively guess the effect of a contingency without the support of numerical computations. This intuition of the operators is useful in supporting the initial selection of a list of possible contingencies, which will then be analysed using the technique described here.

2.2.2 System Architecture
A suitable way of studying the effects of contingencies on an electrical network is through the definition of representative operating points and the creation of a relevant database in which the parameters relating to these operating points are stored, as measured directly through network snapshots. Once a number of operating points has been simulated, a list of contingencies to be studied is formed. Each contingency is applied on all operating points found in the database and then a power flow solution is attempted on the network. According to the results of the power flow solution, the contingency applied on the specific operating point can be ranked as “innocent”, “violating”, or “diverging / serious”. The pre-contingency operating point parameters, various operating point indices and metrics, the contingency and the power flow result are then stored in a table per contingency. This contingency table constitutes a set of features and tuples that can be considered as suitable neural network input layer data elements if selected in any combination and after being statistically normalized. The power flow solution classifying any contingency for any operating point is the output layer value of the neural network.

Neural network training is a computer intensive task that needs, however, to be done only once. As soon as the neural network is trained for a contingency, the predictions about the effects of that contingency on any operating point can easily be deduced. The efficiency of the predictions depends on various factors such as the quality and the quantity of the training features and the type, complexity and connectivity of the neural network.
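The architecture described above can be pictured as a small data pipeline: each contingency is applied to every stored operating point, a power flow is attempted, and the outcome is ranked and stored as a training row. In the sketch below, the run_power_flow solver and the violation fields are hypothetical placeholders standing in for a real load flow engine.

def rank_contingency(flow_result):
    # Rank a post-contingency power flow outcome into the three classes
    # used in the text: diverging/serious, violating, or innocent.
    if not flow_result["converged"]:
        return "diverging / serious"
    if flow_result["mva_violations"] or flow_result["voltage_violations"]:
        return "violating"
    return "innocent"

def build_contingency_table(operating_points, contingency, run_power_flow):
    # One table per contingency: the operating point's features (to be
    # statistically normalized later) plus the ranked power flow result,
    # which becomes the neural network's output layer value.
    table = []
    for op in operating_points:
        result = run_power_flow(op, contingency)   # placeholder solver
        table.append({"features": op["features"], "label": rank_contingency(result)})
    return table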
2.2.4 Neural Network Input Feature Selection

A wide range of electrical network parameters can be used for describing the network state. Some of them are the network load level expressed as a percentage of the maximal network load, the number of lines, the cumulative rating of all lines, the cumulative active load, active generation, reactive load, reactive generation, apparent power, etc. In the recent bibliography there are references to more elaborate aggregates that yield better results when applied, such as the active apparent power margin index (expressed as the fraction of the flowing aggregate apparent power over the aggregate MVA line transmission limits) and the voltage stability index. The voltage stability index is computed as “the sensitivities of the total reactive power generation to a reactive power consumption, known as ‘reactive power dispatch coefficients’”.

Neural networks can be trained with any number of input features. The neural network training process can selectively overweight the most salient features and underweight the least significant ones. However, the selection procedure is time consuming for the training of the neural network, and after the training is complete it is not always obvious which of the input nodes are of greater importance.
Furthermore, the least important input layer nodes may add noise to the neural network training process. Bearing this in mind, a pre-selection of the neural network input nodes is of great use. This can be achieved through the use of statistical methods. The statistical methods that apply in the feature selection procedure are used in classification theory. The classification of a set of training examples by two features into two classes is considered to be better when the sub-populations look different. The simplest test proposed is the test of separating two classes using just the means. A feature selection test based on means and variances is also proposed, in which A and B are values of the same feature measured for classes 1 and 2, n1 and n2 are the corresponding numbers of cases, and sig is a significance level. In [4] a further measure for filtering features that separate two classes is proposed, where M1 and M2 are the vectors of feature means for class 1 and class 2, and C1^-1 and C2^-1 are the inverses of the covariance matrices for class 1 and class 2 respectively.

For reasons of simplicity, only a combination of bus and line losses has been considered as a constituent element of a contingency under study. The four most salient features found were the aggregate reactive power generation, the voltage stability index, the aggregate MVA power flow and the real power margin index. This set of selected features has been used for the training and testing of the neural networks subsequently built.
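A common form of the means-and-variances test referred to above compares the difference of the class means of a feature against the pooled standard error of those means; the exact formula used in the paper is not reproduced here, so the sketch below should be read as an illustrative assumption.

import statistics

def means_test(values_class1, values_class2, sig=2.0):
    # Keep a feature if its class means differ by more than `sig`
    # standard errors (sig plays the role of the significance level).
    n1, n2 = len(values_class1), len(values_class2)
    mean_a = statistics.mean(values_class1)
    mean_b = statistics.mean(values_class2)
    var_a = statistics.variance(values_class1)
    var_b = statistics.variance(values_class2)
    separation = abs(mean_a - mean_b) / ((var_a / n1 + var_b / n2) ** 0.5)
    return separation > sig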
2.2.5 Quality index
The quality index is a qualitative measure of the classification power of the neural network. It is an index that has been calculated for all simulations and builds on the idea that, within the three classes of contingency states, the major difference can be considered to occur between two possible categories of contingencies: “innocent” and “non-innocent” contingencies.
In order to compute this “quality index” (QA), a formula based on the confusion matrix has been used, where ai,j is the i-th element of the j-th column of the confusion matrix A. The confusion matrix A is a matrix of frequencies. For each element ai,j of the matrix, the i index refers to the predicted values, while the j index refers to the real values. The values range from one to three, denoting the three possible contingency cases: one in case of a non-convergence / potentially serious contingency, two in case of MVA and voltage violations, and three in case of an innocent contingency.
range from one to three denoting the
results were simillar.
three possible contingency cases: one
in case of nonconvergence /
potentially serious contingency, two
in case of MVA and voltage
violations and three in case of an
innocent contingency.
This technique is involves a
tedious training phase, where a set of
neural networks is created,
while sensitivity analysis in terms of
corresponding to a given set of
the ANN architecture demonstrated
possible contingencies. The resulting
that the number of hidden nodes
set of ANNs demonstrate satisfactory
seem to have a serious effect on the
predictive power in classifying the
performance of the network,
contingencies correctly at run time.
suggesting use of more complex
The run time performance of the
ANNs.
system is very good in terms of
The promising results of this study suggest the application of similar techniques in other areas of security assessment of power systems and other industrial processes.

4. BIBLIOGRAPHY:

Neural networks
1. Patrick K. Simpson, Artificial Neural Systems, Pergamon Press, Elmsford, N.Y., 1990.
2. “Special Issue on Neural Networks I: Theory and Modeling,” Proceedings of the IEEE, September 1990.

Load forecasting
3. Jacques de Villiers and Etienne Barnard, “Backpropagation Neural Nets with One and Two Hidden Layers,” IEEE Transactions on Neural Networks, Volume 4, January 1993, pages 136-144.
4. “Special Issue on Neural Networks II: Analysis, Techniques, and Applications,” Proceedings of the IEEE, October 1990.
D.C. Park, et al., “Electric Load Forecasting Using an Artificial Neural Network,” IEEE Transactions on Power Systems, Volume 6, Number 2, May 1991, pages 442-449.
5. Duane D. Highley and Theodore J. Hilmes, “Load Forecasting by ANN,” IEEE Computer Applications in Power Systems.

Contingency analysis
6. Mitchell T. M., Machine Learning, McGraw-Hill Series in Computer Science, 1997, p. 81.
7. Grainger J. J. and W. D. Stevenson, Jr., Power System Analysis, McGraw-Hill, 1994, chap. 9.
8. Wehenkel L. A., Automatic Learning Techniques in Power Systems, Kluwer Academic Publ., 1998, p. 210.
3. Keywords:
Artificial neural networks, contingency analysis, load forecasting, applications of ANN in power systems, artificial intelligence, training and testing.

