
ASSIGNMENT ON CONTROL SYSTEMS


State Space Analysis of Control Systems

DEPARTMENT OF ELECTRONICS & COMMUNICATION ENGINEERING, INDIAN SCHOOL OF MINES, DHANBAD-826004

SUBMITTED TO DR. S. K. RAGHUWANSHI

SUBMITTED BY: Madhuri Suthar (ADM. NO: 2010JE1034), B-TECH 3RD YEAR, ECE DEPT., ISM DHANBAD

ACKNOWLEDGEMENT: It gives me immense pleasure to present this term paper. I would like to take this opportunity to express my deepest gratitude to the people who have contributed their valuable time to help me complete this work successfully. In recent years, the concept of the automatic control system has attained a very important position in the advancement of modern science, and the progression from classical to modern approaches to automatic control has played a major role in the advancement and improvement of engineering practice. With great pleasure I extend my deep gratitude to Dr. Sanjeev Kumar Raghuwanshi for providing me with in-depth knowledge of the subject of control systems. It is my privilege to express my deep sense of gratitude to Mr. Santosh Kumar for his precious guidance, constructive encouragement and support. I would also like to thank my college, which has directly or indirectly helped me by providing this opportunity to nurture my educational skills. The facilities provided by the library section for data collection played a pivotal role in the completion of this paper, and it is my obligation to acknowledge this help. My thanks are also due to the computer section and to the administrative section for their support. Finally, I acknowledge that it would not have been possible for me to complete this paper without the cooperative assistance mentioned above.

Madhuri Suthar

CONTENTS

INTRODUCTION
STATE SPACE MODEL
CONTROLLABILITY AND OBSERVABILITY
METHODS OF STATE SPACE EQUATION FROM TRANSFER FUNCTION OF A CONTROL SYSTEM
STATE SPACE REPRESENTATION FROM TRANSFER FUNCTION OF AN ELECTRICAL NETWORK
STATE SPACE REPRESENTATION FROM TRANSFER FUNCTION OF A FEEDBACK CONTROL SYSTEM
STATE SPACE REPRESENTATION FROM TRANSFER FUNCTION OF A PHYSICAL EXAMPLE
NON-LINEAR SYSTEMS
REFERENCES

INTRODUCTION

Control theory is an interdisciplinary branch of engineering and mathematics that deals with the behavior of dynamical systems with inputs. The external input of a system is called the reference. When one or more output variables of a system need to follow a certain reference over time, a controller manipulates the inputs to the system to obtain the desired effect on its output. The usual objective of control theory is to calculate the proper corrective action from the controller that results in system stability, that is, the system will hold the set point and not oscillate around it. The inputs and outputs of a continuous control system are generally related by differential equations. If these are linear with constant coefficients, a transfer function relating the input and output can be obtained by taking their Laplace transform. If the differential equations are nonlinear and have a known solution, it may be possible to linearize the nonlinear differential equations about that solution. If the resulting linear differential equations have constant coefficients, one can take their Laplace transform to obtain a transfer function. The transfer function, also known as the system function or network function, is a mathematical representation, in terms of spatial or temporal frequency, of the relation between the input and output of a linear time-invariant system.

Why control? Control is a key enabling technology underpinning:
- enhanced product quality
- waste minimization
- environmental protection
- greater throughput for a given installed capacity
- greater yield
- deferral of costly plant upgrades
- higher safety margins

The "control design" process involves:
1. Plant study and modelling.
2. Determination of sensors and actuators (measured and controlled outputs, control inputs).
3. Performance specifications.
4. Control design (many methods).
5. Simulation tests.
6. Implementation, tests and validation.

State Space Representation

The classical control theory and methods (such as the root locus) that we have been using are based on a simple input-output description of the plant, usually expressed as a transfer function. These methods do not use any knowledge of the interior structure of the plant, they limit us to single-input single-output (SISO) systems, and, as we have seen, they allow only limited control of the closed-loop behavior when feedback control is used. Modern control theory overcomes many of these limitations by using a much richer description of the plant dynamics. A state space representation is a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations. To abstract from the number of inputs, outputs and states, the variables are expressed as vectors. Additionally, if the dynamical system is linear and time-invariant, the differential and algebraic equations may be written in matrix form. The state space representation (also known as the "time-domain approach") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs. With many inputs and outputs, we would otherwise have to write down a Laplace transform for every input-output pair to encode all the information about a system. Unlike the frequency-domain approach, the use of the state space representation is not limited to systems with linear components and zero initial conditions. "State space" refers to the space whose axes are the state variables; the state of the system can be represented as a vector within that space.

Why state space equations?
- Dynamical systems where physical equations can be derived: electrical engineering, mechanical engineering, aerospace engineering, microsystems, process plants.
- The models include physical parameters, so they are easy to use when parameters are changed for design.
- State variables have physical meaning.
- They allow for including non-linearity (state constraints).
- They are easy to extend to Multi-Input Multi-Output (MIMO) systems.
- Advanced control design methods are based on state space equations (reliable numerical optimisation tools).

The internal state variables are the smallest possible subset of system variables that can represent the entire state of the system at any given time. The minimum number of state variables required to represent a given system is usually equal to the order of the system's defining differential equation. If the system is represented in transfer function form, the minimum number of state variables is equal to the order of the transfer function's denominator after it has been reduced to a proper fraction. It is important to understand that converting a state space realization to a transfer function form may lose some internal information about the system, and may provide a description of a system which is stable when the state-space realization is unstable at certain points. In electric circuits, the number of state variables is often, though not always, the same as the number of energy storage elements in the circuit, such as capacitors and inductors. The state variables defined must be linearly independent; no state variable can be written as a linear combination of the other state variables, or the system will not be able to be solved.
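As a concrete illustration (not one of the examples in the original text), the sketch below builds a state space model of a mass-spring-damper system and simulates its step response with SciPy; the numerical values of the mass, damping and stiffness are assumed purely for illustration.

```python
import numpy as np
from scipy import signal

# Assumed illustrative parameters for a mass-spring-damper: m*x'' + b*x' + k*x = u
m, b, k = 1.0, 0.5, 2.0

# State vector x = [position, velocity]^T
A = np.array([[0.0, 1.0],
              [-k / m, -b / m]])   # state matrix
B = np.array([[0.0],
              [1.0 / m]])          # input matrix (force input)
C = np.array([[1.0, 0.0]])         # output matrix (measure position)
D = np.array([[0.0]])              # no direct feedthrough

sys = signal.StateSpace(A, B, C, D)

# Simulate the response to a unit step force
t = np.linspace(0.0, 20.0, 500)
u = np.ones_like(t)
t_out, y, x = signal.lsim(sys, U=u, T=t)
print("final position (approaches u/k = 0.5):", y[-1])
```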

State: The state of a dynamical system is a minimal set of variables x1(t), x2(t), x3(t), ..., xn(t) such that the knowledge of these variables at t = t0 (the initial condition), together with the knowledge of the inputs u1(t), u2(t), u3(t), ..., um(t) for t >= t0, completely determines the dynamic behavior of the system for t > t0. This definition asserts that the dynamic behavior of a state-determined system is completely characterized by the response of the set of n variables xi(t), where the number n is defined to be the order of the system.

State-Variables: The variables x1(t), x2(t), x3(t), ..., xn(t) such that the knowledge of these variables at t = t0 (the initial condition), together with the knowledge of the inputs u1(t), u2(t), u3(t), ..., um(t) for t >= t0, completely determines the behavior of the system for t >= t0, are called state-variables. In other words, the variables that determine the state of a dynamical system are called state-variables. Large classes of engineering, biological, social and economic systems may be represented by state-determined system models. System models constructed with the pure and ideal (linear) one-port elements (such as mass, spring and damper elements) are state-determined system models. For such systems the number of state variables, n, is equal to the number of independent energy storage elements in the system. The values of the state variables at any time t specify the energy of each energy storage element within the system and therefore the total system energy, and the time derivatives of the state variables determine the rate of change of the system energy. Furthermore, the values of the system state variables at any time t provide sufficient information to determine the values of all other variables in the system at that time. There is no unique set of state variables that describes any given system; many different sets of variables may be selected to yield a complete system description. However, for a given system the order n is unique, and is independent of the particular set of state variables chosen. State variable descriptions of systems may be formulated in terms of physical and measurable variables, or in terms of variables that are not directly measurable. It is possible to mathematically transform one set of state variables to another; the important point is that any set of state variables must provide a complete description of the system. In this note we concentrate on a particular set of state variables that are based on energy storage variables in physical systems.

State space models of continuous-time linear systems

The state space model of a continuous-time dynamic system can be derived either from the system model given in the time domain by a differential equation or from its transfer function representation.

The State Space Model and Differential Equations


Consider a general n-th order model of a dynamic system represented by the n-th order differential equation

y^(n)(t) + a_{n-1} y^(n-1)(t) + ... + a_1 y'(t) + a_0 y(t) = b_n u^(n)(t) + b_{n-1} u^(n-1)(t) + ... + b_1 u'(t) + b_0 u(t)

At this point we assume that all initial conditions for the above differential equation, i.e. y(0), y'(0), ..., y^(n-1)(0), are equal to zero. In order to derive a systematic procedure that transforms a differential equation of order n into a state space form representing a system of n first-order differential equations, we first start with a simplified version, namely we study the case when no derivatives with respect to the input are present:

y^(n)(t) + a_{n-1} y^(n-1)(t) + ... + a_1 y'(t) + a_0 y(t) = b_0 u(t)

Introduce the following change of variables

x_1(t) = y(t), x_2(t) = y'(t), ..., x_n(t) = y^(n-1)(t)

which after taking derivatives leads to

x_1'(t) = x_2(t)
x_2'(t) = x_3(t)
...
x_{n-1}'(t) = x_n(t)
x_n'(t) = -a_0 x_1(t) - a_1 x_2(t) - ... - a_{n-1} x_n(t) + b_0 u(t)

The state space representation of the above equations is then given by

x'(t) = A x(t) + b u(t), with
A = [ 0 1 0 ... 0 ; 0 0 1 ... 0 ; ... ; 0 0 0 ... 1 ; -a_0 -a_1 -a_2 ... -a_{n-1} ],  b = [ 0 ; 0 ; ... ; 0 ; b_0 ],  x(t) = [ x_1(t) ; x_2(t) ; ... ; x_n(t) ]

with the corresponding output equation obtained as

y(t) = [ 1 0 0 ... 0 ] x(t)

The above two equations define the state space form which is known in the literature as the phase variable canonical form. In order to extend this technique to the general case, which includes derivatives with respect to the input, we form an auxiliary differential equation having the form

z^(n)(t) + a_{n-1} z^(n-1)(t) + ... + a_1 z'(t) + a_0 z(t) = u(t)

for which the change of variables

x_1(t) = z(t), x_2(t) = z'(t), ..., x_n(t) = z^(n-1)(t)

is applicable, and then apply the superposition principle. Since z(t) is the response of the auxiliary equation, then by the superposition property the response of the original equation is given by

y(t) = b_0 z(t) + b_1 z'(t) + ... + b_{n-1} z^(n-1)(t) + b_n z^(n)(t)

This produces the state space equations in the form already shown above. The output equation can be obtained by eliminating z^(n)(t), i.e.

z^(n)(t) = -a_0 x_1(t) - a_1 x_2(t) - ... - a_{n-1} x_n(t) + u(t)

This leads to the output equation

y(t) = [ (b_0 - b_n a_0)  (b_1 - b_n a_1)  ...  (b_{n-1} - b_n a_{n-1}) ] x(t) + b_n u(t)

It is interesting to point out that for b_n = 0, which is almost always the case, the output equation also has an easy-to-remember form given by

y(t) = [ b_0  b_1  ...  b_{n-1} ] x(t)

Thus, in summary, for a given dynamic system modeled by an n-th order differential equation, one is able to write immediately its state space form, just by identifying the coefficients a_i and b_i and using them to form the corresponding entries in the state space matrices.

The most general state-space representation of a linear system with p inputs, q outputs and n state variables is written in the following form:

x'(t) = A(t) x(t) + B(t) u(t)
y(t) = C(t) x(t) + D(t) u(t)

where:
x(t) is called the "state vector" (n entries),
y(t) is called the "output vector" (q entries),
u(t) is called the "input (or control) vector" (p entries),
A(t) is the n x n "state matrix",
B(t) is the n x p "input matrix",
C(t) is the q x n "output matrix",
D(t) is the q x p "feedthrough (or feedforward) matrix" (in cases where the system model does not have a direct feedthrough, D(t) is the zero matrix),
and x'(t) := dx(t)/dt.

In this general formulation, all matrices are allowed to be time-variant (i.e. their elements can depend on time); however, in the common LTI case, the matrices are time-invariant. The time variable t can be continuous (e.g. t in R) or discrete (e.g. t in Z). In the latter case, the time variable k is usually used instead of t.
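As a sketch of the "identify the coefficients and fill in the matrices" recipe above, the helper below builds the phase variable (controllable) canonical form matrices from the coefficients a_i and b_i of a differential equation with b_n = 0; the function name and the example coefficients are illustrative assumptions, not part of the original text.

```python
import numpy as np

def phase_variable_form(a, b):
    """Build A, B, C for x' = Ax + Bu, y = Cx in phase variable canonical form.

    a: [a_0, a_1, ..., a_{n-1}] coefficients of y, y', ..., y^(n-1)
    b: [b_0, b_1, ..., b_{n-1}] coefficients of u, u', ..., u^(n-1) (b_n = 0 assumed)
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)   # super-diagonal of ones (chain of integrators)
    A[-1, :] = -a                # last row holds the negated a_i coefficients
    B = np.zeros((n, 1))
    B[-1, 0] = 1.0               # input enters the last state equation
    C = b.reshape(1, n)          # output is a weighted sum of the states
    return A, B, C

# Assumed example: y'' + 3 y' + 2 y = 5 u  ->  a = [2, 3], b = [5, 0]
A, B, C = phase_variable_form([2.0, 3.0], [5.0, 0.0])
print(A, B, C, sep="\n")
```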


State Equation Based Modeling Procedure

The complete system model for a linear time-invariant system consists of (i) a set of n state equations, defined in terms of the matrices A and B, and (ii) a set of output equations that relate any output variables of interest to the state variables and inputs, expressed in terms of the C and D matrices. The task of modeling the system is to derive the elements of the matrices and to write the system model in the form:

x'(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)

The matrices A and B are properties of the system and are determined by the system structure and elements. The output equation matrices C and D are determined by the particular choice of output variables. The overall modeling procedure is based on the following steps:
1. Determination of the system order n and selection of a set of state variables from the linear graph system representation.
2. Generation of a set of state equations and the system A and B matrices using a well-defined methodology. This step is also based on the linear graph system description.
3. Determination of a suitable set of output equations and derivation of the appropriate C and D matrices.

Hybrid systems allow for time domains that have both continuous and discrete parts. Depending on the assumptions taken, the state-space model representation can assume the following forms:

System type — State-space model

Continuous time-invariant:
  x'(t) = A x(t) + B u(t)
  y(t) = C x(t) + D u(t)

Continuous time-variant:
  x'(t) = A(t) x(t) + B(t) u(t)
  y(t) = C(t) x(t) + D(t) u(t)

Explicit discrete time-invariant:
  x(k+1) = A x(k) + B u(k)
  y(k) = C x(k) + D u(k)

Explicit discrete time-variant:
  x(k+1) = A(k) x(k) + B(k) u(k)
  y(k) = C(k) x(k) + D(k) u(k)

Laplace domain of continuous time-invariant:
  s X(s) = A X(s) + B U(s)
  Y(s) = C X(s) + D U(s)

Z-domain of discrete time-invariant:
  z X(z) = A X(z) + B U(z)
  Y(z) = C X(z) + D U(z)
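To connect the continuous-time and discrete-time forms listed above, the sketch below discretizes a continuous LTI model with a zero-order hold using SciPy; the sampling period and the example matrices are assumptions made for illustration.

```python
import numpy as np
from scipy.signal import cont2discrete

# Assumed continuous-time LTI model x'(t) = A x(t) + B u(t), y(t) = C x(t) + D u(t)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

dt = 0.1  # assumed sampling period (seconds)

# Zero-order-hold discretization gives x(k+1) = Ad x(k) + Bd u(k), y(k) = Cd x(k) + Dd u(k)
Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), dt, method="zoh")
print("Ad =\n", Ad)
print("Bd =\n", Bd)
```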

Block Diagram Representation of Linear Systems Described by State Equations


The matrix-based state equations express the derivatives of the state variables explicitly in terms of the states themselves and the inputs. In this form, the state vector is expressed as the direct result of a vector integration. The block diagram representation is shown in the figure below. This general block diagram shows the matrix operations from input to output in terms of the A, B, C, D matrices, but does not show the path of individual variables. In state-determined systems, the state variables may always be taken as the outputs of integrator blocks. A system of order n has n integrators in its block diagram. The derivatives of the state variables are the inputs to the integrator blocks, and each state equation expresses a derivative as a sum of weighted state variables and inputs. A detailed block diagram representing a system of order n may be constructed directly from the state and output equations as follows:
Step 1: Draw n integrator (1/s) blocks, and assign a state variable to the output of each block.
Step 2: At the input to each block (which represents the derivative of its state variable) draw a summing element.
Step 3: Use the state equations to connect the state variables and inputs to the summing elements through scaling operator blocks.
Step 4: Expand the output equations and sum the state variables and inputs through a set of scaling operators to form the components of the output.
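The integrator structure described above can be mimicked numerically: each state is the running integral of its own derivative. The sketch below does this with a simple forward-Euler loop (the step size, matrices and input are assumed for illustration; a real simulation would normally use a proper ODE solver).

```python
import numpy as np

# Assumed 2nd-order system: two integrators, states x1 and x2
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

dt, steps = 0.001, 20000
x = np.zeros((2, 1))            # integrator outputs (the state variables)
u = np.array([[1.0]])           # constant unit input

for _ in range(steps):
    x_dot = A @ x + B @ u       # summing junctions: weighted states + inputs
    x = x + dt * x_dot          # integrator blocks: accumulate the derivatives

y = C @ x + D @ u               # output summing/scaling stage
print("steady-state output ~", y[0, 0])
```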


Example: Continuous-time LTI case


Stability and natural response characteristics of a continuous-time LTI system (i.e., linear with matrices that are constant with respect to time) can be studied from the eigenvalues of the matrix A. The stability of a time-invariant state-space model can be determined by looking at the system's transfer function in factored form. It will then look something like this:

G(s) = k (s - z_1)(s - z_2)(s - z_3) / ( (s - p_1)(s - p_2)(s - p_3)(s - p_4) )

The denominator of the transfer function is equal to the characteristic polynomial found by taking the determinant of sI - A:

lambda(s) = | sI - A |

The roots of this polynomial (the eigenvalues) are the system transfer function's poles (i.e., the singularities where the transfer function's magnitude is unbounded). These poles can be used to analyze whether the system is asymptotically stable or marginally stable. An alternative approach to determining stability, which does not involve calculating eigenvalues, is to analyze the system's Lyapunov stability. The zeros found in the numerator of G(s) can similarly be used to determine whether the system is minimum phase. The system may still be input-output stable (see BIBO stable) even though it is not internally stable. This may be the case if unstable poles are canceled out by zeros (i.e., if those singularities in the transfer function are removable).

Controllability and Observability

Controllability and observability are the main issues in the analysis of a system before deciding the best control strategy to be applied, or whether it is even possible to control or stabilize the system. Controllability is related to the possibility of forcing the system into a particular state by using an appropriate control signal. If a state is not controllable, then no signal will ever be able to control the state. If a state is not controllable, but its dynamics are stable, then the state is termed stabilizable. Observability instead is related to the possibility of "observing", through output measurements, the state of a system. If a state is not observable, the controller will never be able to determine the behavior of an unobservable state and hence cannot use it to stabilize the system. However, similar to the stabilizability condition above, if a state cannot be observed it might still be detectable. From a geometrical point of view, looking at the states of each variable of the system to be controlled, every "bad" state of these variables must be controllable and observable to ensure a good behavior in the closed-loop system. That is, if one of the eigenvalues of the system is not both controllable and observable, this part of the dynamics will remain untouched in the closed-loop system. If such an eigenvalue is not stable, the dynamics of this eigenvalue will be present in the closed-loop system, which will therefore be unstable. Unobservable poles are not present in the transfer function realization of a state-space representation, which is why the latter is sometimes preferred in dynamical systems analysis. Solutions to the problems of an uncontrollable or unobservable system include adding actuators and sensors.


Controllability

The state controllability condition implies that it is possible, by admissible inputs, to steer the states from any initial value to any final value within some finite time window. A continuous time-invariant linear state-space model is controllable if and only if

rank [ B  AB  A^2 B  ...  A^(n-1) B ] = n

where the rank is the number of linearly independent rows (or columns) in a matrix.

Observability

Observability is a measure of how well the internal states of a system can be inferred from knowledge of its external outputs. The observability and controllability of a system are mathematical duals (i.e., as controllability provides that an input is available that brings any initial state to any desired final state, observability provides that knowing an output trajectory provides enough information to predict the initial state of the system). A continuous time-invariant linear state-space model is observable if and only if

rank [ C ; CA ; CA^2 ; ... ; CA^(n-1) ] = n
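A minimal numeric sketch of both rank tests (matrices assumed for illustration): build the controllability matrix [B, AB, ..., A^(n-1)B] and the observability matrix [C; CA; ...; CA^(n-1)], check their ranks, and also print the eigenvalues that determine open-loop stability.

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^(n-1)B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

def obsv(A, C):
    """Observability matrix [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

# Assumed example system
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

n = A.shape[0]
print("eigenvalues (poles):", np.linalg.eigvals(A))
print("controllable:", np.linalg.matrix_rank(ctrb(A, B)) == n)
print("observable:  ", np.linalg.matrix_rank(obsv(A, C)) == n)
```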

State Space Representation from Transfer function of a Control System

The "transfer function" of a continuous time-invariant linear state-space model can be derived in the following way. First, taking the Laplace transform of

x'(t) = A x(t) + B u(t)

yields

s X(s) = A X(s) + B U(s)

Next, we solve for X(s), giving

(sI - A) X(s) = B U(s)

and thus

X(s) = (sI - A)^(-1) B U(s)

Substituting for X(s) in the output equation

Y(s) = C X(s) + D U(s)

gives

Y(s) = ( C (sI - A)^(-1) B + D ) U(s)

Because the transfer function G(s) is defined as the ratio of the output to the input of a system, we take

G(s) = Y(s) / U(s)

and substitute the previous expression for Y(s) with respect to U(s), giving

G(s) = C (sI - A)^(-1) B + D
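The formula G(s) = C(sI - A)^(-1)B + D can be checked numerically for an assumed example by comparing a direct evaluation at one frequency against SciPy's state-space-to-transfer-function conversion.

```python
import numpy as np
from scipy.signal import ss2tf

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# Evaluate G(s) = C (sI - A)^(-1) B + D directly at one test frequency
s = 1.0j
G_direct = C @ np.linalg.inv(s * np.eye(2) - A) @ B + D

# Compare with the polynomial form returned by SciPy (numerator, denominator coefficients)
num, den = ss2tf(A, B, C, D)
G_poly = np.polyval(num[0], s) / np.polyval(den, s)

print("direct:", G_direct[0, 0])
print("poly:  ", G_poly)
```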

Clearly G(s) must have q rows and p columns, and thus has a total of q x p elements. So for every input there are q transfer functions, one for each output. This is why the state-space representation can easily be the preferred choice for multiple-input, multiple-output (MIMO) systems.

Canonical realizations

Any given transfer function which is strictly proper can easily be transferred into state space by the following approach (this example is for a 4-dimensional, single-input, single-output system). Given a transfer function, expand it to reveal all coefficients in both the numerator and the denominator. This should result in the following form:

G(s) = ( n_3 s^3 + n_2 s^2 + n_1 s + n_0 ) / ( s^4 + d_3 s^3 + d_2 s^2 + d_1 s + d_0 )

The coefficients can now be inserted directly into the state-space model by the following approach:

x'(t) = [ -d_3  -d_2  -d_1  -d_0 ;  1  0  0  0 ;  0  1  0  0 ;  0  0  1  0 ] x(t) + [ 1 ; 0 ; 0 ; 0 ] u(t)
y(t) = [ n_3  n_2  n_1  n_0 ] x(t)

This state-space realization is called controllable canonical form because the resulting model is guaranteed to be controllable (i.e., because the control enters a chain of integrators, it has the ability to move every state). The transfer function coefficients can also be used to construct another type of canonical form:

x'(t) = [ -d_3  1  0  0 ;  -d_2  0  1  0 ;  -d_1  0  0  1 ;  -d_0  0  0  0 ] x(t) + [ n_3 ; n_2 ; n_1 ; n_0 ] u(t)
y(t) = [ 1  0  0  0 ] x(t)


This state-space realization is called observable canonical form because the resulting model is guaranteed to be observable (i.e., because the output exits from a chain of integrators, every state has an effect on the output).

Proper transfer functions

Transfer functions which are only proper (and not strictly proper) can also be realized quite easily. The trick here is to separate the transfer function into two parts: a strictly proper part and a constant:

G(s) = G_SP(s) + G(infinity)

The strictly proper transfer function G_SP(s) can then be transformed into a canonical state space realization using the techniques shown above. The state space realization of the constant is trivially y(t) = G(infinity) u(t). Together we then get a state space realization with matrices A, B and C determined by the strictly proper part, and matrix D determined by the constant. Here is an example to clear things up a bit:

G(s) = ( s^2 + 3s + 3 ) / ( s^2 + 2s + 1 ) = ( s + 2 ) / ( s^2 + 2s + 1 ) + 1

which yields the following controllable realization:

x'(t) = [ -2  -1 ;  1  0 ] x(t) + [ 1 ; 0 ] u(t)
y(t) = [ 1  2 ] x(t) + [ 1 ] u(t)

Notice how the output also depends directly on the input. This is due to the G(infinity) constant in the transfer function.
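The same conversion can be done numerically with SciPy's tf2ss; note that SciPy may return a different but equivalent realization (state space realizations are not unique), so the check below compares transfer functions rather than matrices. The coefficients are those of the example above.

```python
import numpy as np
from scipy.signal import tf2ss, ss2tf

# G(s) = (s^2 + 3s + 3) / (s^2 + 2s + 1), a proper (not strictly proper) transfer function
num = [1.0, 3.0, 3.0]
den = [1.0, 2.0, 1.0]

A, B, C, D = tf2ss(num, den)
print("A =\n", A)
print("B =\n", B)
print("C =", C, " D =", D)   # D is non-zero because G(s) is only proper

# Round-trip back to a transfer function to confirm the realization is equivalent
num_back, den_back = ss2tf(A, B, C, D)
print("num:", np.round(num_back, 6))
print("den:", np.round(den_back, 6))
```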


State Space Representation from Transfer function of an Electrical Network


The state equations, written in the form of Eq. (16), are a set of n simultaneous operational expressions. The common methods of solving linear algebraic equations, for example Gaussian elimination, Cramer's rule, the matrix inverse, elimination and substitution, may be directly applied to linear operational equations such as Eq. (16). For low-order single-input single-output systems the transformation to a classical formulation may be performed in the following steps:
1. Take the Laplace transform of the state equations.
2. Reorganize each state equation so that all terms in the state variables are on the left-hand side.
3. Treat the state equations as a set of simultaneous algebraic equations and solve for those state variables required to generate the output variable.
4. Substitute for the state variables in the output equation.
5. Write the output equation in operational form and identify the transfer function.
6. Use the transfer function to write a single differential equation between the output variable and the system input.
This method is illustrated in the following example.

Example: Use the Laplace transform method to derive a single differential equation for the capacitor voltage vC in the series R-L-C electric circuit shown in the figure.

Figure: A series R-L-C circuit.

Solution: The linear graph method of state equation generation selects the capacitor voltage vC(t) and the inductor current iL(t) as state variables, and generates the following pair of state equations:

d/dt [ vC ; iL ] = [ 0   1/C ;  -1/L  -R/L ] [ vC ; iL ] + [ 0 ; 1/L ] Vin(t)

The required output equation is:

vC(t) = [ 1  0 ] [ vC ; iL ]

Step 1: In Laplace transform form the state equations are:

s VC(s) = (1/C) IL(s)
s IL(s) = -(1/L) VC(s) - (R/L) IL(s) + (1/L) Vin(s)

Step 2: Reorganize the state equations:

s VC(s) - (1/C) IL(s) = 0
(1/L) VC(s) + (s + R/L) IL(s) = (1/L) Vin(s)

Step 3: In this case we have two simultaneous operational equations in the state variables VC(s) and IL(s). The output equation requires only VC(s). If the first equation is multiplied by (s + R/L), the second equation is multiplied by 1/C, and the equations are added, IL(s) is eliminated:

[ s (s + R/L) + 1/(LC) ] VC(s) = (1/(LC)) Vin(s)

Step 4: The output equation is simply the state variable vC, so the operational equation above already relates the output to the input. Operate on both sides and write it in quotient form:

VC(s) / Vin(s) = (1/(LC)) / ( s^2 + (R/L) s + 1/(LC) )

Step 5: The transfer function H(s) = VC(s)/Vin(s) is:

H(s) = (1/(LC)) / ( s^2 + (R/L) s + 1/(LC) )

Step 6: The differential equation relating vC(t) to Vin(t) is:

d^2 vC/dt^2 + (R/L) dvC/dt + (1/(LC)) vC = (1/(LC)) Vin(t)
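This result can be cross-checked numerically: build the R-L-C state space model, convert it to a transfer function with SciPy, and compare the denominator with s^2 + (R/L)s + 1/(LC). The component values below are assumed for illustration only.

```python
import numpy as np
from scipy.signal import ss2tf

# Assumed component values
R, L, C_cap = 1.0, 0.5, 0.1   # ohms, henries, farads

# State vector [vC, iL]; input Vin; output vC
A = np.array([[0.0, 1.0 / C_cap],
              [-1.0 / L, -R / L]])
B = np.array([[0.0],
              [1.0 / L]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)
print("numerator:  ", np.round(num[0], 6))   # expect [0, 0, 1/(L*C)]
print("denominator:", np.round(den, 6))      # expect [1, R/L, 1/(L*C)]
print("R/L =", R / L, " 1/(LC) =", 1.0 / (L * C_cap))
```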


State Space Representation from Transfer function of a Feedback Control System

Typical state space model with feedback

A common method for feedback is to multiply the output by a matrix K and set this as the input to the system: u(t) = K y(t). Since the values of K are unrestricted, the values can easily be negated for negative feedback; the presence of a negative sign (the common notation) is merely notational, and its absence has no impact on the end results. Then

x'(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)

becomes

x'(t) = A x(t) + B K y(t)
y(t) = C x(t) + D K y(t)

Solving the output equation for y(t) and substituting in the state equation results in

x'(t) = ( A + B K (I - D K)^(-1) C ) x(t)
y(t) = (I - D K)^(-1) C x(t)

The advantage of this is that the eigenvalues of A can be controlled by setting K appropriately through the eigendecomposition of A + B K (I - D K)^(-1) C. This assumes that the closed-loop system is controllable or that the unstable eigenvalues of A can be made stable through an appropriate choice of K.
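As a small numeric sketch of this idea (with assumed matrices, an assumed gain K, and D = 0 so the closed-loop matrix reduces to A + BKC), the code below compares the open-loop and closed-loop eigenvalues.

```python
import numpy as np

# Assumed open-loop model with one unstable eigenvalue
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])      # eigenvalues: 1 and -2 (one unstable)
B = np.array([[0.0],
              [1.0]])
C = np.eye(2)                    # all states measured (y = x)
K = np.array([[-4.0, -2.0]])     # assumed output-feedback gain, u = K y

A_cl = A + B @ K @ C             # closed-loop state matrix (D = 0 case)
print("open-loop eigenvalues: ", np.linalg.eigvals(A))
print("closed-loop eigenvalues:", np.linalg.eigvals(A_cl))
```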


Example: For a strictly proper system, D equals zero. Another fairly common situation is when all states are outputs, i.e. y = x, which yields C = I, the identity matrix. This would then result in the simpler equations

x'(t) = ( A + B K ) x(t)
y(t) = x(t)

This reduces the necessary eigendecomposition to just A + B K.

Feedback with set point (reference) input

In addition to feedback, an input r(t) can be added such that u(t) = K y(t) + r(t). Then

x'(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)

becomes

x'(t) = A x(t) + B K y(t) + B r(t)
y(t) = C x(t) + D K y(t) + D r(t)

Solving the output equation for y(t) and substituting in the state equation results in

x'(t) = ( A + B K (I - D K)^(-1) C ) x(t) + ( B + B K (I - D K)^(-1) D ) r(t)
y(t) = (I - D K)^(-1) C x(t) + (I - D K)^(-1) D r(t)

One fairly common simplification to this system is removing D, which reduces the equations to

x'(t) = ( A + B K C ) x(t) + B r(t)
y(t) = C x(t)
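Continuing the same assumed numbers as in the previous sketch, and taking C = I and D = 0 as in the simplified equations, the closed-loop system with a reference input can be simulated as an ordinary state space model whose state matrix is A + BKC and whose input matrix is B.

```python
import numpy as np
from scipy import signal

# Same assumed numbers as before, with all states as outputs (C = I, D = 0)
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
C = np.eye(2)
K = np.array([[-4.0, -2.0]])     # assumed feedback gain, u = K y + r

# Closed-loop model driven by the reference: x' = (A + B K C) x + B r, y = C x
A_cl = A + B @ K @ C
sys_cl = signal.StateSpace(A_cl, B, C, np.zeros((2, 1)))

t = np.linspace(0.0, 10.0, 500)
r = np.ones_like(t)              # unit step reference
_, y, _ = signal.lsim(sys_cl, U=r, T=t)
print("states at t = 10 s:", y[-1])  # approaches -inv(A_cl) @ B
```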


STATE SPACE REPRESENTATION FROM TRANSFER FUNCTION OF A PHYSICAL EXAMPLE

Moving object example

A classical linear system is that of the one-dimensional movement of an object. Newton's law of motion for an object moving horizontally on a plane and attached to a wall with a spring is

m y''(t) = u(t) - b y'(t) - k y(t)

where
y(t) is the position, y'(t) is the velocity and y''(t) is the acceleration of the object,
u(t) is the applied force,
b is the viscous friction coefficient,
k is the spring constant,
m is the mass of the object.

The state equation would then become

[ x1'(t) ; x2'(t) ] = [ 0  1 ;  -k/m  -b/m ] [ x1(t) ; x2(t) ] + [ 0 ; 1/m ] u(t)

with the output

y(t) = [ 1  0 ] [ x1(t) ; x2(t) ]

where
x1(t) represents the position of the object,
x2(t) = x1'(t) is the velocity of the object,
x2'(t) is the acceleration of the object,
and the output y(t) is the position of the object.

The controllability test is then

rank [ B  AB ] = rank [ 0  1/m ;  1/m  -b/m^2 ] = 2

which has full rank for all b and m > 0. The observability test is then

rank [ C ; CA ] = rank [ 1  0 ;  0  1 ] = 2

This also has full rank. Therefore, this system is both controllable and observable.
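As a numeric sanity check of these two tests (with assumed values m = 1, b = 0.5, k = 2), the controllability and observability matrices can be built directly, as in the earlier sketch; here they are inlined for self-containment.

```python
import numpy as np

m, b, k = 1.0, 0.5, 2.0          # assumed mass, friction coefficient, spring constant

A = np.array([[0.0, 1.0],
              [-k / m, -b / m]])
B = np.array([[0.0],
              [1.0 / m]])
C = np.array([[1.0, 0.0]])

ctrb = np.hstack([B, A @ B])      # [B, AB]
obsv = np.vstack([C, C @ A])      # [C; CA]

print("rank of controllability matrix:", np.linalg.matrix_rank(ctrb))  # expect 2
print("rank of observability matrix:  ", np.linalg.matrix_rank(obsv))  # expect 2
```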

Nonlinear systems
The more general form of a state space model can be written as two functions:

x'(t) = f( t, x(t), u(t) )
y(t) = h( t, x(t), u(t) )

The first is the state equation and the latter is the output equation. If the function f(., ., .) is a linear combination of states and inputs, then the equations can be written in matrix notation like above. The u(t) argument to the functions can be dropped if the system is unforced (i.e., it has no inputs).

Pendulum example

A classic nonlinear system is a simple unforced pendulum

m l^2 theta''(t) = - m g l sin theta(t) - k l theta'(t)

where
theta(t) is the angle of the pendulum with respect to the direction of gravity,
m is the mass of the pendulum (the pendulum rod's mass is assumed to be zero),
g is the gravitational acceleration,
k is the coefficient of friction at the pivot point,
l is the radius of the pendulum (to the center of gravity of the mass m).

The state equations are then

x1'(t) = x2(t)
x2'(t) = -(g/l) sin x1(t) - (k/(m l)) x2(t)

where
x1(t) = theta(t) is the angle of the pendulum,
x2(t) = x1'(t) is the rotational velocity of the pendulum,
x2'(t) is the rotational acceleration of the pendulum.

Instead, the state equation can be written in the general form

x'(t) = [ x1'(t) ; x2'(t) ] = f( t, x(t) ) = [ x2(t) ; -(g/l) sin x1(t) - (k/(m l)) x2(t) ]

The equilibrium/stationary points of a system are where x'(t) = 0, and so the equilibrium points of the pendulum are those that satisfy sin x1 = 0 and x2 = 0, i.e. x1 = n*pi for integers n.
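A sketch of simulating this nonlinear pendulum with SciPy's ODE solver (parameter values assumed for illustration), showing that a trajectory started near the upright position settles toward the stable equilibrium x1 = 0:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed pendulum parameters
m, g, l, k = 1.0, 9.81, 1.0, 0.5

def pendulum(t, x):
    """Unforced pendulum state equation x' = f(t, x)."""
    x1, x2 = x                      # angle, angular velocity
    return [x2, -(g / l) * np.sin(x1) - (k / (m * l)) * x2]

# Start near the upright (unstable) equilibrium x1 = pi and let it fall
sol = solve_ivp(pendulum, (0.0, 30.0), [np.pi - 0.01, 0.0],
                t_eval=np.linspace(0.0, 30.0, 300))

print("final angle (rad):", sol.y[0, -1])        # approaches 0, the stable equilibrium
print("final angular velocity:", sol.y[1, -1])   # approaches 0
```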


REFERENCES

Books:
Linear Control Systems, by B. S. Manke.
Control Engineering, by M. N. Bandyopadhyay.
Modern Control System, by Dr. V. Sakarnarayanan.

Websites:
www.wikipedia.com
www.ece.rutgers.edu
http://reference.wolfram.com
http://www.samson.de
