
Lec 1: Introduction to Process Control

Learning objectives: To introduce the student to process control, the need for it and its
applications.
Learning outcomes: On completion of this topic, students should be able to describe a
simple control system, types of controllers and parameters in a control system.
Process Control is the study of automatic control principles applied to chemical
processes. It applies principles of mathematics and engineering science to the regulation
of the dynamic operation of process systems. To be successful, you need strong applied
mathematics skills and process understanding (most of which is just common sense).
The skills and tasks you've been exploring in the first three years of ChE classes are
predominantly analytical. They are used for diagnosis and understanding of processes
and problems. This year, your design classes will work on synthesis skills for devising
new processes. In Process Control, we will use some analytical tools (old and new) and
synthetic skills to understand the dynamic (time dependent) behavior of processes and
ways to regulate plant operation.
Since the primary function of control systems is to compensate for dynamic changes in
process systems, we need to understand the dynamics of processes -- how their behavior
changes with time -- if we are to develop workable solutions. We address this need
through dynamic modeling of the chemical processes. Mathematically, this means we
will be dealing with differential equations.
Note on course organization
Control courses can be difficult for an instructor to organize. There are often multiple
ways of approaching concepts, each with its own "dialect" of terminology and equations.
Topics often wrap back around, so books and instructors sometimes have a tendency to
use terms and ideas before they are fully defined. So let me know if this is happening in
this course.
Chemical Process Control
The same basic control methods, principles, and tools apply whether the "process" is
chemical, electrical, or mechanical. Control theory has been developed by ChEs, EEs,
and MEs, so the terminology reflects concepts from all three disciplines (as well as
mathematical systems and optimization theory).
Differences in the application are what separate ChE control from other practitioners.
Chemical process systems are distinguished by:
- longer time constants (minutes for a HX, hours for many columns)
- long transportation lags or "dead time" (minutes)
- nonlinearities (reaction kinetics)
- distributed parameters (coupled material and energy balances)
Why is control necessary?
Process plants do not operate at steady state, no matter what you may have assumed in
other classes.
Consider what might happen to a distillation column operating in a plant:
- market price of feedstocks and products vary, so sources and suppliers change;
the 50% feed you are using today may be 45% tomorrow.
- ambient temperature changes continuously; this changes cooling water
temperatures, which change condenser duties, which in turn change column operating
pressure and overhead flows.
- steam system supply pressure varies as sources and users are switched on and off;
this causes the reboiler temperature and load to vary, changing the flows at the
base of the column.
On the buyer's side is a desire to get the same material every time, often backed by a
demand for statistical proof of minimal variance. Since plant behavior is variable, but your
customers won't accept variation, good control of the system is needed, enabling the
plant to run at predictable, regulated conditions.












Putting all this together, the two main functions of control systems are:
1. setpoint tracking -- the ability to shift from one desired operating point to another
(like you driving your car)
2. disturbance rejection -- the ability to maintain an operating point despite
fluctuating conditions and external forces (like your thermostat)
Disturbances can never be completely eliminated; however, a good control system can
greatly attenuate their consequences and reduce the variability of process parameters. If
we can reduce variability, we need smaller margins of error and contingency allowances,
and so can operate much closer to optimum conditions, reducing waste and saving
money.
Safety Systems
No feedback control loop, no matter how well-designed and tuned, can guarantee safe
operation. Consequently, a regulatory process control system cannot be trusted as the
primary safety system. Almost all chemical plants have a second, parallel control system
to handle safety alarms and shutdown. While we will always consider the safety aspects
of control systems, we will not study the design of these alarm/shutdown systems.
Control Objectives
The objectives of a control system fit into a hierarchy -- that is, some objectives are given
priority over others. One way of ordering the hierarchy is by the purpose of the control
system components:
1. Safety
2. Environmental Protection
3. Equipment Protection
4. Smooth Plant Operation
5. Product Quality
6. Profit Optimization
7. Monitoring and Diagnosis
According to this structure, control loops responsible for safety-related tasks will always
have priority over all other tasks; loops for product quality will have priority over loops
whose primary task is optimization; and so forth. Most of the techniques we study in this
course will apply directly to the operating and quality objectives.
Be aware that a loop can serve more than one purpose and that its place in the hierarchy
is not always cut-and-dried.
One of the themes of our study this semester will be the tradeoffs between plant design
and plant operation. Control systems are part of the day to day operation of a plant. This
suggests another way of ordering the hierarchy of control objectives: "achievability".
After all, until your plant is operating, controls aren't needed at all. This sort of hierarchy
tends to group loops by function as much as it does by objective:
1. Production Rate & Inventory Controls
2. Safety/Environmental Controls
3. Equipment and Operating Constraint Controls
4. Product Quality Controls
5. Optimization
Feedback Control Loops
Imagine taking a shower from a two-knob faucet. You want to set the rate of water flow
and its temperature so that the shower is effective and comfortable. You can control the
hot water flow and the cold water flow separately. Throughout your shower, there may be
"disturbances" in the form of changes in water supply temperature and pressure.
So how do you regulate your shower?
1. You test the flow and temperature
2. Decide if it is OK
3. If not OK, decide what adjustment to make
4. Adjust the knobs
5. Repeat from the first step
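The five steps above can be sketched as a discrete feedback loop. The sketch below is only a minimal illustration, not a real controller design; the numbers (supply temperature, knob gain, controller gain Kc) are hypothetical:

```python
# A minimal sketch of the shower feedback loop: the mixed-water temperature
# is assumed to respond linearly to the hot-water knob position, and "you"
# act as a proportional controller. All numbers are illustrative.

def shower_temperature(knob, supply_temp=15.0, gain=40.0):
    """Hypothetical process: mixed-water temperature vs. hot knob position (0..1)."""
    return supply_temp + gain * knob

def feedback_loop(setpoint=38.0, steps=20, Kc=0.01):
    knob = 0.0
    for _ in range(steps):
        measured = shower_temperature(knob)   # 1. test the temperature
        error = setpoint - measured           # 2. decide if it is OK
        knob += Kc * error                    # 3-4. decide the adjustment and turn the knob
        knob = min(max(knob, 0.0), 1.0)       # knobs have physical limits
    return shower_temperature(knob)

print(feedback_loop())
```

Because the adjustment opposes the error, this is negative feedback: the measured temperature is driven toward the setpoint.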

This is the structure of a basic feedback control loop. Information from the process (water
flow and temperature) is "fed back" by a measurement device (you) through a controller
(you again) to a control element (the knobs) to change the process input.
All control systems have these same basic components:
- measurements of variables affecting the system
- a specified desired value or range of values for the controlled variable (the
setpoint).
- a control calculation or algorithm
- a way of adjusting the system to reflect the results of the control calculation (the
control element).
The system can be represented by a block diagram where lines are used to represent
variables or signals and boxes for actions.

Block diagrams represent the logic and mathematical model of a control loop. We might
also choose to represent the equipment used to construct the loop in a piping and
instrument diagram or P&ID. Complete P&IDs show every piece of equipment, wiring,
etc., that need to be installed, and so can be very complex. For our purposes, we just want
to see the main pieces, so a simplified drawing is used.

The loop shown consists of:
- a sensor or transducer element which measures some output process variable.
- a transmitter which converts the sensor output to a signal suitable for transmission
through the plant
- a controller which compares the transmitted measurement to a setpoint value and
then determines what control action is required.
- a final control element (usually a control valve) that transforms the controller
output signal into a change in a manipulated variable.
These are the instruments that are used to control the process.
Table 1.1 of your textbook has a handy list of commonly used symbols for these
diagrams.
Control System Design
When an engineer sets out to design a control system, the steps are:
1. determine control objectives
2. identify measurable variables, available manipulators
3. pair variables (choose controller structure)
4. select controller algorithms
5. tune controller (adjust sensitivity)
In this class, we will deal mainly with the tools and concepts needed to model the
process, examine the stability, and select and tune controllers.
Terminology
Steady State: A steady state system does not change with time. Mathematically, this
means the time derivatives in the balance equations (the accumulation terms) are zero.
Often, systems will reach steady state if given a long time to settle -- usually, real systems
don't get the time. This leads to another mathematical approximation -- steady state is the
behavior of the system as time approaches infinity. Some people use the words static or
stationary as synonyms for steady state.
Dynamic (or transient) systems are time dependent. All real systems are dynamic; this
makes process control necessary. Dynamic systems must be modeled using differential
equations, unlike steady state systems where algebraic systems will suffice.
Inputs and Outputs are not necessarily material flows. An input is a variable that causes
an output to change. Both inputs and outputs may be measurable or they may not.
Disturbances are inputs that cannot be adjusted, and often they are not measurable.
Error is the difference between the measured behavior of a process output and its desired
behavior or setpoint. Never forget that the measured values of the outputs are only
representations of the real values, and may be limited in accuracy.
Feedback Control: information from an output of a system is used to adjust a manipulator
to change an input to the system to try and compensate for disturbances after they have
changed the system.
Feedforward Control: information from measured disturbances is used to adjust a
manipulator to try and compensate for disturbances as they occur. Feedforward allows for
the possibility of "perfect control", but only if all disturbances are measured and the
adjustments are fully understood. This means you must have a complete and very
accurate model of the process -- not an easy achievement. Feedback control adjusts for all
disturbances and does not require an exact process model.
Negative feedback reduces the difference between the actual and desired values, so it is
beneficial. Positive feedback increases the difference, so it is undesired.
When a system is operating without control, we say it is operating Open Loop. A Closed
Loop system has controllers on-line.
One of the most important things we will be watching is the stability of the system. The
error of an unstable system becomes larger and larger (unbounded) with time, often
leading to undesirable consequences.

Control Loop Hardware
A control loop is built from mechanical and electrical devices. These usually include
- a sensor
- a transmitter
- a controller
- an actuator, and
- a final control element
Riggs (2001) lumps some of these into subsystems. He calls the sensor and transmitter
the "sensor system" and the actuator and control element the "actuator system".
The controller will usually be located in a control room; typically, it exists as software
within a Distributed Control System (DCS) computer. The other parts are physical
equipment built into or adjacent to the process equipment.
Information is passed around the control loop in the form of signals. These may be
analog or digital, electrical or pneumatic. Converters or transducers transform signals
from one type to another.
Lec 2: Process Diagrams
Block diagrams
At an early stage or to provide an overview of a complex process or plant, a drawing is
made with rectangular blocks to represent individual processes or groups of operations,
together with quantities and other pertinent properties of key streams between the blocks
and into and from the process as a whole. Such block flowsheets are made at the
beginning of a process design for orientation purposes or later as a summary of the
material balance of the process.



Process flow diagrams (and to some extent P&IDs)
Process flowsheets embody the material and energy balances between and the sizing of
the major equipment of the plant. They
include all vessels such as reactors, separators, and drums; special processing equipment,
heat exchangers, pumps, and so on. Numerical data include flow quantities,
compositions, pressures, temperatures, and so on. Inclusion of major instrumentation that
is essential to process control and to complete understanding of the flowsheet without
reference to other information is required particularly during the early stages of a job,
since the process flowsheet is drawn first and is for some time the only diagram
representing the process. As the design develops and a mechanical flowsheet gets
underway, instrumentation may be taken off the process diagram to reduce the clutter. A
checklist of the information that usually is included on a process flowsheet is given
together with an example of a PFD.

Essential components:
- Process lines, including only those bypasses essential to an understanding of the process
- All process equipment (spares are indicated by letter symbols or notes)
- Major instrumentation essential to process control and to understanding of the diagram
- Valves essential to an understanding of the diagram
- Design basis, including stream factor

Essential information:
- Stream number
- Temperature (C)
- Pressure (bar)
- Vapor fraction
- Total mass flow rate (kg/h)
- Total mole flow rate (kmol/h)
- Individual component flow rates (kmol/h)




Figure: Example of a Process flow diagram





Figures: Examples of P&IDs
Drawing of diagrams
Flowsheets are intended to represent and explain processes. To make them easy to
understand, they are constructed with a consistent set of symbols for equipment, piping,
and operating conditions. At present there is no generally accepted industrywide body of
drafting standards, although every large engineering office does have its internal
standards.
Equipment symbols are a compromise between a schematic representation of the
equipment and simplicity and ease of drawing.
Common symbols:

Since a symbol does not usually speak entirely for itself but also carries a name and a
letter-number identification, the flowsheet can be made clear even with the roughest of
equipment symbols. The letter-number designation consists of a letter or combination to
designate the class of the equipment and a number to distinguish it from others of the
same class, e.g. two heat exchangers designated E-112 and E-215. Examples of letters used:


Chapter 3: Mathematical models

Learning objectives: To introduce the student to the use of balance equations in process
control and how it renders transfer functions.
Learning outcomes: Students should be able to model simple systems and express them in
the form of block diagrams.

Introduction
Very often engineers use "models" of a process to aid understanding. A model can be a
description, a picture or physical model, or a mathematical or statistical construct that
emulates the behavior of the real, physical system, although often in an idealized way.

Or, as the adage (commonly attributed to George Box) goes: "All models are wrong, but some are useful."

The degree of complexity of a model is linked to decisions made in the modeling process.
Sometimes it is desirable to start with a fundamental or first principles model -- modeling
equations are developed starting from the material and energy balances, chemical and
physical laws.

The steps in developing a fundamental model are:
- Preparation
o Decide what kind of model is needed. What scale? How detailed? How
accurate? What features cannot be neglected?
o Define and sketch the system.
o Select variables
- Model Development
o Write balance equations (mass, component, energy) to describe the
system.
o Write (descriptive) constitutive equations (transport, equilibrium, kinetic)
needed to implement the balance equations.
o Check for consistency of units, independence of equations, degrees of
freedom
- Solution (Simulation)
o Solve the equations (analytically or numerically)
o Check and verify the solution

This type of model will emerge as a system of differential balance equations (ordinary or
partial) accompanied by a set of algebraic constitutive equations. Depending on the
intended use, the model can be adapted in several ways
- made steady-state (time derivatives approach zero)
- linearized
o differential equation form
o state-space form
o transformed to transfer function form

When fundamental models are not possible.
In other cases, the fundamental behavior of a process is poorly understood or
prohibitively complex to model based on first principles. In these cases, models may be
developed from experimental dynamic data. Developing a model from experimental data
is often called process identification. Identification techniques can be very simple
(process reaction curve analysis of step inputs) or complex (ARIMA modeling of PRBS
inputs). Essentially, these approaches "curve fit" the data to produce what are sometimes
called "Input/Output Models" or "Black Box" models.

The choice of a model type depends on the scale of the problem. You probably don't want
to model molecular dynamics unless they make a difference!

Basic Modeling Equations
The dynamic models used for process modeling and control can be mathematically
represented by a set of balance equations (conservation equations). These may be
supplemented by one or more constitutive equations that further define terms in the
balance equations.
Every dynamic model will include at least one balance equation. The balance equations
will have the same general form you've been using for several years now:
Accumulation = Transport + Generation
= [In − Out] + [Production − Consumption]

The accumulation term will supply the time derivative and produce a differential
equation. If the model is distributed, the equation will also include position derivatives.
Transport terms will typically be initially written in terms of the transport fluxes.
Material Balance
A mass balance is needed whenever you are interested in the "holdup" of a system.
Holdup is typically measured using level for liquid systems or pressure for gas/vapor
systems. You should expect your mass balance equation to have either or both of these
variables inside the accumulation term. Unless you are dealing with a nuclear reaction,
the mass balance will not have a generation term.
A mass balance may be written over each system or subsystem that you can define within
your process.
Constitutive equations may be needed to define system properties such as density in
terms of composition, temperature, pressure, etc.
Exercise: Write the mass balance for a mixing tank, using V, the volume of the tank,
and q_i and q_o, the volumetric flow rates in and out, respectively.
________________________________________
Exercise: Get together in groups and state common assumptions used in writing and
simplifying mass balances.
____________________
____________________
____________________

Component Balance
A component balance must be written whenever composition changes are to be
examined.
Almost all _________ or _________ problems will involve a component balance.
Compositions are usually expressed in terms of mole fractions.
You can write one balance for each component over each subsystem, but remember that
the sum of all component balances is the total material balance, so normally you will use
one total mass balance and (N_components − 1) component balances.
Initially, transport terms in a component balance will be expressed in terms of the
transport fluxes (both molecular and convective). Constitutive equations defining the
fluxes will thus be needed. When reactors are modeled, generation terms will be required
and will be written in terms of reaction rate expressions. These will also need to be
defined by a constitutive equation.
The generation term is commonly expressed through the rate of reaction, r_A
[kmol/(m³·s)].
As we can see from the units, to determine the generation term we have to multiply the
rate of reaction by the __________________.
Exercise: Write the component balances for a CSTR with an A-->B reaction:
___________________________________________
___________________________________________

Common assumptions include:
___________________________________
___________________________________
Energy Balance
You will need to write an energy balance whenever the __________ within your system
changes; this quantity will almost always appear inside the derivative. Reference temperatures for
enthalpies can complicate things, so be careful.
One energy balance can be written for each separable system or subsystem.
Energy transport fluxes and thermodynamic property relations will require constitutive
equations for fuller definition.
The enthalpy term for a flow system contains the properties flow rate, ________, _________,
and ___________.
Exercise: Write the energy balance for a heated tank, which has a heater supplying a heat-transfer
rate q.

________________________________________________________________
Exercise: What simplifications would commonly be helpful?
___________________________
___________________________
___________________________
Constitutive Equations
All models will include one or more balance equations. Most will also use a set of
constitutive equations to better define specific terms in the balance equations. Common
constitutive relationships include:
- property relations and equations of state
- transport flux relations
- reaction rate expressions
- equilibrium expressions
- fluid flow relations
Property Relations / Equations of State
Physical and thermodynamic properties (density, heat capacity, enthalpy, etc.) vary with
temperature, pressure, and composition. These relationships usually must be incorporated
into dynamic models.
Enthalpy is typically expressed as a function of temperature ____________________
Equations of state are typically used to express vapor densities in terms of system
temperature and pressure. Often, the ideal gas equation is adequate.


Transport Flux Expressions
Transport flux expressions are usually used to quantify heat and mass transfer. When
transport is purely molecular, these are nothing more than statements of Fick's Law,
Fourier's Law, and Newton's Law of Viscosity. They are expressed as:
N_A = k_c (C_A,1 − C_A,2)
Q = h A (T_1 − T_2)
Reaction Rate Expressions
The reaction rate expressions used in dynamic modeling are typically based on the
principles of mass action. The Arrhenius expression must be incorporated directly when
rate constants depend on temperature; otherwise, the energy balance won't adequately
describe temperature changes.
r_A = k_0 e^(−E/RT) C_A^n

or for constant temperature _____________________________
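As a quick illustration of the temperature dependence, the Arrhenius expression can be evaluated numerically. All numerical values here (k0, E, n, C_A) are illustrative assumptions, not data from these notes:

```python
import math

# Sketch: evaluating an Arrhenius rate expression r_A = k0 * exp(-E/(R*T)) * C_A**n.
# The parameter values are purely illustrative.
R = 8.314  # gas constant, J/(mol K)

def rate(T, C_A, k0=1.0e7, E=5.0e4, n=1):
    """Arrhenius rate; units of k0 set the units of r_A."""
    k = k0 * math.exp(-E / (R * T))
    return k * C_A**n

# With E = 50 kJ/mol, the rate roughly doubles for a 10 K rise near 300 K:
print(rate(310.0, 1.0) / rate(300.0, 1.0))
```

This strong sensitivity of k to T is why the Arrhenius expression must appear inside the energy-balance coupling rather than being treated as a constant.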

Equilibrium Expressions
Phase equilibrium expressions are often needed when modeling separation systems.
Raoult's law, ______________, equilibrium K-values, _____________, and relative
volatility, _____________, are all used. The choice is the modeler's.
Chemical equilibrium expressions are needed less often. If they are needed, they are
usually incorporated as part of the reaction rate expression.
Fluid Flow Relations
Fluid flow relationships are typically used when it is necessary to relate pressure drop to
flow rate. These usually take the form of a momentum balance (equation of motion) or
mechanical energy balance. Momentum balances are typically required for gravity flow
problems (where the balance may reduce to Torricelli's Law).
For systems involving flow through a weir or across a valve, a reduced form of the
mechanical energy balance is often used. Rather than deriving these from the balance, it
is usually reasonable to select an appropriate equation, for instance






Though it is not uncommon to have to use the full Bernoulli equation and the continuity
relation.
Example 3.1:

Flow rates in and out, no reaction: a simple material balance.

Bernoulli's law:         q(t) = k √(h(t))
Valve characteristics:   q(t) = k′ √(Δp(t))

Material balance: dm/dt = m_in − m_out

or, in the variables in the flowchart: d(ρV)/dt = q_in ρ_in − q_out ρ_out

Assume constant density for all streams: dV/dt = q_in − q_out
Bernoulli's equation:

p_1 + ρ g h_1 + ½ ρ v_1² = p_2 + ρ g h_2 + ½ ρ v_2²

Atmospheric pressure at both ends: p_1 = p_2
Substitute h = h_1 − h_2
The liquid surface is moving very slowly: v_1 ≈ 0
Hence, we get just two terms: ρ g h = ½ ρ v_2²
With the relation between flow rate and velocity, q = a v, we get a description of q_out:
q_out = a v_2 = a √(2 g h)
And finally we get the equation describing the system:

dh/dt = (1/A) q_in − (a/A) √(2 g h)
Example 3.2:

Component balance:

d(V c)/dt = q c_in − q c_out

It will take a certain amount of time for the flow to get from the tank to the point
where the fluid exits the pipe, which we have to take into account. So the concentration
at the outlet is the concentration in the tank a certain amount of time earlier.
From physics we know s = v t, so the dead time is T_d = l/v.
We also know that v = q/a.
Hence T_d = l a/q,
and we finally get:

V dc/dt = q c_in − q c(t − l a/q)

From this we can see that the goal is to describe at least one of the variables on the left-hand
side in terms of the variable that appears inside the differentiation. So in Example 3.1,
q_out is described as a function of h, and here c_out is described as c.
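The dead-time model of Example 3.2 can be checked numerically by storing past concentrations in a buffer. This is only a sketch with assumed parameter values (V, q, Td, c_in); after a step in the inlet concentration, the tank should eventually reach c = c_in:

```python
import collections

# Euler integration of V dc/dt = q*c_in - q*c(t - Td), as in Example 3.2.
# All parameter values are illustrative assumptions.
def simulate_mixing(V=1.0, q=0.1, Td=5.0, c_in=1.0, dt=0.01, t_end=200.0):
    c = 0.0
    n_delay = int(Td / dt)
    # Buffer of past tank concentrations; history[0] is the value Td seconds ago.
    history = collections.deque([0.0] * n_delay, maxlen=n_delay)
    for _ in range(int(t_end / dt)):
        c_delayed = history[0]
        history.append(c)
        c += dt * (q * c_in - q * c_delayed) / V
    return c

print(simulate_mixing())
```

The buffer is the numerical analogue of the term c(t − T_d): delay differential equations need the solution's history, not just its current value.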

Bernoulli's equation (as used in Example 3.1):

p_1 + ρ g h_1 + ½ ρ v_1² = p_2 + ρ g h_2 + ½ ρ v_2²

and the resulting model equation:

dh/dt = −(a/A) √(2 g h) + (1/A) q
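The gravity-drained-tank model of Example 3.1, dh/dt = q/A − (a/A)√(2gh), can be integrated numerically as a rough check; the parameter values below are illustrative assumptions. At steady state q = a√(2gh), so h* = (q/a)²/(2g):

```python
import math

def simulate_level(q=0.01, A=1.0, a=0.005, g=9.81, h0=0.0, dt=0.01, t_end=2000.0):
    """Explicit Euler integration of dh/dt = q/A - (a/A)*sqrt(2*g*h)."""
    h = h0
    for _ in range(int(t_end / dt)):
        h += dt * (q / A - (a / A) * math.sqrt(2 * g * max(h, 0.0)))
    return h

# Analytical steady state for comparison: h* = (q/a)^2 / (2*g)
h_ss = (0.01 / 0.005) ** 2 / (2 * 9.81)
print(simulate_level(), h_ss)
```

The simulated level settles at the analytical steady state, which is a useful sanity check on both the model and the derivation above.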
Exercise:

Lec 4: Laplace transform
Learning objectives: In this topic, Laplace transforms are introduced, and their
use in process control and in solving differential equations is discussed.
Learning outcomes: At the end of the lesson, students must be able to carry
out Laplace and inverse Laplace transforms using tables, use the Laplace
transform to solve simple differential equations, and convert a control
equation into one using deviation variables.
Introduction
A mathematical transform takes an expression in one mathematical "language" and
converts it to another. If done correctly, a transform doesn't change the meaning of the
expression, but may make it easier to interpret, as with a unit conversion. We usually
choose to use transforms to gain insight or simplify a problem.
The Laplace transform is commonly used in process modeling and control. The
transform is given by

F(s) = L{f(t)} = ∫_0^∞ f(t) e^(−st) dt

The transform produces several changes in the equation:
Time domain                            Laplace domain
Variable is t (time)                   Variable is s (dimensions of inverse time)
t is a real number                     s is a complex number
Solutions in the "time domain"         Solutions in the "Laplace domain"
Differential equations                 Algebraic equations
The last point is the biggest single reason the Laplace transform is valued -- it transforms
linear differential equations into algebraic equations, which many people find easier to
solve.
As an example, we'll apply the definition of the Laplace transform to the unit step
function. The step function is very important in control modeling. It is given by:
u_a(t) = 0 for t < a
u_a(t) = 1 for t > a

Most of the time, the "trigger" time (the value a) is set to zero, and the unit step function
defines what we typically think of as a constant (that exists from "the beginning of
time"!):

f(t) = K u(t)
Applying the definition of Laplace transform to this function gives:
F(s) = L{f(t)} = L{K u(t)} = ∫_0^∞ K u(t) e^(−st) dt = K ∫_0^∞ e^(−st) dt
     = K [−e^(−st)/s]_0^∞ = K (0 − (−1/s)) = K/s
You can always use the definition to find the Laplace transform of a function, but that
usually is more trouble than it is worth. First of all, you can probably work the vast
majority of control problems with a set of about 4 transforms -- so you'll probably end up
memorizing those.
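If sympy is available, the step-function transform can be verified symbolically; this is a check I am adding, not part of the original notes:

```python
import sympy as sp

# L{K*u(t)} should equal K/s, as derived above.
t = sp.symbols('t')
s, K = sp.symbols('s K', positive=True)
F = sp.laplace_transform(K * sp.Heaviside(t), t, s, noconds=True)
print(F)
```

The same call can be pointed at any f(t) you are unsure about, which is handy when a function is not in your memorized table.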
Properties of the Laplace Transform
The Laplace transform is a "linear operator", so we have that
L{f(t)+g(t)}=L{f(t)}+ L{g(t)}=F(s)+ G(s)
L{cf(t)}= cL{f(t)}= cF(s)
Derivatives
Derivatives also produce a very nice result:
L{dx/dt} = s X(s) − x(0)    or    L{df(t)/dt} = s F(s) − f(0)
This is the key result that causes time domain differential equations to transform to
Laplace domain algebraic equations. In the Laplace domain, multiplication by s is
equivalent to differentiation in the time domain. Similarly, Laplace domain division by s
can be shown to be equivalent to integration in the time domain.
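The derivative property can likewise be spot-checked with sympy on a concrete function; here f(t) = e^(−2t) is an arbitrary choice of mine:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.exp(-2 * t)
# Left side: transform of the derivative; right side: s*F(s) - f(0).
lhs = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)
F = sp.laplace_transform(f, t, s, noconds=True)
rhs = s * F - f.subs(t, 0)
print(sp.simplify(lhs - rhs))  # 0 confirms L{df/dt} = s F(s) - f(0)
```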

This can be extended to second-order derivatives:

L{d²x/dt²} = L{d/dt (dx/dt)} = s L{dx/dt} − x′(0) = s [s X(s) − x(0)] − x′(0)
           = s² X(s) − s x(0) − x′(0)
And for third order:

L{d³x/dt³} = s³ X(s) − s² x(0) − s x′(0) − x″(0)

And so on.


Initial Value and Final Value Theorems
There are two very useful theorems involving Laplace transforms, which basically
enable you to find out where your system is heading (the new steady state for a
particular system) or where the system started from (the original steady state), without
actually having to determine the function f(t) from the ODE.
Final Value Theorem

lim (t→∞) f(t) = lim (s→0) s F(s)

which means that the final value of f(t) is the limit of s F(s) as s approaches zero.
Initial Value Theorem

lim (t→0) f(t) = lim (s→∞) s F(s)

which means that the initial value of f(t) is the limit of s F(s) as s approaches infinity.
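Both theorems are easy to check with sympy limits. As an illustration (my own example, not from the notes), take F(s) = 1/(s(s+1)), whose time-domain function is f(t) = 1 − e^(−t), so f(0) = 0 and f(∞) = 1:

```python
import sympy as sp

s = sp.symbols('s')
F = 1 / (s * (s + 1))
final = sp.limit(s * F, s, 0)        # final value theorem: lim_{s->0} s F(s)
initial = sp.limit(s * F, s, sp.oo)  # initial value theorem: lim_{s->oo} s F(s)
print(final, initial)
```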

Exercise: You have a flow system as in Example 3.1, but we assume that q_out can be
modeled as kh, which gives the following balance equation: A dh/dt = q − kh. The values of A
and k can be taken to be 5 m² and 1 m²/s, respectively.
1. Carry out the Laplace transform of the balance equation to determine the function H(s).
2. For q = 0.5 m³/s (which gives Q(s) = 0.5/s), use the final value theorem to determine the
final steady-state level of the tank.
Solving ODEs with Laplace Transforms
The major reason people bother with Laplace transforms is that they can make it easier to
obtain analytical solutions of many linear ordinary differential equations. The procedure
can be described in diagram form as:

Example 4.1:
Consider the ODE and initial condition:

dx/dt + 3x = 0,   x(0) = 2

We start by transforming the equation:

L{dx/dt} = s X(s) − x(0) = s X − 2
L{3x} = 3 X
L{0} = 0

Then we rearrange the equation to find X:

s X − 2 + 3 X = 0   →   (s + 3) X = 2   →   X = 2/(s + 3)

Now, all that is necessary is to invert the Laplace transform to find the time domain
solution. Look in the table for a useful transform:

L{e^(−at)} = 1/(s + a)

and then use this to invert the solution:

x(t) = L^(−1){2/(s + 3)} = 2 L^(−1){1/(s + 3)} = 2 e^(−3t)
Using this approach, many linear ODEs can be solved with a little algebra and a table of
Laplace transforms.
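As a cross-check on Example 4.1, sympy's ODE solver gives the same answer as the Laplace-transform route, x(t) = 2e^(−3t):

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')
# dx/dt + 3x = 0 with x(0) = 2, as in Example 4.1
sol = sp.dsolve(sp.Eq(x(t).diff(t) + 3 * x(t), 0), x(t), ics={x(0): 2})
print(sol.rhs)
```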
Partial Fractions Expansion
Not every solution is likely to be found in a table, e.g.:

Y(s) = (b_m s^m + … + b_1 s + b_0) / (a_n s^n + … + a_1 s + a_0) · X(s)

But using the properties of the transform, you can break such problems up into smaller,
more familiar ones. The first step is to factor the denominator:

Y(s) = (b_m s^m + … + b_1 s + b_0) / [a_n (s − r_1)(s − r_2)…(s − r_n)] · X(s)

Followed by partial fractioning:

Y(s) = A_1/(s − r_1) + A_2/(s − r_2) + … + A_n/(s − r_n) + [terms of X(s)]

Any fraction with a polynomial denominator can be expressed as the sum of terms with
first-order denominators. Inverse Laplace transforming every single term then gives:

y(t) = A_1 e^(r_1 t) + A_2 e^(r_2 t) + … + A_n e^(r_n t) + (terms of x(t))

We can note that if the roots are real and r_i > 0 we get exponential growth of that term, and
if r_i < 0 we get an exponential decrease. If we have complex roots, they come
in complex conjugate pairs, r_{i,i+1} = ε ± iω, which leads to a solution of the
form:

y(t) = B e^(εt) cos(ωt) + C e^(εt) sin(ωt) = e^(εt) (B cos(ωt) + C sin(ωt))

This highlights that the imaginary part of the root just affects oscillations and the real part
affects the exponential behavior of the system, giving exponential growth for a positive
real part and exponential decrease for a negative real part, just as for the real case.
Example 4.2:

X = 1/(s(s + 1)) = c_1/s + c_2/(s + 1)
Once expanded, the terms look pretty simple to work with. All that is needed is to find
values for the constants c_1 and c_2. The steps are:
1. multiply through by one of the denominator factors
2. set s = r_1; all the constants but one will vanish
3. repeat for the other factors
Multiplying through by s:
	1/(s + 1) = c_1 + c_2 s/(s + 1);   setting s = 0 gives c_1 = 1/(0 + 1) = 1
Multiplying through by (s + 1):
	1/s = c_1 (s + 1)/s + c_2;   setting s = -1 gives c_2 = 1/(-1) = -1
(All but one of the constants vanishes each time, so it isn't really even necessary to write
those terms out.)
And now the inversion is easy:
	f(t) = L^-1{1/(s(s + 1))} = L^-1{1/s - 1/(s + 1)}
	     = L^-1{1/s} - L^-1{1/(s + 1)} = 1 - e^(-t)

Exercise 4.1:
The method becomes more complicated when roots are repeated. You must always have
one term, and one unknown constant, for each root. So for repeated roots you get:
	1/(s(s + 1)^2) = c_1/s + c_2/(s + 1)^2 + c_3/(s + 1)
For c_1 and c_2 we go on as before: multiply by the denominator under the constant and
insert the root in place of s to get: c_1 = ___________________ and
c_2 = ___________________.
However, if you try to do the same for c_3 you will end up with infinities. What you have
to do instead is multiply by the highest power present of the root connected to c_3, in this
case (s + 1)^2, which gives:
	1/s = c_1 (s + 1)^2/s + c_2 + c_3 (s + 1)
and then differentiate with regard to s to get:
	-1/s^2 = c_1 (2(s + 1)/s - (s + 1)^2/s^2) + c_3
After which we can insert s = -1 and continue to get c_3 = ___________________.
This means we have got it properly fractioned and can carry out the inverse Laplace
transform to get f(t) = ____________________________________.

Exercise 4.2:
What would be the procedure if there was a root of third order instead?

Two last comments:
- partial fractions expansion works even if the denominator factors are complex
numbers, it just makes the algebra a little trickier
- if you only care about the "shape" of the solution, not the relative weightings, you
may not need to evaluate the constants.
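The cover-up step above (multiply by a factor, substitute the root) can be sketched in a few lines of Python for the case of distinct real roots; the helper name `coverup` is my own, not a standard function:

```python
def coverup(num, roots):
    """Residues A_i in num(s)/prod(s - r_i) = sum A_i/(s - r_i), distinct roots only."""
    residues = []
    for i, r in enumerate(roots):
        denom = 1.0
        for j, q in enumerate(roots):
            if j != i:
                denom *= (r - q)   # the remaining factors evaluated at s = r
        residues.append(num(r) / denom)
    return residues

# X(s) = 1/(s(s + 1)): roots at 0 and -1
print(coverup(lambda s: 1.0, [0.0, -1.0]))  # [1.0, -1.0], i.e. X = 1/s - 1/(s + 1)
```

Inverting term by term then reproduces f(t) = 1 - e^(-t) from the example.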
Deviation Variables
Most real process variables are functions of time. Typically, if the system is affected
by a disturbance, the values will fluctuate around a value, sometimes slightly higher,
sometimes lower, but in the end it will commonly settle in on that value, which would be
a new steady state. When we are controlling a system, we want to make small
compensatory changes to flows, etc., to try and pull the process back to set point. It
makes more sense to calculate the change needed rather than calculating the new valve
position from scratch every time. This is made easier if we keep track of how a variable
differs from its steady-state value instead of tracking its total value.
We can define any variable x as the sum of two parts: the average or steady-state value
(x_ss) and the deviation or perturbation from that value (written Δx, x' or X):
	x(t) = x_ss(t) + Δx(t)   or rearranged   Δx(t) = x(t) - x_ss(t)
In control system analysis, we are typically more interested in the deviation variable
(a.k.a. perturbation variable), so it is common practice to rewrite systems in terms of
deviation variables.
Many analysis techniques (such as Laplace transforms) are limited to linear systems.
Linear systems have certain big advantages when using perturbation variables:
- constant terms in many ODEs vanish
- if we use perturbation variables and linearize around the steady-state, initial
conditions are zero
These may be best illustrated by examples.

Example:
Take the equation
	dx/dt + x = c + m(t)
into deviation variables. Begin by noticing that at steady state we get:
	0 + x_ss = c + m_ss   or   x_ss = c + m_ss
What we do next is to subtract the steady state equation from the ODE:
	dx/dt + (x - x_ss) = (c - c) + (m(t) - m_ss)
And then substitute the deviation variable for any difference between the actual variable
and its steady state value:
	dx/dt + Δx = Δm(t)
The problem here is that the differentiation doesn't contain the deviation variable, but if
we differentiate the deviation variable we get:
	dΔx/dt = d(x - x_ss)/dt = dx/dt - dx_ss/dt = dx/dt
because the steady state value is a constant and hence has a derivative that is zero. So
we can write the ODE with all deviation variables:
	dΔx/dt + Δx = Δm(t)

The additive constant "biasing" terms have vanished. Also notice that using deviation
variables means that we do not need a value for the steady-state positions of x and m to
solve the equations; rather, they naturally become zero.
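The claim can be illustrated numerically: simulating the original ODE and the deviation-variable ODE (with zero initial condition) gives trajectories that differ exactly by the steady-state value. A sketch with arbitrarily chosen numbers (c = 5, m stepping from 1 to 2):

```python
c, m_ss = 5.0, 1.0          # assumed constants: bias term and initial steady input
x_ss = c + m_ss             # steady state of dx/dt + x = c + m

def simulate(t_end, n=50_000):
    """Euler-integrate both forms for a step in m from 1 to 2 at t = 0."""
    dt = t_end / n
    x, dx = x_ss, 0.0       # original starts at steady state; deviation starts at zero
    m, dm = 2.0, 1.0        # stepped input and its deviation from m_ss
    for _ in range(n):
        x += dt * (c + m - x)    # original:  dx/dt + x = c + m(t)
        dx += dt * (dm - dx)     # deviation: dΔx/dt + Δx = Δm(t)
    return x, dx

x, dx = simulate(3.0)
print(x, x_ss + dx)   # the two numbers coincide: x(t) = x_ss + Δx(t)
```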

Exercise:
Show that the last statement is correct by taking the Laplace transform of dΔx/dt.
Lec 5: Transfer functions and block diagrams
Learning objectives: To introduce the students to the transfer function and
block diagrams and its use in process control.
Learning outcomes: On completion of this topic, students must be able to
carry out manipulation of transfer functions to block diagrams and vice versa
and carry out block diagram manipulations.
A block diagram is a common way to represent a dynamic system. In these, signals
(variables) are represented by lines and functional relationships (transfer functions) by
blocks.
So if we return to the example with deviation variables from the previous chapter:
	dΔx/dt + Δx = Δm(t)
Carrying out the Laplace transform we get:
	__________________________
This might be translated into a transfer function (G(s)) by collecting all variables on one
side:
	G(s) = ΔX(s)/ΔM(s) = 1/(s + 1)
The ODE and the transfer function are two different representations of the same
equation. A third approach is to represent the system with a block diagram:

This diagram shows that X(s) (the output) is produced by the transfer function in the
block acting on the input M(s), the equation given by:
	ΔX(s) = [1/(s + 1)] ΔM(s)

When you read an equation from a block diagram, the easiest way is to start at the output
(often on the right) and work backwards adding in the elements as you see them.
Block diagrams make it easy to represent the connection patterns between processes, and
provide a way of visualizing that connection and converting it to math.
Let us revisit the heated tank:

For which we have the differential equation:


The output variable of a system is the variable that commonly appears in the
differentiation, which means that the output variable is:________________________

And all the other variables are therefore inputs (whether they are disturbances or
controlled/manipulated variables). In this case it means that the disturbance is: _____
and the manipulated variable is: ____________.

Write the steady state material balance:____________________________________

Subtract it from the original ODE:________________________________________

Specify the deviation variables and insert it into the ODE:_____________________

As we can see there is not really much change to the ODE, but as we remember we do
this because _______________________.

We saw initially that we had two different inputs, which means we must have two
transfer functions as well:
	c/(Vs + q)   and   q/(Vs + q).

Most transfer functions can be separated into two different parts depending on whether
they determine the final change (the difference between initial and final
steady state) or the behavior between the steady states. The constants
that affect the size of the change are the gains, commonly denoted by K's, and the
constants regarding the behavior are the time constants, τ's.
The standard form of representing the gains and the time constants in the transfer
function is:
	G = K/(τs + 1)
or, if we have two time constants:
	G = K/((τ_1 s + 1)(τ_2 s + 1))

To use the standard definition we have to rearrange the transfer function to the standard
form. For the heated tank,
	V dT/dt + qT = c·u + q·T_in
the first transfer function becomes:
	G_1 = c/(Vs + q) = (c/q)/((V/q)s + 1) = K_1/(τ_1 s + 1),   K_1 = c/q,   τ_1 = V/q

and for the second part we get

G_2 = ___________ which gives K_2 = ___________ and τ_2 = _____________

To have a quick look at how the time constant and gain affect the behavior of the system,
we study the part regarding the heating element's effect on the temperature in the heated
tank. We have:
	T = [K_1/(τ_1 s + 1)] U
It's common to assume that the heat changes according to a step change, which means
that U(s) = a/s where a is the magnitude of the change; for simplicity assume a = 1.
	T = K_1/(τ_1 s + 1) · 1/s
Partial fractioning gives:
	T = K_1/s - K_1 τ_1/(τ_1 s + 1)
and if we carry out the inverse Laplace transform we get:
	T(t) = K_1 - K_1 e^(-t/τ_1)
One part is exponential (assuming that τ_1 > 0), which will disappear as the system
progresses, so the final change will just be K_1.

Exercise:
Make a simple plot of T(t) from t = 0 s to t = 20 s for the four possible combinations of
values for τ_1 = 0.5 and 1 and K_1 = 1 and 2.
Block diagram
The block diagram is a very useful tool to graphically represent transfer functions,
especially when it comes to systems with interconnected transfer functions. Block
diagrams can be used to show an array of mathematical operations (summing, subtraction
and multiplication), which are then combined into more advanced transfer functions.
Summing junctions are used to show addition

	y = y_1 + y_2 = G_1 u_1 + G_2 u_2

or subtraction

	y = y_1 - y_2 = G_1 u_1 - G_2 u_2

which, with the addition of a splitting point, can be used to represent parallel
systems;

If we just study the input-output system we could represent the system with just a single
block that describes the system.

We must have that the block represents:
	G(s) = G_1(s) + G_2(s)

A case that commonly turns up when we have to deal with summing points is the
distributive property. This can be seen as doing a block expansion, getting more blocks
than we started with:
	Y(s) = Y_1(s) + Y_2(s) = G_1(s)U(s) + G_2(s)U(s) = (G_1(s) + G_2(s))U(s)
	Y(s) = G(s)U(s) = (G_1(s) + G_2(s))U(s)

Exercise: Draw the distributive equivalent of the system.
Before we go on to more complex systems, we introduce the case of transfer functions
in series as well;

This diagram is written with a help function y_1, which might be a helpful idea to use
when you come to block diagram reduction. This means that we can use what we have
learnt so far by writing this system as two separate systems:
	y_1 = G_1 u_1   and   y_2 = G_2 u_2
But if we use the definition of the help function y_1 = u_2 we get:
	y_2 = G_2 u_2 = G_2 G_1 u_1

This means that blocks in series are the same thing as multiplication, and just as
multiplication they are associative and commutative.
The summing, subtraction, parallel systems, series systems and the distributive property
are basically all we need for block scheme reduction, or to determine the overall
transfer function of a complex system. But we commonly include the feedback system as
a standard procedure as well, as it turns up so often. The simplest feedback system looks
like this (this is not totally true as we could make it even simpler by putting H(s) = 1):

The note that should be made on how to simplify this diagram is that what is fed back is
the output itself; hence we could draw the diagram as:

This becomes a very straightforward case, giving us the following algebraic equation
describing the system: _________________________.
Now the only thing we have to do is rearrange the equation to get all the outputs on the
left hand side of the equation and the input on the right hand side to get:
	Y(s) = [G(s)/(1 + G(s)H(s))] R(s)
We could also have so-called positive feedback, where both signs at the summing point
are positive, which essentially means that the transfer function H(s) is multiplied by -1. It
will have the same effect on the final transfer function, hence getting a negative sign in
the denominator.
When dealing with feedback systems we occasionally come across the open loop transfer
function, which is the transfer function going from the y(t) that is fed back to the y(t) of
the output. For this transfer function the variable L(s) is used, and hence for the simple
case above we have: L(s) = G(s) H(s).
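The feedback rule can be checked numerically by evaluating the loop equation at sample values of s; a minimal sketch with an assumed first-order G and unity H (both are illustrative choices, not from the text):

```python
def G(s):
    return 1.0 / (s + 1.0)      # assumed process: first order lag, K = 1, tau = 1

def H(s):
    return 1.0                  # assumed unity measurement

def closed_loop(s):
    """Negative feedback: Y/R = G/(1 + G*H)."""
    return G(s) / (1.0 + G(s) * H(s))

# Fixed-point check at s = 2: y = G*(r - H*y) must hold for y from the formula
s, r = 2.0, 1.0
y = closed_loop(s) * r
print(closed_loop(0.0))                # steady-state gain: 1/(1 + 1) = 0.5
print(abs(y - G(s) * (r - H(s) * y)))  # ~0, so the loop equation is satisfied
```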
Block diagram reduction
Block diagram reduction is a very important tool in process control: it enables us to
simplify the system (in the sense of the number of blocks) and to get one single
transfer function that is easier to handle in the continuing applications of process control.
When we are dealing with simple, straightforward systems and/or systems with a single
feedback loop, we can just start applying the rules of summing, subtraction, parallel,
series, distributive and feedback in a direct block diagram manipulation.
The alternative is to carry out algebraic manipulation instead, which basically means that
we keep going from left to right, adding in the different inputs as we go along, using
addition, subtraction and multiplication as they come, and in the end getting an algebraic
equation that needs some manipulation to collect all inputs and outputs on opposite sides.
Exercise/Example:
So let us study the two different techniques for the following feedback system with
disturbance where the different blocks represent different functions in a normal control
system:

Start with the easy things: we see three blocks in series that we can change into one:

For simplicity we put d = 0 and get:

For which we just have to apply the feedback law to get the transfer function: _________
Then we repeat the procedure to get the transfer function from d to y, starting with
putting y_sp = 0 and rearranging the block diagram.

For which we again apply the feedback law to get the transfer function: ____________.
We should note that we have got the same denominator in both transfer functions, which
is due to the two systems having the same open loop transfer function L.

Now let's try doing it with algebraic manipulation.
Start from the left with y_sp, followed by a negative summing of y·G_M, which
gives: __________
Next we go through G_c, followed by G_v, and finally G_p, which we multiply with the
previous transfer function to get: ______________________
We come to another summing point where we add d·G_M to get: ___________________.
And then we finally reach y, so we set our previous function equal to y to get:
	y = (y_sp - y·G_M)·G_c·G_v·G_p + d·G_M
Finally we rearrange it to get y = ________ y_sp + _____________ d.

Occasionally we will come across even more complicated systems for which we will
need another tool or trick, which is the moving of summing points, fairly often
complemented by introducing help functions. This is done to simplify the block diagram
and enable us to use the standard techniques of block diagram reduction or algebraic
manipulation.

Example/Exercise:
The following diagram is actually a standard diagram for a feedback/feedforward control
system. The algebraic manipulation can be carried out directly, starting from the left at
r(t), going straight to y(t), and carrying out the algebraic operations along the way:

Y(s)=______________________________________________

If we, on the other hand, wish to carry out block diagram reduction, it would be
troublesome when we come to finding the transfer function G_2 = Y(s)/V(s), as it has two
different entries into the loop, while we should be able to write down the transfer function
G_1 = Y(s)/R(s) after just a quick look at the diagram and using the formula for feedback
loops.

G_1 = _____________________________________

If we just had one summing point for G_2 it would be just as straightforward. When we
remove summing points we have to be sure to compensate for whatever changes we are
incurring so we don't change the system outlined.
Let's take a look at the part of the diagram that we want to change:

The goal is to remove one of the summing points and carry out compensation for the
changes so that the overall transfer function remains unchanged.
It doesn't matter which summing point we remove, but the tendency is to keep the one
furthest to the right (ii in this case), which means connecting signal (3) to summing point
(ii). We see that if we draw the line (3) straight into (ii) we don't have the signal passing
through the G_u block and we would have altered the system. So as soon as we bypass a
block we have to compensate by multiplying by that transfer function:

As we end up with a system of parallel blocks, we simplify to get a single block:
	G_v2 = F_f·G_u + G_v

The opposite will be the case if we move the other way around, i.e. move signal (4) to
summing point (i) as it makes the signal pass by an extra block. So as it passes an extra
block, we compensate by dividing the original function by the extra transfer function.

This is also a case of two parallel blocks that can be simplified in the same way:

G
v2
=____________________________________

After that it's just a matter of drawing the modified block diagram and carrying out the
approach from the previous example, i.e. making v the only input, imagining or
redrawing the system in the standard feedback form, and ending up with a single transfer
function, G_vv, for the relation between Y(s) and V(s).

G_vv = _______________________________________

In the previous examples the block diagrams could be solved directly with algebraic
manipulation without needing any block manipulations, which might imply that it is
superior to the reduction method. However, this is not always the case, especially if we
start having internal loops in the system, like the diagram below. When that is the case
we have to simplify the loop to a single block.

We study the part by itself:

which obviously is a feedback loop and can be simplified by the normal reduction
method to get G_fb,internal = _________________
When that is done, we just substitute it into the block diagram in the place of the loop
that we studied.

References:

Smith and Corripio: Chapter 3
Seborg, Edgar, Mellichamp: Chapter 4, Chapter 11
Marlin: Chapter 4
Lec 6: Linearization
Learning objectives: To introduce the student to response chart of
processes and the method to linearise non-linear transfer functions.
Learning outcomes: At the end of the lesson, students should be able to
draw a response diagram, relate the look to transfer function parameters and
to linearise non-linear transfer functions.
Many of an engineer's tools for analyzing dynamic systems apply only to linear systems.
The Laplace transform, for instance, only works if the equations to be transformed are
linear, except for the time variable t.

What makes an equation "linear"?
- all variables present only to the first power
- no product terms where variables are multiplied (constants are ok)
- no square roots, exponentials, products, etc. involving variables

These can be understood by looking at some examples.
	a dx/dt = m(t)   or   dx/dt = a·m(t)
are linear as long as a is a constant and m(t) is linear.
	a dx/dt = sqrt(m(t))
is nonlinear because of the square root.
	a_1 dx_1/dt + a_2 x_1 x_2 + a_3 dx_2/dt = m(t)
is nonlinear because of the cross-product x_1 x_2 term, while
	a_1 dx_1/dt + a_2 dx_2/dt + x_3 = m(t)
is linear when m(t) is linear.
	a dx/dt = t^2
is linear when a is a constant.

Linearity is useful, because if f(x) is a linear differential equation the following
statements are true:
1. If x_1 is a solution to the equation and c_1 a constant, then c_1 x_1 is also a solution.
2. If x_1 and x_2 are solutions to the equation, then x_1 + x_2 is also a solution.

The latter means that for a linear process, the result of two input changes is the sum of
the results of the individual changes.

Making a Model Linear
Many chemical engineering systems are highly nonlinear and general methods for
working with nonlinear models are few, so it is important to know how to approximate
nonlinear equations with linear ones.

The approach is really pretty straightforward:
First, expand all nonlinear terms in a Taylor series, usually around the steady state value
(a):
	f(x) ≈ f(a) + f'(a)(x - a)/1! + f''(a)(x - a)^2/2! + f'''(a)(x - a)^3/3! + ...
Second, truncate the expansion after the 1st order terms (i.e. remove the nonlinear parts).
This gives a general result for linearizing equations:
	f(x) ≈ f(a) + f'(a)(x - a)



Notice that when you linearize, you do so around a specific point. Choice of this point is
important. If the linear version of your model is to work, you must be operating close to
the chosen point, so that you remain within the region where the linear approximation is
valid. The steady state value is the usual choice since control systems are most often used
to reject disturbances moving the plant away from steady state.
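The importance of staying near the chosen point is easy to see numerically. A sketch using f(x) = e^x linearized around a = 0 (an illustrative function, not from the text):

```python
import math

# Linearize f(x) = exp(x) around a = 0: f(x) ≈ f(0) + f'(0)(x - 0) = 1 + x
def f(x):
    return math.exp(x)

def f_lin(x, a=0.0):
    return math.exp(a) + math.exp(a) * (x - a)   # f(a) + f'(a)(x - a)

for x in (0.01, 0.1, 1.0):
    print(x, f(x) - f_lin(x))   # the error grows quickly as x leaves the chosen point
```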

Example:
Linearize y = x^3 + 2x^2 sin(x) around x = 1.
The nonlinear terms are on the right hand side and it's usually easier to linearize them
separately:
	f(x) = x^3:
		f(1) = 1,  f'(x) = 3x^2,  f'(1) = 3
		x^3 ≈ 1 + 3(x - 1) = 3x - 2
	f(x) = 2x^2 sin(x):
		f(1) = 2 sin(1),  f'(x) = 4x sin(x) + 2x^2 cos(x),  f'(1) = 4 sin(1) + 2 cos(1)
		2x^2 sin(x) ≈ 2 sin(1) + (4 sin(1) + 2 cos(1))(x - 1)
Adding the two linearized terms gives:
	y ≈ 7.45x - 4.76

Linearization in higher dimensions
The principle is the same, but just as mentioned earlier we have nonlinearity as soon as
we have any multiplication or division between variables as well.
We have to linearize with regard to all variables that are present in the nonlinear
function, but only those that appear nonlinearly; so if z, u, and y are variables and
f(z,u,y) = zu + y, the nonlinearity is just zu and hence the only part we have to linearize.
To do this we carry out the Taylor expansion in all dimensions that are nonlinear:
	f(z,u) ≈ f(z_0,u_0) + (∂f/∂z)|_(z0,u0)·(z - z_0) + (∂f/∂u)|_(z0,u0)·(u - u_0)
and keep adding terms as the dimensions increase.
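The two-variable expansion can be checked numerically for the bilinear term f(z,u) = zu discussed above; the operating point (2, 3) is an arbitrary choice for illustration:

```python
# Linearize f(z, u) = z*u around (z0, u0): f ≈ z0*u0 + u0*(z - z0) + z0*(u - u0)
z0, u0 = 2.0, 3.0            # assumed operating point

def f(z, u):
    return z * u

def f_lin(z, u):
    return z0 * u0 + u0 * (z - z0) + z0 * (u - u0)

# Close to the operating point the approximation is good...
print(f(2.01, 3.02) - f_lin(2.01, 3.02))   # tiny error
# ...further away it degrades; the error is exactly (z - z0)*(u - u0)
print(f(3.0, 4.0) - f_lin(3.0, 4.0))       # error = 1.0
```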

Example:
For f(z,u) = zu, the Taylor expansion around (z_0, u_0) gives:
	f(z,u) ≈ f(z_0,u_0) + (∂f/∂z)|_(z0,u0)·(z - z_0) + (∂f/∂u)|_(z0,u0)·(u - u_0)
	       ≈ z_0 u_0 + u_0 (z - z_0) + z_0 (u - u_0)

Linearization and deviation variables
Linearization in combination with deviation variables has particular advantages. Recall
that a deviation variable is defined as
	Δx = x(t) - x_0
so if we linearize around the steady state, the steady-state value of the deviation variable
will be zero. All of the "pure constant" terms will then vanish from the linearized
equations in deviation variables.
Moreover, when we switch to perturbation variables and then linearize around the
steady state, the initial conditions are zero. This means we can drop the x(0) terms as we
take Laplace transforms.

Example: Take the two variable case above into deviation variables.
	f(z,u) ≈ z_0 u_0 + u_0 (z - z_0) + z_0 (u - u_0) = z_0 u_0 + u_0 Δz + z_0 Δu
where z_0 u_0 will disappear as well, due to the introduction of the deviation variables
in the first place.

The last example showed that by combining linearization with perturbation variables,
you effectively replace the nonlinear side of the differential equation with:
	f(z,u) ≈ (∂f/∂z)|_(z0,u0)·Δz + (∂f/∂u)|_(z0,u0)·Δu

Dynamic behavior

The dynamic behavior in this section mostly deals with how an uncontrolled system
changes from one steady state to another, caused by the change of one or more of the
inputs. In its simplest form it's just a matter of plotting the output for the input change,
but we want to do a little bit more than that. We also study what the common behaviors
of a system are and what can be deduced if we are just given the response diagram (the
plot of the output).
But before we can look at the behavior of the system, we take a look at the most
common ways to describe the input.
Process inputs

Step change
A step change is a sudden change of a function from one value (commonly zero) to
another value at time t = a (also commonly at zero).
The mathematical description of the function and its Laplace transform (note that the
exponential factor disappears if a = 0) are:
	x(t) = 0 for t < a;   x(t) = Δz for t ≥ a
	X(s) = Δz e^(-as)/s

Ramp change
A ramp change means that we have a change occurring where the value keeps increasing
linearly as time progresses. The ramp could be delayed as well, so that the change
appears at a time after t = 0.
	x_R(t) = 0 for t < 0;   x_R(t) = at for t ≥ 0
	X_R(s) = a/s^2

Impulse
An impulse is, by definition, a change that occurs during an infinitesimal (basically 0)
amount of time, with the magnitude of the change being infinite. This is simulated by
making a rectangular pulse occurring during a time a and with a magnitude 1/a while
taking the limit a → 0. This is basically impossible to achieve (just imagine having to
bring a temperature or a flowrate to infinity in zero time), but the dynamic behavior of
the system due to an impulse change has many handy properties, which is why it's
included here, as well as why it's used in industry.
	x_I(t) = 1/a for 0 ≤ t ≤ a, 0 otherwise;   when a → 0:   X_I(s) = 1
Sinusoidal input
Periodic variables also turn up quite frequently; for example, if we were trying to control
the temperature in a swimming pool, the ambient temperature would vary periodically,
with a 24 hour rhythm.
	x(t) = 0 for t < 0;   x(t) = A sin(ωt) for t ≥ 0
	X(s) = Aω/(s^2 + ω^2)

Rectangular pulse
A rectangular pulse is basically a step change that just occurs for a limited amount of
time. The modeling of the pulse is basically done by taking the difference between two
step changes:

And in mathematical form:
	x_RP(t) = 0 for t < 0;   x_RP(t) = Δz for 0 ≤ t < a;   x_RP(t) = 0 for t ≥ a

To obtain the Laplace transform we carry out the same steps as for the graphical
derivation above, giving that x_RP = Φ(t) - Φ(t - a), which can easily be transformed
using tables:
	X_RP(s) = Δz (1 - e^(-as))/s
1st Order Systems
A first order system is described by
	a_1 dy(t)/dt + a_0 y(t) = b x(t) + c
In this model, y represents the measured and controlled output variable and x(t) the
input function (referred to as the forcing function of the ODE).
To simplify the problem (for the Laplace transformed case) we subtract the steady state
to get it expressed using deviation variables instead:
	a_1 dy'(t)/dt + a_0 y'(t) = b x'(t)
The equation is often rearranged to the standard form
	τ dy'(t)/dt + y'(t) = K x'(t)
This model is linear as long as x(t) is not a function of y, thus it can be transformed into
a transfer function:
	τs Y(s) + Y(s) = K X(s)
	Y(s) = [K/(τs + 1)] X(s)
This type of transfer function is known as a first order lag with a steady state gain of K.

Step Response
If we let the forcing function (X(s) or x(t), depending on what framework we are dealing
with) be a step, we get the output response that is known as the step response of a 1st
order system. The shape should, over the rest of the course, become very familiar, as any
1st order system forced by a step function will respond with this shape. The unit step
response of a system with time constant 1.0 is shown in the figure. "Unit step response"
means that the forcing function (the step) has magnitude 1.0 (i.e. x(t) = 1 or X(s) = 1/s).
As the system approaches steady state, the response approaches a constant value. In the
plot, this value is 1.0 (which is the value of the gain of the process). If we look at a
general case, with a forcing step function of magnitude Δz:
	X(s) = Δz/s,   Y(s) = K/(τs + 1) · Δz/s
and then carry out the inverse Laplace transform, we have:
	y(t) = K Δz (1 - exp(-t/τ))
which indicates that the final value will be given by K·Δz.

Initial Response
The initial response (time close to zero) has a slope of 1.0 in the unit step plot; in
general the initial slope is KΔz/τ. This initial behavior is characteristic of all first order
systems.

Time Constant
Next, consider what happens to the function when the elapsed time is equal to one time
constant (i.e. when t = τ):
	y(τ) = 1 - e^(-1) = 0.632
Thus, when one time constant has elapsed, the process output will have achieved 63.2%
of its final value (in the plot, 0.632).

The response will keep getting increasingly closer to the new steady state value, but
technically it never reaches it in finite time. So we need an approximation of when the
steady state has been reached, which is commonly specified as getting to within 1% of
the final value. As it turns out, we can get a close approximation of that time by using
5τ, as that would give:
	y(5τ) = 1 - e^(-5) = 0.9933, i.e. within 1% of the final value
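These landmark values are easy to reproduce; a minimal sketch of the unit step response y(t) = 1 - e^(-t/τ):

```python
import math

def step_response(t, tau=1.0, K=1.0):
    """Unit step response of a first order lag K/(tau*s + 1)."""
    return K * (1.0 - math.exp(-t / tau))

tau = 1.0
print(step_response(1 * tau))   # 0.632... -> 63.2% of the final value at t = tau
print(step_response(5 * tau))   # 0.993... -> within 1% of the final value at t = 5*tau
```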

Ramp and Impulse Response
Briefly, let's take a look at the response of the first order process to two additional types
of inputs.
First, consider a ramp function, x(t) = Δz·t. Then:
	X(s) = Δz/s^2,   Y(s) = K/(τs + 1) · Δz/s^2
Partial fractioning,
	Y(s) = A/s^2 + B/s + C/(τs + 1),
and the inverse Laplace transform yield:
	y(t) = K Δz τ e^(-t/τ) + K Δz (t - τ)
which is represented graphically as:


The straight line represents the input and the initially curved line is the output. From
this we can see that the response has an initial transient period, which is analogous to
the transient behavior for the step change; when the response settles in at a constant
slope, it does so with a shift equal to τ (this assumes that KΔz is unitary, else we would
get a different slope of the response compared to the input).

Notice that the impulse response is the derivative of the step response. In some cases it
is easier to find the impulse response function by taking the derivative of the step
response than by integrating the impulse forms.
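This derivative relationship can be verified with a finite difference; a sketch with arbitrarily chosen K and τ (for a first order lag, the impulse response is (K/τ)e^(-t/τ)):

```python
import math

tau, K = 2.0, 1.5   # assumed example values

def step_response(t):
    return K * (1.0 - math.exp(-t / tau))

def impulse_response(t):
    return (K / tau) * math.exp(-t / tau)   # derivative of the step response

# A central finite difference of the step response should match the impulse response
t, h = 1.0, 1e-6
deriv = (step_response(t + h) - step_response(t - h)) / (2 * h)
print(deriv, impulse_response(t))   # the two numbers agree
```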

Response of 2nd Order Systems
The first-order system considered in the previous section yields well-behaved
exponential responses. Second-order systems can be much more exciting, since they
can give an oscillatory or underdamped response.
We will consider only linear second order ODEs that have constant coefficients:
	a_2 d^2y(t)/dt^2 + a_1 dy(t)/dt + a_0 y(t) = b x(t) + c

To simplify we can subtract the steady state:
	a_0 y(0) = b x(0) + c
and apply deviation variables to get:
	τ^2 d^2Y(t)/dt^2 + 2ζτ dY(t)/dt + Y(t) = K X(t)
	where τ^2 = a_2/a_0,   2ζτ = a_1/a_0,   K = b/a_0

This yields the following transfer function:
	Y(s) = K/(τ^2 s^2 + 2ζτ s + 1) · X(s)

which has the following poles:
	s_1 = (-ζ + sqrt(ζ^2 - 1))/τ   and   s_2 = (-ζ - sqrt(ζ^2 - 1))/τ
The two roots -1/τ_1 and -1/τ_2 are of course a simpler description of the case, which
we can use when the roots are easily found, to get the second order system described in
the form:
	Y(s) = K/((τ_1 s + 1)(τ_2 s + 1)) · X(s)


As we saw in chapter 4, if the poles (i.e. the roots of the denominator) are real and
negative, or complex with a negative real part, we get an exponential decay, which
means that the response reaches a steady state. If that is not the case, we get exponential
growth towards positive or negative infinity.
The behavior can hence be very easily deduced for the transfer function described by
τ_1 and τ_2, as both poles have to be negative (or have negative real parts) for stability.
For the case when the second order system is described using τ and ζ, we can see from
the pole equations why they were chosen the way they were, as this makes the stability
depend on only one variable: ζ.
Inside the square root we have ζ^2 - 1, which means that the square root gives rise to
imaginary parts only when -1 < ζ < 1.
If we start by looking above that value, i.e. ζ > 1, we get negative real roots, as the value
of the square root is always smaller than ζ, which means that -ζ ± sqrt(ζ^2 - 1) < 0,
and hence exponential decay in a form that is referred to as an overdamped system.
If ζ = 1, we get a double root and a behavior very similar to that of a first order system;
this is referred to as a critically damped system.
The last case of real poles is ζ ≤ -1, which of course means that the roots are positive
and hence exhibit exponential growth; this is called a runaway system.
When we have a case where the poles are complex, for stability we need ζ to be in the
interval between 0 and 1, as ζ directly determines whether the real part of the roots is
negative or not. The system is then called an underdamped system. The common
response diagrams for these systems are shown in the figure on the following page.
Before we go on and study underdamped systems, we note that if ζ is in the interval
between -1 and 0 we get a positive real part and hence exponential growth, but with
oscillations, as the poles are complex.

Underdamped systems
The most commonly studied case is the underdamped system, as underdamped systems are the most commonly occurring in process control. Hence a number of terms are used to describe the underdamped response quantitatively. Equations for some of these terms are listed below for future reference. In general, the terms depend on ζ and/or τ. They were all derived from the step response formula, which is obtained the same way as for the 1st order system: a step is applied as a forcing function, followed by partial fraction expansion and inverse Laplace transform:

y(t) = K Δz [1 − (1/√(1 − ζ²)) exp(−ζt/τ) sin(ωt + φ)]
where

ω = √(1 − ζ²)/τ  and  φ = tan⁻¹(√(1 − ζ²)/ζ);

however, the mathematical derivations are left out.





The characteristics of the underdamped system are quite obvious from the graph above. The response is quite slow to start with and then picks up with an increasing slope; it overshoots the final value and starts oscillating around the final value (the new steady state) with decreasing amplitude, to finally settle at the new steady state.
The following five terms are what we use to describe a second order system, where we can either use the function itself or read off the graph to determine them:

1. Overshoot. Overshoot (or actually the relative overshoot) is a measure of how much
the response exceeds the ultimate value following a step change and is expressed as the
ratio A/B in the figure.


2. Decay ratio. The decay ratio is defined as the ratio of the sizes of successive
peaks and is given by C/B.

3. Rise time. This is the time required for the response to first reach its ultimate value and is labeled t_r in the figure.

4. Settling time or response time. This is the time required for the response to come within 5 percent of its ultimate value and remain there. The response time is indicated in the figure by t_s. The limit of 5 percent is arbitrary, and other limits have been used in other texts for defining a settling time.

5. Period of oscillation. This is the time between consecutive peaks or troughs. First we have to determine the angular (radian) frequency, which for the underdamped system is ω = √(1 − ζ²)/τ. The relation between angular frequency and frequency is the same as for SHM, hence the frequency (i.e. the inverse period) is f = ω/2π = √(1 − ζ²)/(2πτ), so the period of oscillation is T = 2πτ/√(1 − ζ²).

The damping ratio will affect all of the terms mentioned above, as can be seen by analyzing the transfer function, but also by a simple plot for various damping ratios ζ.



Figure of the step response diagrams for different damping ratios.
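The standard quantitative formulas behind these terms, overshoot = exp(−πζ/√(1−ζ²)), decay ratio = overshoot², and period = 2πτ/√(1−ζ²), can be collected in a small sketch (the helper name and sample values are our own):

```python
from math import exp, pi, sqrt

def underdamped_metrics(zeta, tau):
    """Standard step-response metrics for K/(tau^2 s^2 + 2 zeta tau s + 1), 0 < zeta < 1."""
    root = sqrt(1 - zeta**2)
    overshoot = exp(-pi * zeta / root)  # relative overshoot A/B
    decay_ratio = overshoot**2          # C/B = (A/B)^2
    period = 2 * pi * tau / root        # time between successive peaks
    return overshoot, decay_ratio, period

os_, dr, per = underdamped_metrics(0.2, 1.0)
```

For ζ = 0.2 and τ = 1 this gives an overshoot of about 0.53 (53 %), a decay ratio of about 0.28, and a period of about 6.4 time units.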
Lec 7 Dynamic response of more advanced processes
In the previous chapter we discussed the behavior of processes that could be modeled using a first- or second-order model. There are of course processes that are considerably more complicated than those examples. These more advanced processes affect the transfer function by increasing the order of the denominator, but it is also possible to have a function of s in the numerator.

Poles and zeros and their effect on process response
One feature of the first- and second-order processes described in the previous chapter is that their responses are very easily deduced by studying the factors in the denominator of the transfer function (τ for 1st order, and ζ and τ for 2nd order). Things won't be as straightforward when we have additional poles or zeros. These effects are highlighted through examples:

Example: a system with a third pole
We have a system with the transfer function:

G(s) = 1 / ((Ts + 1)(s² + 0.4s + 1))

where we have added an additional pole, located at s = −1/T, to a second order system with τ = _____ and ζ = _____. If the system is subjected to a step change we get the following behavior for the system:


From the diagram we can see that the extra pole creates a slower system compared to the second order system by itself (T = 0). As we keep increasing T we ultimately get a root at the same real value as the complex conjugate poles of the second order function (T = 5), for which the system is dominated by the real pole, and the system behaves in a similar fashion to a first order system.

Example: a system with a zero
We have a system with the transfer function:

G(s) = (Ts + 1) / (s² + 0.4s + 1)

where we have added a zero at s = −1/T, with the same values for τ and ζ as in the previous example. Again we study the step response behavior:


From the diagram we can see that an additional zero increases the overshoot, especially when the zero is close to the imaginary axis (i.e. when T is increasing).
The reason for the overshoot can be illustrated by assuming that y0(t) is the output for a system without a zero (i.e. T = 0), which means that the output including the zero becomes:

y(t) = L⁻¹{(Ts + 1) Y0(s)} = y0(t) + T dy0(t)/dt


The derivative of the signal y0(t) would initially be of the same magnitude as y0(t) itself, but the multiplication by T means that the system with a zero gets a greater addition the greater the value of T is. This explains why we get an increased overshoot if we add a zero close to the imaginary axis. From the figure we can see that y0(t) approaches a constant value as time progresses, which means that the derivative of y0(t) approaches zero. With the derivative disappearing for large times, we can see that the final value of the system with a zero is the same as that of the one without the zero.
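The claim that the zero adds the scaled derivative T·dy0/dt, raising the overshoot while leaving the final value unchanged, can be verified numerically. A sketch assuming ζ = 0.2 and τ = 1 (illustrative values, not from the text), with the derivative taken by a central finite difference:

```python
from math import exp, sin, atan, sqrt

zeta, tau = 0.2, 1.0
w = sqrt(1 - zeta**2) / tau
phi = atan(sqrt(1 - zeta**2) / zeta)

def y0(t):
    """Unit step response of 1/(tau^2 s^2 + 2 zeta tau s + 1)."""
    return 1 - exp(-zeta * t / tau) * sin(w * t + phi) / sqrt(1 - zeta**2)

def y_with_zero(t, T, h=1e-4):
    # y(t) = y0(t) + T * dy0/dt, derivative by central difference
    return y0(t) + T * (y0(t + h) - y0(t - h)) / (2 * h)

grid = [i * 0.01 for i in range(3000)]
peak_no_zero = max(y0(t) for t in grid)
peak_T2 = max(y_with_zero(t, 2.0) for t in grid)
```

The peak with T = 2 is well above the peak without the zero, while both responses settle at the same final value of 1.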

Zeros placed to the left of the imaginary axis are not very common in physical systems, but they will turn up in most standard feedback controllers, as we want as fast a behavior as possible, which is obviously achieved with a zero, as seen in the figure. If we instead add a zero to the right of the imaginary axis (i.e. a positive zero) we get a case that is more common for physical systems.

Example: a system with a positive zero
The transfer function we will use is made up of two additive systems: one has slow dynamics (1/((s + 1)(0.2s + 1))), which forces the system in the positive direction, and another system that is fast (and with an amplification of T), which forces the system in the opposite direction:

G(s) = (1 − Ts) / ((s + 1)(0.2s + 1)) = 1/((s + 1)(0.2s + 1)) − Ts/((s + 1)(0.2s + 1))

The overall effect is that we get a positive zero, s = 1/T. We can see from the step response figure below that the system initially becomes negative, as the zero is positive, before the slower dynamics of the other part make the system positive. The magnitude of the negative overshoot increases with the value of T. If T is chosen negative we get the same behavior as we had for a negative root.
If we, again, assume that y0(t) is the output for the system without a zero, we get:

y(t) = y0(t) − T dy0(t)/dt

This again means that we get the big effect of the derivative, amplified by the size of T, as well as the disappearance of the effect of the zero as the system converges.

Another way the system complexity can be increased is when the system has a time delay, also referred to as dead time. This commonly occurs in processes where we have transportation over a distance, like fluid flow along a long pipe, or where we have a chemical analysis for which we cannot get a fast reading, like a gas chromatograph.

Example: a system with time delay
A time delay system would be a system of the form:

y(t) = 0 for t < a,  y(t) = x(t − a) for t > a

The function follows x exactly, but delayed by a; in the Laplace domain this corresponds to multiplication by e^(−as). So if we were dealing with a first order system we would get the standard first order response diagram, but starting from t = a instead of from t = 0.



Approximation of higher order systems
When an extra pole was added to the system, we saw that for particular values we could make the system behave very similarly to a first order system. This is a fairly common occurrence and will happen as soon as we have a dominating pole, which will be the case as soon as one pole is much closer to the imaginary axis than all other poles.
The addition of a positive zero was also described, with the system then experiencing a negative overshoot, which, if we neglect the overshoot itself, bears a resemblance to the behavior of a system with time delay.
From this we get the idea that we can simplify systems with positive zeros by describing the zero as a time delay instead.
The approximation of a higher order system can be highlighted by studying a system with a multiple root. The following figure has a pole repeated 5, 20 and 100 times, and as the multiplicity increases, the system starts to resemble a system with a time delay:

This is the basis of the practice of approximating higher order systems by the use of time delays. The only thing needed is an approximate relation between time constants and time delays, which can be obtained by studying the Taylor expansion of a time delay (i.e. an exponential function):

e^(−t0·s) = 1 − t0·s + t0²s²/2! − t0³s³/3! + …

which means that if we keep just the first two terms of the expansion we get:

e^(−t0·s) ≈ 1 − t0·s

and with opposite signs:

e^(t0·s) ≈ 1 + t0·s

This is the simplest form to approximate a zero. To approximate a pole we use that e^(−t0·s) = 1/e^(t0·s), which gives:

e^(−t0·s) = 1/e^(t0·s) ≈ 1/(1 + t0·s)

These are the formulas that are commonly used to simplify a high order function down to a first- or second-order system with time delay. This is done by substituting the least dominant time constants (poles farthest away from the imaginary axis) using the Taylor expansion above backwards (i.e. 1 + 5s becomes e^(5s), or 1/(1 + 5s) becomes e^(−5s)).
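To see how good these one-term truncations are, we can evaluate both forms against the exact exponential at a small value of t0·s (the numbers are illustrative):

```python
from math import exp

t0, s = 2.0, 0.05             # t0*s is small, where the truncation is valid
exact = exp(-t0 * s)          # the time delay e^(-t0 s) at a real value of s
zero_form = 1 - t0 * s        # delay approximated as a zero
pole_form = 1 / (1 + t0 * s)  # delay approximated as a pole
```

Both approximations are within about 0.005 of the exact value here; the error grows quickly as t0·s approaches 1, which is why only the least dominant (smallest) time constants should be converted.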

Simple Taylor expansion approximation procedure
First decide whether you want a first order or a second order model as the final result. Then find the dominant pole (for a first order model) or the two dominating poles (for a second order model) and move them to your model. After that, change the other poles and zeros into time delays using the Taylor expansion formulas.

Example/Exercise
Reduce the transfer function given below to a first order model with time delay:

As we want a first order model we just keep the pole closest to the imaginary axis, which is the pole with the highest time constant, which in this case is: _____________________
Then we go through the other poles and change them into time delays:
1/(3s+1) = __________ and 1/(0.5s+1) = _____________, and finally we do the same thing for the zero(s): (1 − 0.1s) = ____________.
This should give us a transfer function looking like this, which only needs the final touch of combining the time delays into one:

G(s) = K e^(−0.1s) e^(−3s) e^(−0.5s) / (5s + 1)

If a second order model was desired, we would keep the two most dominating poles instead, ending up with:

G(s) = K e^(−0.6s) / ((5s + 1)(3s + 1))



The transfer function being reduced in this example is:

G(s) = K(1 − 0.1s) / ((5s + 1)(3s + 1)(0.5s + 1))
Skogestad's half rule
There is a tendency, when using the Taylor expansion directly as in the previous case, to overemphasize the time delays in the system. This essentially means that the approximation wouldn't fit very well initially. To get a better fit for the initial behavior, Skogestad proposed that we take the largest neglected time constant (i.e. the largest of the time constants that we didn't keep) and add half of it to the smallest kept time constant, while the other half is translated into a time delay using the Taylor expansion. The other poles and zeros are treated the same way as in the Taylor expansion method.

Example
Reduce the transfer function given below to a first order model with time delay using Skogestad's half rule:

Just as before, the term 1/(5s+1) is kept.
That would leave us to neglect (3s+1) and (0.5s+1), but we are supposed to take half of the biggest neglected time constant and add it to the smallest kept one (in this case the only one; this becomes important when producing second order models).
So we get ({5 + 3/2}s + 1) = 6.5s + 1.
After that it's just a matter of carrying on like in the previous example, with the addition of making sure not to forget the other half of the biggest neglected time constant:
1/((3/2)s + 1) = ____________, 1/(0.5s + 1) = ___________ and (1 − 0.1s) = __________
and we get:



We can compare the step response behavior for the two methods and the actual case. In the figure we can clearly see that Skogestad's half rule creates a better approximation in the initial stages, and thereby a better approximation overall.

If we wanted a second order model, we would have taken out the two biggest time constants to start with (in this case 5 and 3), leaving 0.5 as the biggest neglected one. We would take half of that and add it to the smallest kept one. This modifies our kept time constants to 5 and 3.25, while we have to translate the leftover half (0.25) into a time delay together with the zero to get:


The transfer function being reduced with the half rule is the same as in the previous example:

G(s) = K(1 − 0.1s) / ((5s + 1)(3s + 1)(0.5s + 1))

The resulting first order model is:

G(s) = K e^(−2.1s) / (6.5s + 1)

and the second order model is:

G(s) = K e^(−0.35s) / ((5s + 1)(3.25s + 1))
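The half-rule bookkeeping can be wrapped in a small helper (the function name and argument layout are our own). It reproduces the worked result above, τ = 6.5 with delay 2.1, and, for the exercise that follows, τ = 13.5 with delay 3.75:

```python
def skogestad_first_order(time_constants, neg_zero_delays=(), delay=0.0):
    """Skogestad's half rule, reduction to a first-order-plus-delay model.

    time_constants: denominator time constants of the full model;
    neg_zero_delays: delay contributions from right-half-plane zeros (1 - T s);
    delay: dead time already present in the full model.
    """
    tcs = sorted(time_constants, reverse=True)
    tau = tcs[0]                       # keep the dominant time constant
    theta = delay + sum(neg_zero_delays)
    if len(tcs) > 1:
        tau += tcs[1] / 2              # half of the largest neglected constant
        theta += tcs[1] / 2            # the other half becomes dead time
        theta += sum(tcs[2:])          # smaller constants become dead time entirely
    return tau, theta

# K(1 - 0.1s) / ((5s+1)(3s+1)(0.5s+1))  ->  tau = 6.5, delay = 2.1
tau1, theta1 = skogestad_first_order([5, 3, 0.5], neg_zero_delays=[0.1])
# K(1 - s)e^(-s) / ((12s+1)(3s+1)(0.2s+1)(0.05s+1))  ->  tau = 13.5, delay = 3.75
tau2, theta2 = skogestad_first_order([12, 3, 0.2, 0.05],
                                     neg_zero_delays=[1.0], delay=1.0)
```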

Exercise:
Use Skogestad's approach to derive a first order with time delay model and a second order with time delay model for the following transfer function:




First order case:
- Find the biggest time constant: ________.
- Find the biggest neglected time constant: _______.
- Add half of it to the biggest time constant to get the time constant used for the model: __________.
- Change all the other poles and zeros into time delays, getting a total delay of:
1.5 + ___ + ___ + ___ + 1 = ______
and we get the final function:



Second order case:
(You should get a time delay=2.15)



The transfer function for this exercise is:

G(s) = K(1 − s) e^(−s) / ((12s + 1)(3s + 1)(0.2s + 1)(0.05s + 1))

and the resulting first order model is:

G(s) = K e^(−3.75s) / (13.5s + 1)
Padé approximations
The major advantage of the Laplace transform is that it translates differential equation models into transfer function models. However, as part of the transformation we commonly end up with exponential functions (due to time delays or an impulse). These occasionally cause problems when describing the system in a simple form, in which the transfer function is factored into just zeros and poles.
One solution would be to use the Taylor expansion from the previous case; that, however, would generally not be considered exact enough. To increase accuracy we use the simplest pole-zero approximation (i.e. approximating the exponential with a combination of poles and zeros), which is the 1/1 Padé approximation:

e^(−t0·s) = e^(−0.5·t0·s) / e^(0.5·t0·s) ≈ (1 − 0.5·t0·s) / (1 + 0.5·t0·s) = G_{1/1}(s)

It's called the 1/1 Padé approximation because it is first order in both numerator and denominator.
Performing long division of the approximation we get:

G_{1/1}(s) = 1 − t0·s + t0²s²/2 − t0³s³/4 + …

which, if we compare with the Taylor expansion, is correct in the first three terms. There are higher order Padé approximations, for example the 2/2 Padé approximation:

e^(−t0·s) ≈ G_{2/2}(s) = (1 − t0·s/2 + t0²s²/12) / (1 + t0·s/2 + t0²s²/12)
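A quick numerical comparison shows why the 1/1 Padé form is preferred over the plain two-term Taylor truncation (the values of t0 and s are illustrative):

```python
from math import exp

def pade11(t0, s):
    """1/1 Pade approximation of exp(-t0*s), evaluated at a real s."""
    return (1 - 0.5 * t0 * s) / (1 + 0.5 * t0 * s)

t0, s = 1.0, 0.5
exact = exp(-t0 * s)     # about 0.6065
taylor = 1 - t0 * s      # 0.5
pade = pade11(t0, s)     # 0.6
```

The Padé value 0.6 is within 0.007 of the exact 0.6065, while the Taylor truncation is off by more than 0.1 at the same point.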

Interacting and noninteracting processes
Most of the systems that we have studied so far have been simple processes with a single input and a single output, which commonly could be isolated and treated separately. However, many processes are not like that. Those processes usually have one or more variables that interact with at least one other variable, which creates a case of internal feedback within the system; this is referred to as an interacting system. Most interacting systems have more complicated transfer functions compared to noninteracting processes.

Example: noninteracting process
There is a dual tank system arranged as seen in the following figure. The flows through the valves are assumed to be linear relations of the following form:

q_i = (1/R_i) h_i

which means that we consider the valve as acting as a resistance to the flow.
In chapters 3 and 4 a single tank was studied and we got the following balance equation:

A dh/dt = q_in − q_out

which for our case will be:

A_i dh_i/dt = q_in,i − (1/R_i) h_i

where i is the number of the tank, with the following Laplace transfer function:

H_i = R_i / (A_i R_i s + 1) · Q_in,i

The flow equation for the valves can also be transformed:

Q_out,i = (1/R_i) H_i

This gives four equations in total: one material balance and one valve equation for each tank. As everything happens in sequence, we can draw a block diagram showing the different transfer functions connected in series from Q_in,1 to Q_out,2 or H_2, whichever output is of interest. For example, if the level in the second tank is the controlled variable we get:
H_2/Q_in,1 = (H_1/Q_in,1)(Q_out,1/H_1)(H_2/Q_out,1)
= [R_1/(R_1 A_1 s + 1)] · [1/R_1] · [R_2/(R_2 A_2 s + 1)]
= R_2 / ((R_1 A_1 s + 1)(R_2 A_2 s + 1)) = K / ((τ1 s + 1)(τ2 s + 1))
This could be generalized for any number of tanks in series, as it is just a matter of continuing to multiply the additional transfer functions together:

H_n/Q_in,1 = R_n ∏_{i=1..n} 1/(R_i A_i s + 1) = K ∏_{i=1..n} 1/(τ_i s + 1)

This is very easy to deal with: if we wanted to find the behavior we could go straight for partial fraction expansion followed by the inverse Laplace transform.
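Multiplying the (τ_i s + 1) factors of the series connection together is plain polynomial multiplication; a sketch (helper name and sample time constants are our own):

```python
def series_tanks_denominator(taus):
    """Coefficients (ascending powers of s) of prod(tau_i * s + 1)."""
    poly = [1.0]
    for tau in taus:
        # multiply poly by (tau*s + 1)
        new = [0.0] * (len(poly) + 1)
        for i, c in enumerate(poly):
            new[i] += c            # the "* 1" part
            new[i + 1] += c * tau  # the "* tau*s" part
        poly = new
    return poly

# Two tanks with tau1 = 2, tau2 = 3: (2s+1)(3s+1) = 6s^2 + 5s + 1
coeffs = series_tanks_denominator([2.0, 3.0])
```

Two tanks with τ1 = 2 and τ2 = 3 give the denominator 6s² + 5s + 1, i.e. (2s+1)(3s+1).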

Example: interacting process
Next consider an example of an interacting process that is similar to the previous example. The tanks are still in series, but now the level of tank 2 affects the level of tank 1, as the flow through the valve between the tanks is given by:

q_out,1 = (1/R_1)(h_1 − h_2)

which you could derive using Bernoulli's equation.

The balance equations will essentially be the same as before until we substitute the definition of the outflow from tank 1:

A_1 dh_1/dt = q_in,1 − (1/R_1)(h_1 − h_2)  →(Laplace)→  A_1 s H_1 = Q_in,1 − (1/R_1)(H_1 − H_2)

A_2 dh_2/dt = (1/R_1)(h_1 − h_2) − (1/R_2) h_2  →(Laplace)→  A_2 s H_2 = (1/R_1)(H_1 − H_2) − (1/R_2) H_2
Substituting the H_2 we get from the tank 2 equation into the tank 1 equation gives:

H_1(s)/Q_in,1(s) = [R_1 R_2 A_2 s + (R_1 + R_2)] / [R_1 R_2 A_1 A_2 s² + (R_1 A_1 + R_2 A_2 + R_2 A_1) s + 1] = K'(τ_a s + 1) / (τ² s² + 2ζτ s + 1)

This means we have gone from a first order description to a second order model, with an additional zero as well. The process can be reversed, substituting the tank 1 balance into the tank 2 balance:

H_2(s)/Q_in,1(s) = K' / (τ² s² + 2ζτ s + 1)


The analysis has become more complicated for the interacting system compared to the non-interacting system. The denominator is not directly factored into two time constants but instead is in the form of a quadratic equation, and it appears for both tanks, not, as in the previous case, just for the overall system.
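One consequence of the interacting denominator, with τ² = R1R2A1A2 and 2ζτ = R1A1 + R2A2 + R2A1, is that ζ > 1 always holds: the extra interaction term R2A1 pushes the system into the overdamped region. A numerical spot-check over random parameter values (the helper is our own):

```python
from math import sqrt
from random import Random

def interacting_zeta(R1, A1, R2, A2):
    """Damping ratio of the interacting two-tank model derived above."""
    tau1, tau2 = R1 * A1, R2 * A2
    two_zeta_tau = tau1 + tau2 + R2 * A1  # the extra R2*A1 term is the interaction
    return two_zeta_tau / (2 * sqrt(tau1 * tau2))

rng = Random(0)
zetas = [interacting_zeta(rng.uniform(0.1, 10), rng.uniform(0.1, 10),
                          rng.uniform(0.1, 10), rng.uniform(0.1, 10))
         for _ in range(100)]
```

Every sampled parameter set gives ζ > 1, so an interacting two-tank system can never oscillate; this follows from (τ1 + τ2)/(2√(τ1τ2)) ≥ 1 plus the strictly positive interaction term.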

Multiple-Input, Multiple-Output (MIMO) processes
Most industrial processes will contain multiple inputs (manipulated variables) and
multiple outputs (controlled variables). These are referred to as MIMO systems
distinguishing them from the single-input, single-output (SISO) case. The modeling of a MIMO system is basically the same as for a SISO system and follows the same steps; there are just more variables to deal with.

Example: MIMO system
Let's return to trying to control the temperature in a tank by heating, but without the volume being constant.
The balances we get are:

EB: d[V(T − T_ref)]/dt = q_h(T_h − T_ref) + q_c(T_c − T_ref) − q(T − T_ref)
MB: dV/dt = q_h + q_c − q

With T_ref = 0:

EB: d[VT]/dt = q_h T_h + q_c T_c − qT
MB: dV/dt = q_h + q_c − q
First we expand the derivative in the energy balance using the chain rule:

d[VT]/dt = V dT/dt + T dV/dt

The second term in the expansion contains dV/dt, which is present in the MB as well, and hence the MB can be substituted into the EB. For the MB we note that V = Ah, which gives:

dT/dt = (1/(Ah)) [q_h T_h + q_c T_c − (q_h + q_c) T]
dh/dt = (1/A) [q_h + q_c − q]
Linearization (barred quantities are steady-state values, primed quantities are deviation variables; the term from the dependence on h vanishes, as the bracket is zero at steady state):

dT'/dt = (1/(A·h̄)) [ q̄_h T_h' + T̄_h q_h' + q̄_c T_c' + T̄_c q_c' − (q̄_h + q̄_c) T' − T̄ (q_h' + q_c') ]
dh'/dt = (1/A) [q_h' + q_c' − q']

followed by the Laplace transform, which gives:

T(s) = [(T̄_h − T̄)/(q̄_h + q̄_c)] · 1/(τs + 1) · Q_h(s) + [(T̄_c − T̄)/(q̄_h + q̄_c)] · 1/(τs + 1) · Q_c(s) + [q̄_h/(q̄_h + q̄_c)] · 1/(τs + 1) · T_h(s) + [q̄_c/(q̄_h + q̄_c)] · 1/(τs + 1) · T_c(s),  where τ = A·h̄/(q̄_h + q̄_c)

and

H(s) = (1/(As)) Q_h(s) + (1/(As)) Q_c(s) − (1/(As)) Q(s),  with H(s)/T_c(s) = 0 and H(s)/T_h(s) = 0

In other words, every single variable has to be taken into account, every single input for every single output, which is why even the relations between H and T_c or T_h are included, even though those transfer functions are zero (unless the density is defined as a function of temperature).


Chapter 8: Empirical models
To this point we have been modeling processes using fundamental principles (commonly conservation relations), which is very valuable in determining the behavior of the system and modeling its transient behavior. However, this approach has its limitations: in a fractionating distillation column we may have 50 plates, and trying to distill a mixture with 10 components would mean that we have to write around 500 differential equations. On top of that, we would have to deal with an abundance of thermodynamic properties (equilibrium conditions), rate processes (heat transfer coefficients) and model non-idealities (tray efficiency). This means that many real processes are basically too complicated to be worth modeling with fundamental models; the effort is only justified in some cases by the need for high accuracy or for operation over a wide range of conditions.
This chapter will instead present an approach that enables us to model the system empirically, which is particularly useful for complicated and commonly nonlinear systems.

The basic approach is to compare the relation between the inputs and outputs of the system and determine a model that best fits that relation, which basically means that a system with the following block diagram is studied. The common approach is to neglect the disturbance (setting it to zero), but it would technically be possible to treat the disturbance as any other input, as long as it can be manipulated in a way suitable for modeling. In empirical modeling the transfer function is built by making small changes in the input variable, studying the resulting changes of the output variable, and then finding the parameters that best describe the observed changes.

Figure 8.1: Black box model used for empirical models

Model building procedure
The steps of determining an empirical model generally consist of the following:
1. Experimental design which can be broken up to the following steps:
a. Determine the model objectives, how will the model be used?
b. Select the variables of interest, specifically the input(s) and the output(s).
c. Evaluate available data and define what variable(s) are to be measured and
how to measure them.
2. Conduct the experiment following the experimental design.
3. Determine the model structure and model complexity (which you should be ready
to reevaluate). For example:
a. steady state or dynamic
b. first or second order
c. linear or nonlinear
4. Parameter estimation
a. Use the appropriate tools to evaluate the data
5. Evaluation and verification of model:
a. Check model accuracy, commonly using statistical analysis of both new
data and old (before experiment) data.
b. If the model is not a good fit, go back to 3 and try a new model.
c. Validation. Check if the model can accurately predict the behavior for data
that has not been used for the estimation.
Those are the general steps for empirical modeling, though we will mostly concentrate on
the parameter estimation part in this course.

Linear regression
It is generally desirable to obtain dynamic models, which would be in the form of transfer functions, like the ones we would get from fundamental relations. However, the general procedure of modeling stays roughly the same whichever type of dynamics the model is based around. Steady state models generally have fewer parameters, as the change with time doesn't have to be included in the modeling, which makes the procedure easier to understand.
In this case we have run a series of experiments, varying the input and getting different results of the output, as seen in the following figure.

Figure 8.2: Steady state modeling: experimental data (input on x-axis and output on
the y-axis)

In this case a linear model was deemed appropriate after studying the obtained data, which means that our model is y = mx + k and the parameter estimation should determine the values of m and k.

This parameter fitting is done with the objective of minimizing the difference between the measured output values and the model output values you would get using the same inputs.
So if we had three measurements, y1 for x1, y2 for x2 and y3 for x3, we would insert the inputs into our model to get ŷ1 = m·x1 + k, ŷ2 = m·x2 + k and ŷ3 = m·x3 + k, where ŷi is the output from the model. The task would then be to choose m and k to minimize y1 − ŷ1, y2 − ŷ2 and y3 − ŷ3. The question is which way to minimize them: make the largest deviation as small as possible, or minimize the sum of all the differences?
The most commonly used method is the least squares method, where the sum of the squares of all the differences is minimized; for the previous example the task is to minimize (y1 − ŷ1)² + (y2 − ŷ2)² + (y3 − ŷ3)², which is done by varying m and k. This will give you the best fit for the model that it has been applied to, but is no guarantee that it is the best model for the data. This can be demonstrated by going back to the example where a straight line was fit to a series of measurement data in Figure 8.2 and trying another model. The model complexity is commonly increased in steps, which means the next model to try would be a quadratic model: y = ax² + bx + c. The complexity of the parameter finding has increased, as there are now three constants, a, b and c, to be determined. Figure 8.3 shows both the linear and the quadratic fit, and it should be obvious that the quadratic fit is better than the linear one. There are also ways to get a numerical value of the fit, to determine which model fits best when it is not as obvious as in this case.



Figure 8.3 Linear and quadratic fit to experimental data.

Note: Just because the quadratic model gives a better fit doesn't mean that it will be the favored choice; if the linear model is considered good enough it might be used anyway, as it will generally lead to simpler calculations in the control synthesis.
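The least squares comparison between the linear and the quadratic model can be sketched with the normal equations; the data values and helper names below are invented for illustration:

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations (Gaussian elimination)."""
    n = degree + 1
    # Normal equations: sum_k x^(i+j) * a_j = sum_k x^i * y
    A = [[sum(x**(i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(x**i * y for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):  # forward elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef  # ascending powers of x

def sse(xs, ys, coef):
    """Sum of squared errors of the fitted polynomial on the data."""
    return sum((y - sum(c * x**i for i, c in enumerate(coef)))**2
               for x, y in zip(xs, ys))

# Data with clear curvature (roughly y = x^2): the quadratic fit should win
xs = [0, 1, 2, 3, 4, 5]
ys = [0.1, 0.9, 4.2, 8.8, 16.1, 24.9]
lin = polyfit(xs, ys, 1)
quad = polyfit(xs, ys, 2)
```

The quadratic fit has a much smaller sum of squared errors on this curved data set, which is exactly the situation in Figure 8.3.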

Dynamic empirical modeling
When a dynamic model is desired, we need to look at the dynamic behavior as the system changes from one steady state to another. This means that it is necessary to measure continuously as the output variable changes from one steady state to another, which is commonly done by plotting the dynamic response diagram (seen in chapters 6 and 7), also referred to as the process reaction curve. The process reaction curve can then be compared to different models to see which type it most closely resembles, and the parameters that describe that model determined. The common practice, however, is to fit a first- or second-order model, both types with a time delay, to the obtained data. The general look of the process reaction curve for a step change of those models should be quite familiar by now. It should be noted that inputs in the form of an impulse are quite common as well, but they are not part of this course.
The broad variety of methods based on step response identification belongs to the deterministic methods, as the input signal is deterministic and random sources are not considered in the process description. The results of the identification are the coefficients of the first- or second-order equations. Methods in this category aim at first estimates of the process and provide information about an approximate process gain, dominant time constant and time delay.
The input signal used is a step change of one of the process inputs while all other inputs are held constant. It is necessary that the controlled process is in a steady state before the step change is instigated. The measured process response is a real step response that needs to be further normalised for a unit step change and for zero initial conditions; as the identified process can in general be nonlinear, it might be advantageous to record several step responses with different step changes and signs of the input signals.
Identification of a first order system
The first order ODE system is described by:

τ dy/dt + y = K u

where the system is assumed to be at steady state initially, with y(0) = 0 and u(0) = 0, since deviation variables are used. If the input u is abruptly changed by Δz at t = 0, i.e. u(t) = Δz·S(t), the response diagram seen in Figure 8.4 is obtained.
It was shown in chapter 6 that, when t = τ, the response has reached 63.2% of its final value, which means that if the point where 63.2% of the final value is reached is located, the time constant can be read directly off the response diagram (as seen by the dashed red line in Figure 8.4).
In chapter 6 it was also shown that the solution of the ODE for a step change is:

y(t) = K Δz (1 − exp(−t/τ))

Taking the derivative of that function we get:

dy(t)/dt = (K Δz/τ) exp(−t/τ),  so that  dy/dt(0) = K Δz/τ

which highlights the fact that the initial slope of the response is 1/τ (in the normalized case), and hence if that tangent line is drawn it will intersect the final value at t = τ (as seen by the green dashed line in Figure 8.4).

Figure 8.4: Normalized step response of a first order system
This means that the process diagram has two simple properties for a first order system that make it straightforward to estimate the time constant. And since

lim(t→∞) y(t) = K Δz

the overall output change is a function of both the gain and the size of the input.

Example:
The response diagram in Figure 8.5 shows the temperature change after increasing an inlet flow rate by 7 kg/m. Find an approximate first-order model for the process under the current operating conditions.


Figure 8.5 Temperature response of heat exchanger for a step change in flow.
The data given means that Δz = ___________
The change in the output is the difference between the initial and final value of the output, in this case 1 − 0 = 1.
And as K = Δy/Δz, the gain is: ___________.
Whether the 63.2% method or the initial slope is used, both yield a time constant: _____.
Consequently, the resulting process model is:

.
This is all very well as the theory gives us very good results, which are easily obtained.
However, very few cases are as straightforward as that, because;

1. The true process is usually neither first order or linear, and it might contain dead
time as well. Only the simplest processes exhibit such ideal dynamics.
2. The output data are frequently corrupted by noise, which means that the
measurement contains some sort of nonideality, like eddies in a mixing process
making the concentration vary throughout the turbulence or other electrical
instruments affecting the measurement instrument. If the noise happens to be
completely random, that is, it keeps alternating around the true value an averaged
out value might be used instead.
3. Another input (disturbance) might change its value during the data collection,
which generally will happen in an unknown way. The only way around this is to
run multiple tests.
4. It can be very difficult to achieve a true step change of the input, as there is a
certain degree of inertia in any system. For example, valves won't open instantly,
the temperature of a steam input won't change momentarily, pumps are not able to
change from one flow to another instantly, and so on. The input might be more
like a ramp, which would still be a good approximation as long as the time of
change is small in comparison to the time constant.

To still be able to model the system with a first order model, a time delay term is added to take into account some of the characteristics that a higher order system exhibits:

G_P(s) = K·e^(−t0·s) / (τs + 1)
If we actually had a first order plus time delay (FOPD) system, the response would look like Figure 8.6, and the time delay could be read directly off the point where the response becomes non-zero.


Figure 8.6: Response for a FOPD system.

This goes to show how the time delay is estimated, but as mentioned earlier, the actual system is very seldom first order, so some modifications are needed in determining the three variables (K, t0 and τ) that define the FOPD model:

G_P(s) = K·e^(−t0·s) / (τs + 1)
The steady state gain is obtained in the same way as before, no matter what the response curve looks like; it is just a matter of taking the quotient between the changes in output and input: K = ΔY/Δz.

Tangent method
This method is based on first-order behavior: a tangent is drawn at the steepest point of the response curve (which happens to be the initial point for true first order systems). The time from the input change until the tangent intersects the time axis is the time delay, while the time between that intersection and the point where the tangent reaches the final value is the time constant, as seen in Figure 8.7.

Figure 8.7: Response diagram for a second order system


The major disadvantages of this method are the difficulty of locating the point of steepest slope in the response diagram, and that the time constant is estimated using data from a single point.
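The tangent construction can be automated on sampled data. The sketch below generates noise-free FOPD samples (illustrative dead time 2 and time constant 10), finds the steepest point by finite differences, and recovers both parameters from where the tangent crosses the time axis and the final value:

```python
import math

t0_true, tau_true = 2.0, 10.0        # illustrative dead time and time constant
dt = 0.01
ts = [i * dt for i in range(8001)]   # samples over 0..80

def y(t):
    """Normalized FOPD step response."""
    return 0.0 if t < t0_true else 1.0 - math.exp(-(t - t0_true) / tau_true)

# Central-difference slope at each sample; locate the steepest point.
slopes = [(y(t + dt) - y(t - dt)) / (2 * dt) for t in ts]
i = max(range(len(ts)), key=lambda k: slopes[k])
ti, yi, si = ts[i], y(ts[i]), slopes[i]

t_dead = ti - yi / si                 # tangent hits the time axis: dead time
t_cross = ti + (1.0 - yi) / si        # tangent hits the final value
tau_est = t_cross - t_dead            # time constant estimate
print(round(t_dead, 1), round(tau_est, 1))  # ~2.0 and ~10.0
```

On ideal first order data the recovery is near exact; on real data, noise makes the slope estimate at a single point unreliable, which is the weakness noted above.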

Two point methods
Sundaresan and Krishnaswamy proposed a technique in 1978 where two points are used for the calculation of the time constant and time delay. The two points they recommended are when 35.3% (giving t1) and 85.3% (giving t2) of the final value is reached, and then the following equations are used to calculate the time constant and time delay:

τ = 0.67·(t2 − t1)
t0 = 1.3·t1 − 0.29·t2

These values are chosen to minimize the difference between the measured response and
the model response, based on correlation for many data sets.
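As a sketch, the two S&K equations can be wrapped in a small helper; the times used here (t1 = 12.5, t2 = 34) are those read off in the heat exchanger example later in this section:

```python
def sk_fopdt(t1, t2):
    """Sundaresan-Krishnaswamy FOPDT estimates from the 35.3% (t1) and 85.3% (t2) times."""
    tau = 0.67 * (t2 - t1)
    t0 = 1.3 * t1 - 0.29 * t2
    return tau, t0

tau, t0 = sk_fopdt(12.5, 34.0)
print(round(tau, 1), round(t0, 1))  # 14.4 6.4
```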
There is another alternative, which utilizes the fundamental behavior of the first order system: 63.2% of the change has occurred after one time constant, and 28.3% of the change has occurred after one third of a time constant, giving the following equations:

τ = 1.5·(t2 − t1)
t0 = 1.5·t1 − 0.5·t2


Example:
Figure 8.8 shows the response diagram for the temperature change due to a valve position change for one of the flows into a heat exchanger. Find an approximate FOPD model of the system.

Figure 8.8: Response diagram for a heat exchanger (below) and the valve position change (above).

The input goes from 0.30 to 0.70, which means that Δz=____________.
The output changes from _____ to ______, which means that ΔY=____________.
Taking the quotient we get the steady state gain: K = ΔY/Δz =___________.

For the inflection method, the steepest slope is found and the tangential line is drawn to
determine dead time and time constant as seen in Figure 8.9.

Figure 8.9. Tangential line is added at the steepest inclination of the response chart.
The tangential line intersects the time axis at t = 2.8, hence the dead time is 2.8. At the other end the tangential line intersects the final value at 30, which is the dead time plus the time constant, hence the time constant: τ =__________.
This gives the following model:

Y(s) = [5·e^(−2.8s) / (27.2s + 1)]·U(s)


If the method of Sundaresan and Krishnaswamy is used instead, the times when 35.3% and 85.3% of the change is reached have to be obtained, as seen in Figure 8.10.

Figure 8.10. Using the method of Sundaresan and Krishnaswamy.

Reading off the values we get that the two times are; 12.5 and 34.
Using the equations given previously we get:

τ = 0.67·(t2 − t1) = 0.67·(34 − 12.5) = 14.4
t0 = 1.3·t1 − 0.29·t2 = 1.3·12.5 − 0.29·34 = 6.4

This gives us the following FOPD model:

Y(s) = [5·e^(−6.4s) / (14.4s + 1)]·U(s).

To compare the methods, the responses of the different models are subjected to the same input as in the original response diagram, which can be seen in Figure 8.11. The figure also includes a previously unmentioned technique as well as the alternative two point technique mentioned in the two point section. It is quite obvious that the two point methods are superior, in particular in comparison to the inflection method. In fact, the only time it is worth using the inflection method is when the modeled system actually is first order itself.

Figure 8.11. The responses of the different FOPDT models compared to the actual
response.

Identification of a second order system
Fitting a second order model to data is not as straightforward as fitting a FOPDT model. One reason is of course that second order systems can exhibit different behavior depending on the characteristics of the system, as well as having more parameters to be determined.

The cases are commonly separated into those with oscillatory behavior and those with damped behavior (specifically overdamped, but some underdamped cases can usually be handled as well, as long as the oscillations are not too great).

The oscillatory case

For the oscillatory case, the damping ratio ζ and characteristic time τ are determined by using the formulas for overshoot and rise time.

Overshoot: OS = exp(−πζ / √(1 − ζ²))

Rise time: tr = τ·arccos(−ζ) / sin(arccos(−ζ)) = τ·(π − arccos ζ) / √(1 − ζ²)
The hardest part is generally the rise time, as it can be hard to determine exactly the instant when the response becomes nonzero. This also carries over to determining the time delay, as the same point is of interest. It is generally considered so hard to determine the time delay exactly that resorting to trial and error, until a good fit between the experimental data and the model is obtained, is a common approach. The gain is determined in the same way as for the first order case. The SOPDT transfer function being fitted is:

G_P(s) = K·e^(−t0·s) / (τ²s² + 2ζτs + 1) = K·e^(−t0·s) / ((τ1·s + 1)(τ2·s + 1))


Damped behavior

For damped cases, a method due to Smith is used, requiring the times (with apparent time delay removed) at which the normalized response reaches 20% (t20) and 60% (t60), respectively, and using Figure 8.12 to determine the parameters.

Figure 8.12. Smith's method: relationship of ζ and τ to t20 and t60.

Note: There are also other methods available where three times have to be determined
analogous to the FOPDT case.
Example/Exercise:
Step test data is given in Figure 8.13; for an input step change of 0.2, determine a model using:
a) S&K's method for FOPDT
b) the oscillatory case (overshoot method) for SOPDT
c) Smith's method for SOPDT

Change in output: Δy=____________
Change in input: Δz=____________
Gain: K = Δy/Δz=____________

S&K method
Find 35.3%: t1 =___________
Find 85.3%: t2 =___________
τ = 0.67·(t2 − t1) =__________
t0 = 1.3·t1 − 0.29·t2 =_____________


Overshoot method
As it is a normalized case (final value is unity), the size of the overshoot can be read off directly (otherwise we would take OS/final value).
OS = 0.45
Insert into the overshoot equation and take the natural logarithm of both sides:
ln 0.45 = −πζ / √(1 − ζ²)
Rearrange and solve for the damping ratio:
0.0646·(1 − ζ²) = ζ²
ζ = 0.246
Dead time: the response becomes nonzero at 4.6, so t0 = 4.6
Rise time: time when the final value is first reached − 4.6 = 4.05
τ = 2.16

Which gives the transfer function model:

G(s) = 5·e^(−4.6s) / (4.67s² + 1.06s + 1)


Smith's method
Determine the points where 20% and 60% are reached. Remember to correct for the time delay (use a dead time of 5.0 to get a value inside the diagram)!
t20 =_____________
t60 =_____________
t20/t60 =____________

Look up the values in Figure 8.12:
ζ =_____________
t60/τ =___________
τ =____________

G(s) = 5·e^(−5.0s) / (2.77s² + 1.73s + 1)




Figure 8.13. Step response diagram
The result from the model fitting is summarized in the response diagram where the
models can be compared to the actual output.
Chapter 9: Measurements, Sensors and Control Signals
Having discussed the different ways to model a system and describe it as a transfer function, as well as how to express a combination of transfer functions in the form of a block diagram, we turn our attention to the other parts of the control system. The standard feedback system is drawn as in Figure 9.1. With the modeling of the system already dealt with, the turn has come to look at the control element (the actuator) and the measurement (the instrument). These parts were briefly mentioned in Chapter 1 when the feedback control of the shower was discussed, where the measurement was the person showering checking if they felt comfortable, and the actuator was turning the regulator for the heating element to change the amount of heating.
The system could be automated to instead include an actual thermometer measuring the temperature, followed by transmitting a signal for comparison with a set point (a temperature considered comfortable). The signal from the comparison (the control error) would then be used in the controller to decide on the appropriate action, expressed as a signal that will make the actuator change an input variable that affects the system.
This illustrates three important functions that must be carried out in a control system at
each control cycle:
1. measurement of one or more output variables,
2. manipulation of one or more input variables, and
3. signal transmission.
To be able to design the controller appropriately, it is vital to understand and model these three parts well. In other words, the instrument has to be described as a transfer function relating the measured variable (y) to a control signal (y_measured) which can be compared to a set point and sent to the controller. The controller in turn creates another signal that is transmitted to the actuator, and again a transfer function model of the actuator's relation between the control signal and its effect on an input variable has to be constructed.
This chapter introduces the measurement characteristics as well as the signals produced, and finally deals with the control element (a.k.a. the actuator), showing how they are described inside the control system.

Figure 9.1: Feedback block diagram

Control Signals
A control system needs to be able to pass information around the plant. Valves need to be
told whether to be wide open, partially open, or closed. Controllers need to know whether
the measured variable is where it is supposed to be. Standardized signals are used to
convey this information.
Signals can be digital or analog. Digital signals are encoded as binary numbers. Analog
signals vary continuously from small to large. Many plants use a mixture of digital and
analog; the age of the plant and control system usually determines what signals are used
where. Signal converters can be used to change from analog to digital (A/D converter) or
vice versa (D/A converter). These converters often exist as software or a chip built into
other hardware.
However, in this course we will stick with analog signals, which are generally easier to deal with, in simulations and otherwise. The signals can be represented in three different ways: as percentages, as electrical current, or as pressures in a pneumatic system.
- 0 to 100%
- 4 to 20 mA current
- 3 to 15 psig compressed air
Conceptually, many problems are best approached by thinking of the signal as a
continuously varying percentage; numbers from 0-100 or 0-1.0. It is usually easier to
solve problems with percentages first, and then convert to the appropriate physical signal.
Most analog signals today take the form of electrical current of continuously varying amperage. Standard signals range from 4 to 20 milliamps. Older pneumatic systems use compressed air of continuously varying pressure. Pneumatic signals are commonly used to operate control valves and elements.
Electrical signals are routinely converted to pneumatic (I/P transducer); pneumatic
signals can be converted to electrical as well (P/I transducer) although these are needed
primarily in older plants with legacy pneumatic hardware.
Notice that the physical signals don't start at zero. This provides an easy way to distinguish between a minimum signal and a broken signal: a reading of 4 mA corresponds to 0%, while 0 mA means that there is a problem in the system. There is a common need to translate between the different signal types. The general principle is that they all vary linearly, so translating between them is a little like converting between temperature scales: some do not start at the same point and need an offset correction (Kelvin and Celsius scales), while others share neither starting point nor step length (Celsius and Fahrenheit scales).

Example/Exercise: Convert the following control signals:
1. 25% to pneumatic and electronic
2. 15 mA to percentage and pneumatic
3. 12 psig to electronic and percentage

1. A pneumatic signal ranges from 3 to 15 psig, a difference of 12 units (the scale length).
25% of the scale length is 0.25·12 = 3 psig.
This is the amount above the minimum signal, hence the total signal is 3 + 3 = 6.
A 25% signal in a pneumatic system is 6 psig.
The electric range is ______mA
25% of the scale length is 0.25·____=_____
The total signal is ___mA + ___mA=_____
A 25% signal in an electrical system is _____ mA.
2. A 15 mA signal is 11 mA above the minimum, or in % 11/(20−4) = 68.75%.
The pneumatic equivalent is 3 + 0.6875·12 = 11.25 psig.
3. A 12 psig signal is ____psig above the minimum or in % _____/_____=_____%
The electrical equivalent is __+__________=______ mA
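Since all three scales are linear, the conversions reduce to rescaling over each range. A small sketch (using the standard 4-20 mA and 3-15 psig ranges) that reproduces the worked conversions above:

```python
def pct_to_signal(pct, lo, hi):
    """Convert a 0-100% control signal to a physical signal on the range [lo, hi]."""
    return lo + (pct / 100.0) * (hi - lo)

def signal_to_pct(value, lo, hi):
    """Convert a physical signal back to percent of its range."""
    return 100.0 * (value - lo) / (hi - lo)

# 25% as current (4-20 mA) and as pneumatic pressure (3-15 psig)
print(pct_to_signal(25, 4, 20), pct_to_signal(25, 3, 15))   # 8.0 6.0
# 15 mA as a percentage, then as pressure
p = signal_to_pct(15, 4, 20)
print(p, pct_to_signal(p, 3, 15))                           # 68.75 11.25
```

Going through the percentage as an intermediate, as recommended above, avoids memorizing a pairwise formula for each combination of signal types.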

The Instrument (Sensors and Transmitters)
The measurement instrument can basically be divided into three components, as seen in Figure 9.2: a sensor, a signal processor and a transmitter.

Figure 9.1 The measurement components
Sensor
The objective of the sensor is to produce a signal (mechanically, electrically or the like)
that is related to some physical property of interest, preferably in a linear fashion. A good
sensor would exhibit the following characteristics:
1. the sensor should be sensitive to the measured property, i.e. if the measured
property changes the sensor should pick up that change. The smaller the change
that can be detected, the more sensitive the instrument.
2. the sensor should be insensitive to any other property, i.e. the output of the sensor
shouldn't change when any other property changes.
3. the sensor should not influence the measured property. An example of violating
this would be a thermometer that generates heat, heating up its surroundings and
giving a higher reading.
4. the sensor should have a constant gain, which is basically saying that it should
have a linear behavior.

Signal processor
The main task of the signal processor in a measurement instrument is to condition the signal, making it smooth and linear.
General processes in the signal processor are:
1. Filtering: removing measurement noise to make the output smoother.
2. Correction: compensating for effects from variables other than the one of
interest for the measurement.

Transmitter
The measurement device output must be a signal that can be transmitted over some distance. Where electronic analog transmission is used, the transmitter output is 4 mA to 20 mA. Microprocessor-based transmitters transmit the value digitally in engineering units.

Static characteristics of measurement instruments
There are some important terms related to the static behavior of the sensor-transmitter
combination.

Accuracy
Accuracy is the difference between the measured value obtained by the sensor and the true value. So when a measurement reading is obtained from the instrument, we know that the true value lies in a region around the measured value, which can be expressed through an equation:
True value = Measured value ± Accuracy
The most common ways of describing the accuracy are as an absolute value or as a percentage, so that the true value of a temperature can be expressed either as 25 ± 1 C or 25 C ± 4%.
The values of the accuracy are supplied with the instrument, together with a specification of the conditions under which they hold, known as the standard conditions of the instrument. If the measurement goes outside those conditions, there are commonly correction factors available to correct the measured values.

Precision (Repeatability)
Precision is the spread between the sensed values obtained for the same true value, i.e. when the true value remains constant, how much do repeated measurements differ from each other.

Accuracy and precision
There is a major need to distinguish between accuracy and precision, as good accuracy doesn't imply good precision and vice versa.
To illustrate this, look at the target practice graphs in Figure 9.2. Assuming that the aim is at the centre of the target, the black dot is a measure of the accuracy while the green circle is a measure of the precision. In the left figure the accuracy is good, as the black dot is close to the centre, while the precision is bad, because even if the aim were in the middle the shot could end up anywhere inside the green circle. The right figure does not have good accuracy, as the black dot is far from the centre, while the precision is good, as the green circle is small, meaning that the shots will land inside the small circle for a constant aim.
The difference between the precision and accuracy can also be illustrated using a
probability density as seen in Figure 9.3.


Figure 9.2. Target practice examples of accuracy and precision


Figure 9.3 Probability distribution of a measurement


Note: Precision is of considerably higher importance than accuracy in control networks. This is for the simple reason that accuracy problems can easily be corrected for by making an additional measurement to determine the accuracy and then carrying out the correction. However, if the precision is a problem, there is no way to determine the correction that is actually necessary.
For example, if the accuracy error is zero, the reading would on average show the true value. Any discrepancy would then be due to the precision, and any correction the controller takes would be responding to error caused by the precision, causing unnecessary control action.

Other static properties
Users of a sensor/transmitter typically specify three values:
- The range specifies the boundaries of an operating region. This term is used
loosely and so it is important to distinguish between the:
o the instrument range which is characteristic of the device and set by
tolerances, materials of construction, etc. (0 to 500 C can be seen without
mechanical failure)
o the operating range or calibrated range which the device is set to detect
(for example, 20 to 200 C)
- The span is the size of the calibrated operating range (180 C)
- The zero is the measurement value corresponding to minimum signal (20 C set to
produce 4 mA)
Hysteresis refers to the output following different curves depending on whether the input value is increasing or decreasing.
Dead space is a region where the output value doesn't change even though the input changes.

Example/Exercise:
Consider a tank whose temperature is being measured and transmitted. The temperature is expected to stay between 25C and 100C.
The zero of the transmitter will be set to 25C.
The span of the transmitter will be set to 75C.
Thus, the "calibrated" or operating range of the transmitter will be 25-100C, while the instrument range turns out to be 0-200C.
Next, consider the case that it is realized that the tank temperature can vary to a higher
extent, between 10C and 150C.
The zero will be set to _________.
The span will be set to _________.
The calibrated range will be _________; the instrument range will be _________.

Static behavior of measurement instruments
The different terms characterizing a measurement instrument have been introduced, together with the typical kinds of signals usually transmitted in a control system. The next step is to study how the input signal is related to the output signal of a measurement instrument, expressed in the terms used in the previous sections as well as in equation form. For simplicity it is assumed that there is a linear relation between the input and the output, as in Figure 9.4, which shows the static behavior of the instrument mentioned in the previous example with an electrical output signal. This is the general way of picturing the behavior of the instruments and the unit-changing converters discussed in this chapter, with the input to the unit (whether measurement instrument or converting transducer) on the x-axis and the output on the y-axis.


Figure 9.4 Input-Output behavior of a thermometer

Obtaining the equation for the relation between input and output
The relation is a linear one, so the equation will be of the form y = mx + c, with m being the slope and c the y-intercept.
The slope is easily obtained by taking the quotient between the output range (Δy) and the input range (Δx), hence: m = Δy/Δx.
This can be followed by choosing a point on the line (giving an x and a y), inserting it into the straight-line equation, and solving for the unknown value c.
The same result can be obtained by using the following equation, which basically means choosing the minimum values as the reference point:

y = (Δy/Δx)·(x − x_min) + y_min

which expanded gives:

y = (Δy/Δx)·x + (y_min − (Δy/Δx)·x_min) = mx + c

Example/Exercise:
Find the equation for the relation of the temperature transmitter in Figure 9.4.
The range of the input: Δx = x_max − x_min =_____−______=_______C
The range of the output: Δy = y_max − y_min =_____−______=_______mA
y = (Δy/Δx)·(x − x_min) + y_min → y=____/____(x−_____)+_____
or y = mx + c → y=______x+______
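The slope-and-intercept calculation can be sketched as a small helper; the 25-100 C to 4-20 mA mapping below uses the illustrative ranges from the earlier transmitter example:

```python
def calibration(x_min, x_max, y_min, y_max):
    """Slope and intercept of a linear instrument relation y = m*x + c."""
    m = (y_max - y_min) / (x_max - x_min)
    c = y_min - m * x_min
    return m, c

# Hypothetical transmitter: 25-100 C calibrated range mapped onto a 4-20 mA signal
m, c = calibration(25.0, 100.0, 4.0, 20.0)
print(round(m * 25.0 + c, 1), round(m * 100.0 + c, 1))  # 4.0 at 25 C, 20.0 at 100 C
```

Checking that the two endpoints of the calibrated range map onto the signal endpoints is a quick sanity test for any such relation.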

Dynamic characteristics of measurement instruments
The static characteristics deal with the behavior of the instrument at steady state (i.e. when the reading has stabilized). Dynamic behavior, on the other hand, deals with the changes from one steady state to another, which is important for process control as it deals with the changes of the process.
The dynamic behavior of the instrument is very similar to the dynamic behavior of the systems covered in chapter 5, as the instruments, just like the systems, can be described by zeroth-, first- or second-order models.
A zeroth order instrument basically means that the change is instantaneous, so that as soon as an input change takes place the reading of the instrument changes at the same time. For example, a potentiometer used to indicate an angle would show a direct change.
First and second order instruments are described by first- and second-order differential equations and hence by the same response charts and parameters as the first- and second-order systems, as well as the standard transfer functions describing the instrument:

G_instrument(s) = K·e^(−t0·s) / (τs + 1)

or

G_instrument(s) = K·e^(−t0·s) / (τ²s² + 2ζτs + 1) = K·e^(−t0·s) / ((τ1·s + 1)(τ2·s + 1))

So terms like time constant, dead time, settling time, rise time, decay ratio and overshoot are of interest for instruments too.


Chapter 10: Control Valves
The most common final control element in the process control industries is the control
valve. The control valve manipulates a flowing fluid, such as gas, steam, water, or
chemical compounds, to compensate for the load disturbance and keep the regulated
process variable as close as possible to the desired set point.
Most industrial control valves are globe valves, so called because of the shape of their
body. Within the body, the valve stem is moved up and down (strokes) by the actuator.
This opens and closes a gap between the valve plug and the valve seat.
The parts of the valve that come in contact with the process fluid including the valve seat,
plug, etc. are collectively known as the valve trim. Trims are designed and machined to
regulate and "shape" the flow according to a specific characteristic.

Figure 10.1: Control valve taken from the control valve handbook

Valve actuators are usually pneumatic/spring devices, although piston (air on both sides),
electric motor, and hydraulic actuators are also available. The signal from the controller
passes through an I/P transducer (which is not needed in the case of a controller with a
pneumatic control signal) and is converted to a stream of pressurized air. The pressure in
the air pushes against one side of a diaphragm opposing the force caused by the spring to
move the valve stem to the desired position.
If the air signal is lost, i.e. the pressure in the system drops to zero, there is no force opposing the spring. This will lead the spring to move the valve to its failure position. Depending on whether the spring is above or below the diaphragm, valves can be Fail Closed (a.k.a. Air-to-Open or AO) or Fail Open (Air-to-Close or AC).
The failure position of a valve is a significant safety consideration and is determined early in the control system design. The two different types of valves are shown in Figure 10.2, where it can be seen that the fail open valve has the air cavity above the diaphragm, so that an increase in pressure pushes the valve stem and plug down to close the valve; hence, when the air disappears, the spring works to open the valve. The other valve, the fail closed valve, has the pressure cavity below the diaphragm, so a pressure increase pushes the diaphragm, stem and plug up, opening the valve, while the spring works to close the valve.
The safety considerations in choosing the control valve action (AO or AC) are usually handled through simple reasoning and deduction. This is easiest illustrated through an example.



Figure 10.2 Valve types. Fail open on the left and Fail closed on the right.

Example:
Specify the suitable control valve action for the following manipulated variables and give
reason(s).
(a) Steam pressure in a reactor heating coil.
(b) Flow of reactants into a polymerization reactor.
(c) Flow of effluent from a wastewater treatment holding tank into a river.
(d) Flow of cooling water to a distillate condenser.
Solution:
(a) Fail closed, as overheating the system is generally a safety problem, while too
low a temperature generally causes little damage.
(b) Fail closed would be the common choice, to avoid overflow of the system, but fail
open could be considered if the valve is designed to operate close to fully open,
in which case fully opening the valve would make little difference.
(c) Fail closed, as otherwise too much effluent could be discharged, possibly
including untreated effluent.
(d) Fail open, to ensure that the distillate is fully condensed.


The valve, actuator and I/P transducer are commonly lumped together and referred to as
the actuator system.
Other types of valves (notably rotary or butterfly valves) are used, but the globe type
control valve described is most common. Other final control elements include furnace
dampers, variable speed drives, etc.
A valve positioner is a pneumatic device which precisely positions a valve. It is essentially a small control system which makes sure that the valve is at the appropriate position for a particular pneumatic signal. For example, a 50% signal (9 psig) means the valve should be half open; if that is not the case, the positioner will adjust the pressure in the valve to obtain a 50% open valve.
Sizing Valves
The type of valve is the first consideration in the choice of valve, for safety reasons. The next step is to make sure that the valve is not undersized, which would make it impossible for the valve to pass the required flow, while an oversized valve costs more than necessary, makes the system slower, and reduces sensitivity to the control signal. Neither case permits precise regulation of the process, hence control valve sizing is an important engineering task.
The valve sizing equation should look familiar -- after all, a valve is just a flow
restriction.
q = Cv·f(l)·√(ΔPv / ρ)

where q is the volumetric flow rate, Cv is the valve coefficient, f(l) is the flow characteristic, l is the lift or valve position, ΔPv is the pressure drop over the valve and ρ is the density of the liquid. The units depend on the units used in the tables for the design of the valve, but the form itself works for any incompressible fluid.
The size coefficient has been separated into two parts: a constant value (Cv) that relates to the maximum flow the valve can pass, and the characteristic (f(l)) that describes how the valve open area varies with stem travel. This function goes to 1.0 when the valve is wide open and to 0.0 when the valve is completely closed. Most valve manufacturers do not separate the valve characteristic into parts, but instead tabulate values of Cv in their catalogs and software. When selecting a control valve, you first estimate a body size (equal to or slightly smaller than the line size) and a characteristic, and then use the table to determine which valve will provide the required size coefficient.
Valve Characteristics
The trim of a valve is designed to produce a defined characteristic relationship between
the valve positions (opening) and the flow through the valve. Three common
characteristics are linear, equal percentage, and quick opening with equal percentage the
most common.
A linear valve is the easiest to understand: the flow rate is directly proportional to valve
position. The characteristic function is simple:
f(l)=l
where the valve position is given as a fraction (0 to 1.0).
An equal percentage valve is machined so that an equal increment of travel produces an equal percentage change in flow (so it will be linear on a semilogarithmic plot). A typical characteristic function for an equal percentage valve is:
f(l) = R^(l−1)

where R is a valve design parameter, usually in the range from 20 to 50.

Finally, the quick opening valve is designed to open quickly, which is described by the following characteristic equation:
f(l) = √l

The flow characteristics are illustrated in Figure 10.3, while the valve plug designs are
shown in Figure 10.4.

Figure 10.3. Control valve characteristics.

Figure 10.4. Valve plug designs
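The three characteristic functions can be sketched directly (R = 50 is an assumed design value within the usual 20-50 range):

```python
import math

def f_linear(l):
    """Linear trim: flow fraction equals the lift."""
    return l

def f_equal_percentage(l, R=50.0):
    """Equal percentage trim; R is the rangeability design parameter."""
    return R ** (l - 1.0)

def f_quick_opening(l):
    """Quick opening trim: most of the flow early in the travel."""
    return math.sqrt(l)

# Fraction of maximum flow at half travel for each trim
l = 0.5
print(round(f_linear(l), 3), round(f_equal_percentage(l), 3), round(f_quick_opening(l), 3))
```

At half travel the equal percentage trim passes only a small fraction of maximum flow while the quick opening trim passes most of it, which is the shape difference Figure 10.3 illustrates.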

Inherent vs. Installed Characteristics
From the valve sizing equation it is obvious that the valve characteristic and the valve coefficient are not the only factors that decide the flow through the valve; the density of the liquid and, particularly, the pressure drop across the valve play a role as well. Manufacturers test valves in a rig where the pressure drop is kept constant; thus the performance they see is the inherent characteristic of the valve.
In a real plant, pressure drop varies as the flow changes, so the characteristic relationship
seen between travel and flow will not be the same as that seen in the test rig. This
installed characteristic is what really matters to a process engineer.
To understand the difference between inherent and installed characteristics, visualize a
family of linear valves as seen in Figure 10.5.
Notice how each has a different slope and maximum flow.


Figure 10.5. Flow through valve at different pressures
In many real systems, pressure drop increases with the flow, which would lead to the
installed characteristic of a linear valve taking on a shape more like the equal
percentage one. Oddly enough, when an inherent equal percentage valve is used in this
situation, the installed characteristic is nearly linear. The general goal is to achieve an
installed characteristic that is as close to linear as possible to make for consistent control,
so equal percentage valves are typically used when pressure drop varies with flow.
The pressure drop in a process system will vary due to a number of factors, such as the
friction in lines and other process equipment, the static head of the inlet, the changes due
to pumps, and of course the control valve itself. When the total flow is low, the
pressure drop over the control valve tends to be a large part of the total pressure loss of
the system, but at high flows this may not be true. A good design will respond well over
the full range of conditions, hence it is important to pick the right characteristic for your
system and size the valve for the right amount of pressure drop.
For good control, it is favorable that a fairly large portion of the overall pressure drop
takes place across the control valve. In that way the valve will have a big influence on the
total system, making the operators and control engineers happy. However, design
engineers will worry that increasing valve pressure drop will tend to increase pumping
and other operating costs, hence a compromise is necessary. As a rule of thumb,
design the system and size the valve so that 25% of the total system pressure drop
(including the valve) is taken across the control valve, with a minimum of 10-15 psig.
Example: Pressure drop varying with flow
A pump furnishes a total head of 40 psi over the entire flow rate range of interest. The
heat exchanger pressure drop is 30 psi at 200 gal/min and can be assumed to be
proportional to q². Select the rated C_v of the valve and plot the installed characteristics
for the following cases:
(i) A linear valve that is half open at the design flow rate.
(ii) An equal percentage valve (R=50) that is sized to be completely open at 110%
of the design flow rate.
(iii) Same as in (ii) except with a C_v that is 20% higher than the calculated value.
(iv) Same as in (ii) except with a C_v that is 20% lower than the calculated value.

Figure 10.6. Heat exchanger system


Solution: The information given means that the design conditions are for 200 gal/min
and that the pressure drop across the heat exchanger can be described by:

ΔP_HE = 30 (q/200)²

or q = c √(ΔP_HE) (with c = 200/√30).
The maximum pressure drop across the valve is whatever is left of the head created by
the pump, hence 40 − ΔP_HE = ΔP_v, which gives:

ΔP_v = 40 − 30 (q/200)²

This will be used together with the design equation, q = C_v f(l) √(ΔP_v/g_s), to
determine the valve coefficient.

(a) For the linear valve we have f(l) = l, and given that q = 200 when l = 0.5 and
ΔP_v = 40 − 30 = 10 psi:

200 = C_v · 0.5 · √(10/1)  ⟹  C_v = 126.5, but let's use 125,

which means the flow is described by:

q = 125 l √(40 − 30 (q/200)²)

which would have to be solved to plot l against q, as seen in Figure 10.7.


(b) For the equal percentage valve we have f(l) = 50^(l-1), and given that
q_max = 1.1 q_design = 220 when l = 1:

220 = C_v · 50^(1-1) · √(40 − 30 (220/200)²)  ⟹  C_v = 114.4, but let's use 115,

which means the flow is described by:

q = 115 · 50^(l-1) √(40 − 30 (q/200)²)

which again would have to be solved to plot l against q, as seen in Figure 10.7.
(c) C_v = 1.2 · 115 = 138:  q = 138 · 50^(l-1) √(40 − 30 (q/200)²)

(d) C_v = 0.8 · 115 = 92:  q = 92 · 50^(l-1) √(40 − 30 (q/200)²)

Figure 10.7. Installed valve characteristics for the example

From Figure 10.7 the problem with the linear valve can be seen: the flow rate as a
function of the valve position is not a very linear relation. The equal percentage valves
exhibit a more linear relation, which would make it easier to design a controller to use
the valve as an actuator, while having a similar maximum flow rate (220 gal/min).
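The implicit relation q = C_v f(l) √(40 − 30 (q/200)²) from this example can actually be solved in closed form by squaring both sides and isolating q. A small sketch under the example's assumptions (the function name is illustrative):

```python
import math

def installed_flow(l, Cv, kind="linear", R=50):
    """Installed flow (gal/min) for the example system: valve in series with
    the heat exchanger, pump head 40 psi, exchanger drop 30*(q/200)**2.

    Solves q = Cv*f(l)*sqrt(40 - 30*(q/200)**2) by squaring and isolating q."""
    f = l if kind == "linear" else R ** (l - 1)
    C = Cv * f  # effective coefficient at this lift
    return C * math.sqrt(40.0) / math.sqrt(1.0 + 30.0 * C ** 2 / 200.0 ** 2)
```

Evaluating this over l from 0 to 1 reproduces the curves of Figure 10.7: the linear valve (C_v = 125) passes about 199 gal/min at l = 0.5, and the equal percentage valve (C_v = 115) passes about 220 gal/min fully open.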

Valve Sizing/Design Procedure
These design criteria are done under the assumption that the pump has been chosen
appropriately to make sure that the pressure drop over the valve at the design flow rate is
25% of the total pressure drop of the system. If the pressure drop over the valve exhibits a
linear behavior, which is rare, a linear valve could be used; otherwise the equal
percentage valve is commonly used.
The next step is to decide on the valve coefficient, C_v, that is suitable for the case of
interest, which can be done by using any of the following three criteria:

1. Size for maximum flow
o Estimate the maximum required flow
o Calculate system pressure drop without the valve
o Choose a valve that will pass the maximum flow when about 90% open
o Check for cavitation, flashing, etc.
2. Size for minimum flow
o Choose a valve that will pass the minimum when about 10% open
3. Size for "normal" flow
o Choose a valve that will pass the normal flow when about 50- 70% open
Example:
Using criterion 1 for the heat exchanger system in the previous example, assuming that
the maximum required flow rate is 220 gal/min when the valve is 90% open (l=0.9), we
get:

220 = C_v · 50^(0.9-1) · √(40 − 30 (220/200)²)  ⟹  C_v = 169

This value would be used to find which valve is needed by looking in tables and
finding the smallest valve that has at least that valve coefficient (C_v). Using
Figure 10.8, the choice would be the 4 inch valve as that has C_v = 224, while a smaller
valve would drop the C_v below the desired value.
It should be noted that this is an oversimplification; other variables play a role as well,
which have been neglected in this case.
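This sizing calculation is easy to automate; a sketch under the same assumptions (equal percentage valve, specific gravity 1; names are illustrative):

```python
import math

def required_cv(q_max, l, dp, R=50, gs=1.0):
    """Valve coefficient needed to pass q_max (gal/min) at fractional lift l
    for an equal percentage valve, given the available pressure drop dp (psi)
    and specific gravity gs; from q = Cv * R**(l-1) * sqrt(dp/gs)."""
    return q_max / (R ** (l - 1) * math.sqrt(dp / gs))

# available valve drop at 220 gal/min: 40 - 30*(220/200)**2 = 3.7 psi
cv = required_cv(220, 0.9, 40 - 30 * (220 / 200) ** 2)
```

This reproduces the C_v ≈ 169 above; the table lookup (picking the 4 inch valve with C_v = 224) still has to be done by hand.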

Figure 10.8. Valve table from the Control Valve Handbook

You want to maintain valve turndown and operability across the full range of anticipated
conditions. A valve that is closed or wide open cannot modulate the flow, so you should
avoid any plans to operate with the valve "pegged".
You also should note that automatic control valves are not designed to produce "tight
shutoff", and will likely be prone to small amounts of leakage when closed.
Consequently, most control valve installations include block valves (manual valves which
can be closed when complete shutoff is needed).
Cavitation
When liquid flows through a valve, the valve acts as an obstruction (i.e. a reduction of the
cross sectional flow area), which leads to an increase in flow speed. This in turn leads
to a pressure drop within the valve (which can be derived from Bernoulli's equation).
If the minimum pressure of the flowing fluid falls below the vapor pressure, as seen in
Figure 10.9, vapour bubbles form, which can cause problems.

Figure 10.9. Pressure drop across valve, pressure dropping below vapour pressure


In the case when the exit pressure remains below the vapor pressure we get flash
vaporization, which can lead to choking, where bubbles choke the valve so that changes
in the pressure drop across the valve no longer affect the flow rate.
In the case when the exit pressure comes back above the vapor pressure, the vapor
bubbles collapse in tiny detonations, producing noise, vibration, and, more problematic,
physical damage of the pipe surfaces. This cavitation can be very destructive.
Control valves should always be designed to avoid cavitation and flashing. In extreme
cases, it may be necessary to use multiple control valves in series to prevent the
pressure from ever dropping below the vapour pressure, as seen in Figure 10.10.

Figure 10.10. Pressure drop across multiple control valves

Control valve transfer function
The gain of the valve, as for other devices, is the steady state change between output and
input. The input for the valve is commonly a pneumatic signal coming from the
controller, but to make it easier to deal with, % controller output (%CO) is used instead,
where the signal varies between 0% and 100%. The output for the valve is the flow
through the valve, which gives the following equation, evaluated at the flow rate of
interest:

K_v = dq/du  [flow rate per %CO]

Applying the chain rule of differentiation the following relation is obtained:

K_v = (dvp/du) (dC_v/dvp) (dq/dC_v)
The valve position is commonly a linear function of the control signal, where the valve
position varies between 0 (closed) and 1 (fully opened), giving the following relation:

dvp/du = (position range)/(input range) = 1/100%

The second part, the valve coefficient's dependence on the valve position, gives for linear
valves:

dC_v/dvp = C_v,max

For the last term, the relation between flow rate and valve coefficient, the installed
characteristics (like pressure drops over other equipment) have to be taken into
consideration. For the simplest case, when there is a constant pressure drop over the valve,
the following relation is obtained:

dq/dC_v = √(ΔP_v/g_s)


Giving the following equation to determine the steady state gain:

K_v = (1/100%) C_v,max √(ΔP_v/g_s) = q_max/100%



A control valve is usually modeled as a first order system:

G_v(s) = K_v/(τ_v s + 1)

where the time constant can be obtained by studying the time it takes to change the valve
position. The valve actuator time constant is commonly on the order of just a couple of
seconds. In many chemical engineering processes the other time constants are commonly
on the order of minutes, hence the time constant of the valve is so small in comparison to
the others that it is commonly neglected:

G_v(s) = K_v
Chapter 11: Feedback controllers

This section will deal with the most important type of industrial controllers, introducing
the different algorithms for different types of analog controllers. These analog controllers
are designed as standalone controllers that will control just one system (and in this course
just one single variable) without taking into consideration what is happening outside the
system.
In chapter 1 the task of making a shower feel comfortable was discussed, which is a
simple example of feedback control. However, that is a manual case of feedback control,
which might work well on a home scale but would not be practical on an industrial scale.
The action of the controller in making the shower comfortable (or whatever control
action we are interested in) has to be mimicked by an automatic controller
carrying out the following steps:

1. Compare the process signal of the controlled variable, which is received from a
transmitter, to the set point (or reference value).
2. Calculate the appropriate control action using an algorithm programmed into the
control unit by the control engineer.
3. Send an appropriate signal to the actuator, which will influence the system to
bring, or keep, the controlled variable to/at its set point.

The controllers have a multitude of different settings, like the adjustment of the set point,
displaying the value of the controlled variable/manipulated variable, switching between
manual mode and automatic mode, and the manual adjustment of the manipulated
variable. The automatic/manual control mode means that an operator can choose to adjust
a manipulated variable based on intuition, experience, etc., which would remove the effect
of the set point: the output would depend only on the manually set manipulated variable.
Such an approach is inconsistent and not practical in a situation where there are many
control loops.

Before actually studying how to decide on the algorithm for the controller, the controller
action has to be decided, because if the action of the controller is not selected
appropriately the controller won't be able to perform its desired task, i.e. it won't control.
Returning once again to the shower problem, but now assuming that automatic control
of the temperature is desired and a thermometer has been installed.
Assuming that the temperature of the water in the shower (the controlled variable) is
increasing, the controller must take action to decrease the heat input to be able to bring
the controlled variable towards the set point. Assume that the heat supply from the heater
is a linear function of the control signal, i.e. the heat increases with an increase in control
signal. This means that for an increase in the controlled variable the controller must
decrease the control signal, which is referred to as reverse action, or increase/decrease.
Another method to try to control the temperature would be to change the flow rate. So for
an increase in temperature the flow through the system would have to increase to reduce
the temperature. To choose the right control action the type of valve would have to be
known, and in this case it is assumed to be an air-to-open valve. To open the air-to-open
valve more, the pneumatic signal to the valve has to increase, hence the control
output must increase. This is referred to as direct action, or increase/increase, as the
control action for an increase in the controlled variable is an increase in the manipulated
variable.
This means that to design the controller appropriately the following things must be
determined:
1. The process requirement for control (i.e. how the manipulated variable has to be
changed to move the controlled variable to the set point)
2. The action of the control valve or any other actuator/control element (as if an air-
to-close valve were used it would be reverse acting and hence produce a reversal of
the overall action).

Types of Feedback controllers
In feedback control, as specified in Figure 11.1, the controller input is the control error e,
which in a way means that the task of the controller is to make the control error zero.
The error is specified as the difference between the set point r and the controlled variable y:

e(t) = r(t) − y(t)  or  e(t) = y_sp(t) − y_m(t)

where y_sp is the set point (i.e. the equivalent of r) and y_m is the measured value of the
controlled variable, which is obtained through a sensor-transmitter unit.

Figure 11.1. Feedback control loop

On-Off Control
The simplest control algorithm is on-off control. This approach is employed by most
home thermostats and in freezers and fridges. It works by having one state when the
control error is positive and another when the control error is negative, which can be
expressed mathematically as:

u(t) = u_1 when e > 0
u(t) = u_2 when e < 0

where u_1 and u_2 could be either on or off respectively, depending on the action of the
system.

Example: Electric heating system
A heating system is installed to make sure that the temperature doesn't drop below 25 C;
if the temperature drops below 25 C it is desired to turn the heater on to make the room
warmer. When the temperature drops the control error becomes positive, hence u_1 should
be on and u_2 off, switching off the heating when the temperature is above 25 C.
Another name for this controller is bang-bang control: it goes bang (on) and later bang
(off) in a non-smooth manner. The bang-bang action highlights one of the problems with
this type of controller, as it might start oscillating around the set point, switching back and
forth at small intervals. To reduce the switching, a dead band can be included to create
an area of no action. For the temperature case, it could be made so that the heater doesn't
switch on until the temperature drops to 24.5 C and keeps heating until 25.5 C, which
would create longer gaps between the switching actions.
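The on-off rule with a dead band takes only a few lines of code. This is an illustrative sketch (names are made up); note how the controller keeps its previous state while the temperature is inside the band:

```python
def make_thermostat(low=24.5, high=25.5):
    """On-off (bang-bang) heater control with a dead band: switch the
    heater on below `low`, off above `high`, and keep the previous
    state anywhere inside the dead band."""
    state = {"on": False}

    def control(temp):
        if temp < low:
            state["on"] = True      # error positive, below the band: heat on
        elif temp > high:
            state["on"] = False     # error negative, above the band: heat off
        return state["on"]          # inside the band: no switching
    return control
```

With the defaults, the heater turns on at 24.0 C, stays on through 25.0 C (inside the band), and only turns off once 25.5 C is exceeded, giving the longer switching gaps described above.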

Proportional Control (P-only Control)
On-off control takes exactly the same action for small errors as it does for huge ones. The
resulting cycling might be reduced if the controller was able to make big changes when
the error is big, but small changes when the error is small, which is the case with a
proportional controller:

u(t) = ū + K_c e(t)  or, when using deviation variables,  u'(t) = K_c e(t)

ū is the bias (also written p_0, m_ss or c_0), the desired constant output when the error is
zero. Usually the bias is the nominal steady-state value of the signal to the final element.
If it is not determined in advance (from modeling) it is often initially set at mid-scale
(50% CO or 12 mA).
K_c is the controller gain (CO%/TO%, i.e. dimensionless), and a unity gain (K_c = 1) means
that a 10% change in error will produce a 10% change in controller output.
Dimensionless gains are the preferred approach whenever dealing with control signals;
but when analyzing loops, process variables may be used instead of signals, in which
case gains will not be dimensionless.

Continuing with the deviation variables, a transfer function for the controller can be
obtained as:

G_c = U(s)/E(s) = K_c,  from U(s) = K_c E(s)
The tuning (choice) of the magnitude and the sign of the gain decides the sensitivity and
action of the controller respectively. The primary effect of increasing K_c is faster
response. The action is categorized into two groups, direct and reverse acting. These
terms can be defined in two different ways (and are, so different books/engineers might
use different definitions). The definition used in this course is the relation between the
controlled variable (y(t) or c(t)) and the manipulated variable (controller output) (u(t) or
m(t)). Direct action is defined as taking place when an increase in y(t) leads to
an increase in u(t). An increase in y(t) means that e(t) must be decreasing (assuming that
r(t) is constant). So to achieve direct action K_c must be negative, following the steps that
an increase in y(t) gives a decrease in e(t), changing the sign with K_c to get an increase in
u(t). As a direct consequence, a positive K_c will give a reverse acting controller.

A P-only controller won't necessarily drive a process back to perfect set point (i.e. a zero
control error). The permanent error which results is called "offset". To highlight the
behavior of the offset, as well as the speed of the system, let's turn to an example.

Example:
For a system like in Figure 11.1, G_v = 1 and G = b/[s(s+a)] where a and b are positive (a
second order process with a pole on the imaginary axis).
For a case like this the overall transfer function is:

Y(s)/R(s) = G_c G/(1 + G_c G) = K_c G(s)/(1 + K_c G(s))

which with the actual transfer functions can be simplified:

Y(s)/R(s) = [K_c b/(s(s+a))]/[1 + K_c b/(s(s+a))] = bK_c/(s² + as + bK_c)
          = 1/(τ²s² + 2ζτs + 1)

and hence the gain, characteristic time and dampening ratio can be determined:

K = 1,  τ = 1/√(bK_c),  ζ = a/(2√(bK_c))
The tuning parameter (the controller gain K_c) has an inverse effect on the
dampening ratio. From studying the second order system, in Chapter 6, it is known that
with a smaller dampening ratio a more oscillatory behavior is obtained, but the system
will also be faster.
This means that increasing the controller gain would create a faster response, but at the
price of a more oscillatory behavior, possibly even leading to stability problems.

The offset effect can be studied in a simpler system, where G_v = 1 and G = b/(s+a),
which gives the following transfer function:

Y(s)/R(s) = [K_c b/(s+a)]/[1 + K_c b/(s+a)] = K_c b/(s + a + K_c b) = K/(τs + 1)

with K = K_c b/(a + K_c b) and τ = 1/(a + K_c b).

For a step change of magnitude Δz the following is obtained:

Y(s) = [K/(τs + 1)] (Δz/s) = KΔz (1/s − τ/(τs + 1))

Carrying out the inverse Laplace transform we get the behaviour of the system as:

y(t) = KΔz (1 − e^(−t/τ))
This means that when the set point changes by Δz the output changes by KΔz,
which means that we get a final error (the offset):

e(∞) = Δz (1 − K)  or  Δz (1 − K_c b/(a + K_c b)) = Δz a/(a + K_c b)
Consequently, the offset decreases as K_c increases. But if it is increased too much,
stability problems will commonly arise. This is shown in Figure 11.2, where a setpoint
change is made from 25 to 30. The controller doesn't manage to bring the system to 30,
but it has got closer with the increase of the controller gain. It can also be noted that the
higher gain created an overshoot, which is the first sign that the system is moving
towards a possible instability.
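The offset e(∞) = Δz·a/(a + K_c b), the simplified form of the expression derived above, is easy to check numerically (a small sketch; the symbols follow the example):

```python
def p_only_offset(a, b, Kc, dz):
    """Steady-state offset after a setpoint step of magnitude dz for the
    first-order loop of the example: e(inf) = dz * a / (a + Kc * b)."""
    return dz * a / (a + Kc * b)
```

For a = b = 1 and a step of Δz = 5, a gain of K_c = 4 leaves an offset of 1.0, while K_c = 9 shrinks it to 0.5; for any finite K_c the offset never reaches exactly zero.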

Figure 11.2. P- control for two different controller gains.

Integral Control (Reset)
In some cases it might actually be OK to have a bit of an offset, but in many cases it is
preferred to completely eliminate offset, so we need a way to penalize persistent errors.
This can be done using the integral of the error rather than the instantaneous error,
giving rise to the following controller equation:

u(t) = ū + K_i ∫₀ᵗ e(x) dx  and in deviation variables  u'(t) = K_i ∫₀ᵗ e(x) dx

giving the following transfer function:

U(s)/E(s) = K_i/s  commonly expressed as  U(s)/E(s) = 1/(τ_i s)
This will eliminate the offset, since the integral will keep increasing as long as there is an
error, which means that the controller output will keep changing until the error is zero.
As can be seen from Figure 11.3 it removes the error, but at the cost of a very slow
response, since it won't act strongly until the error has begun to accumulate (when the error
is summed up by the integral).
Another problem is that integral action makes a loop more oscillatory, which is also due
to the summing-up effect of the integral. If a change creates a positive error we keep
summing the errors up as the controller acts (5+5+3+1+0). This has brought the system to
a zero error case, but the integrated error is 14, so the controller will keep acting in the
same way, hence creating a negative error. Of course the negative error will in the end
make the summed error zero, but at that point a negative error is obtained instead
(5+5+3+1+0-2-2-3-3-2-2). This can be seen as starting from -2 and then repeating the
process again, as seen in Figure 11.4.

Figure 11.4 Oscillatory behavior of an I-controller

When a fixed error, which may be caused by calibration problems, valve saturation,
overrides, etc., continues for a long time, the integrator can stick at a large value. This
usually causes the control element to stick at its minimum or maximum point (for a valve,
fully closed or fully open). The effect is referred to as "reset windup". Most controllers
have an "anti-reset windup" feature, where the error is calculated as the difference
between the set point and the controller output rather than the set point and the process
measurement, at least for the integral control calculations.
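One common anti-windup scheme, conditional integration (stop accumulating the integral while the output is saturated), differs from the external-reset form described above but is easy to sketch; the function name and tuning values here are purely illustrative:

```python
def pi_step(error, integral, dt, Kc=2.0, tau_i=5.0,
            bias=50.0, u_min=0.0, u_max=100.0):
    """One step of a discrete PI controller with conditional-integration
    anti-reset-windup: the integral only accumulates while the output
    is not saturated. Returns (clamped output, updated integral)."""
    u = bias + Kc * (error + integral / tau_i)
    if u_min < u < u_max:
        integral += error * dt      # integrate only when unsaturated
    return min(max(u, u_min), u_max), integral
```

With a huge persistent error the output pegs at u_max while the integral stays frozen, so the controller can come off the limit as soon as the error reverses, instead of waiting for a wound-up integral to drain.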

Proportional-Integral Controller (PI)
Combining the proportional controller, which has a reasonably fast action, with an
integral controller, which removes the offset, gives a commonly used controller. The PI
controller can be described by the following equation:

u(t) = ū + K_p e(t) + K_i ∫₀ᵗ e(x) dx
Utilizing deviation variables the following transfer function is obtained:

U(s)/E(s) = K_p + K_i/s = K_c (1 + 1/(τ_i s))  where K_c = K_p and τ_i = K_c/K_i

Example:
To study the effects of the proportional and integral parts when combined like this, it is
assumed that G_v = 1 and G = b/(s+a) where a and b are positive, which gives the transfer
function:

Y(s)/R(s) = G_c G(s)/(1 + G_c G(s)) = (bK_p s + bK_i)/(s² + (a + bK_p)s + bK_i)

This is also a second order system, which means the characteristic time and dampening
ratio can be obtained:

τ = 1/√(bK_i)  and  ζ = (a + bK_p)/(2√(bK_i))
To obtain as good control as possible it would be desired to have τ as small as possible
while keeping ζ positive and sufficiently big. So to obtain a fast system it would be
desired to make K_i big (or τ_i small), which would make ζ small and give a possibly
unstable system. But the dampening ratio has two variables in it, which means the effect
of making the dampening ratio too small may be countered by increasing K_p. This is
another reason why PI-controllers are preferred to I-controllers; they can be made less
oscillatory and hence more stable, as the proportional part has a stabilizing property in
this case.
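The trade-off can be made concrete by computing τ and ζ for a given tuning (a small sketch; the symbols follow the example above):

```python
import math

def pi_closed_loop_params(a, b, Kp, Ki):
    """Characteristic time and dampening ratio of the PI closed loop
    from the example: tau = 1/sqrt(b*Ki), zeta = (a + b*Kp)/(2*sqrt(b*Ki))."""
    root = math.sqrt(b * Ki)
    return 1.0 / root, (a + b * Kp) / (2.0 * root)
```

With a = b = 1, raising K_i from 1 to 4 halves τ (faster loop) but also halves the contribution of a to ζ; raising K_p from 1 to 3 then restores ζ = 1 without slowing the loop, which is exactly the compensation described above.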

Derivative effect (rate)
The PI controller is affected by the magnitude (P) and persistence (I) of the error. There
are times when it would be good to have the controller take action based on how
quickly the error is changing. As the derivative is a measurement of change, the derivative
effect was introduced in control:

u(t) = ū + K_d de(t)/dt

This is the algorithm for ideal derivative action. Looking at it, you should suspect that it
will not be easy to implement. A "D only" controller would be silly as it wouldn't take
any action unless the error was changing, so it is never used by itself. Even so, derivative
action is only used in around 10% of controllers.
The major reasons for this are that it tends to amplify signal noise (noisy signals oscillate
quickly and therefore have big and rapidly changing derivatives), and that even without
noise it may create rapid changes in the actuator, which can damage sensitive actuators.
So derivative action is only used when it is important to reduce oscillation, as it has a
stabilizing property.
A sudden change in setpoint, and hence in error, can make the derivative term
suddenly very large. This "derivative kick" can be bad for a process, so most commercial
controllers feed the value of the measurement to the derivative part rather than the error,
which would give a control equation of the form:

u(t) = ū − K_d dy_m(t)/dt



PD controller
It would however be totally useless to try to use a derivative controller by itself, as it
would only take action when there are changes in the control error; as long as the
control error is constant the derivative action won't do anything, no matter how big the
error actually is.
To get a practical controller utilizing the derivative action it is combined with a
proportional controller to give:

u(t) = ū + K_p e(t) + K_d de(t)/dt
This is the same as was done for the PI-controller, and just as then, deviation variables
are utilized to get the following transfer function for the PD-controller:

U(s)/E(s) = K_p + K_d s = K_c (1 + τ_D s)  where K_c = K_p and τ_D = K_d/K_p

Example:
To study how the proportional part and derivative part affect the behavior of the system,
the following case is assumed: G_v = 1 and G(s) = b/(s+a)² where a and b are positive,
which gives the transfer function:

Y(s)/R(s) = G_c(s)G(s)/(1 + G_c(s)G(s))
          = [(K_p + K_d s) b/(s+a)²]/[1 + (K_p + K_d s) b/(s+a)²]
          = (bK_d s + bK_p)/(s² + (2a + bK_d)s + a² + bK_p)
Which again is a second order system, which means extracting the characteristic time and
the dampening ratio will show how the parameters of the controller affect the stability
and speed of the controller:

τ = 1/√(a² + bK_p)  and  ζ = (2a + bK_d)/(2√(a² + bK_p))

From the characteristic time, an increase in the proportional gain K_p gives a faster
system, while from the dampening ratio, an increase in the derivative gain K_d increases
ζ and hence dampens oscillations, illustrating the stabilizing property of the derivative
action.

PID Control (Proportional-Integral-Derivative, Three Mode)
The three mode controller utilizes all three modes of control presented in this chapter
to enable the controller to be affected by the magnitude (P), persistence (I) and rate of
change (D) of the error. Taking the different controllers summed (in parallel) together,
while utilizing deviation variables, the following description is obtained:

u(t) = K_p e(t) + K_i ∫₀ᵗ e(x) dx + K_d de(t)/dt
     = K_c (e(t) + (1/τ_i) ∫₀ᵗ e(x) dx + τ_d de(t)/dt)
Giving the following transfer function:

G_c(s) = U(s)/E(s) = K_p + K_i/s + K_d s = K_c (1 + 1/(τ_i s) + τ_d s)
These transfer functions are commonly changed in the ways mentioned in the previous
sections, to avoid the effects of reset windup for the integral part and derivative kick
for the derivative part. So the actual form of the transfer function can vary from case to
case, but the transfer function above can be considered the standard form. Even so, the
implementation of the standard form has its drawbacks and is commonly modified to
make the derivative part more robust towards measurement noise:

G_c(s) = K_c (1 + 1/(τ_i s) + τ_d s/(ατ_d s + 1))
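Putting the pieces together, here is a discrete PID sketch with the integral acting on the error and a filtered derivative acting on the measurement (to avoid derivative kick and noise amplification). This is an illustrative implementation for this text, not a vendor algorithm; class and parameter names are made up:

```python
class PID:
    """Discrete PID: proportional and integral act on the error, while the
    derivative acts on a first-order-filtered measurement (filter time
    constant alpha*tau_d), as discussed above."""

    def __init__(self, Kc, tau_i, tau_d, dt, alpha=0.1, bias=0.0):
        self.Kc, self.tau_i, self.tau_d, self.dt = Kc, tau_i, tau_d, dt
        self.alpha, self.bias = alpha, bias
        self.integral = 0.0
        self.d_filt = 0.0
        self.y_prev = None

    def update(self, setpoint, y):
        e = setpoint - y
        self.integral += e * self.dt
        # derivative of the measurement; minus sign because de/dt = -dy/dt
        # when the setpoint is constant
        raw_d = 0.0 if self.y_prev is None else -(y - self.y_prev) / self.dt
        tf = self.alpha * self.tau_d
        self.d_filt += self.dt / (tf + self.dt) * (raw_d - self.d_filt)
        self.y_prev = y
        return self.bias + self.Kc * (e + self.integral / self.tau_i
                                      + self.tau_d * self.d_filt)
```

A setpoint jump produces no derivative spike (only the P and I terms react), while a fast change in the measurement does excite the derivative term.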


Lec 14: Stability
A common consequence of feedback controllers is an oscillatory response, as was shown
in the previous chapter. If the oscillations disappear reasonably quickly the controller
would be considered satisfactory from a stability perspective, while undamped oscillations
or a runaway process would be an unstable case which should be avoided.

An unconstrained linear system is said to be:
Stable if the output of the system is bounded for all bounded inputs (the natural
response approaches zero as time approaches infinity)
Unstable if the natural response grows without bound as time approaches infinity;
Marginally stable if the natural response neither decays nor grows without bound, but
remains constant or oscillates as time approaches infinity.

The goal of any control engineer is to produce a controller that exhibits a stable behavior.

Stability as a function of the location of the poles
Any transfer function can be described in the following form (also discussed in chapter
4):

G(s) = (b_m s^m + b_(m-1) s^(m-1) + ... + b_0) / (a_n (s − r_1)(s − r_2)...(s − r_n))

which can be expanded in partial fractions to get:

G(s) = A_1/(s − r_1) + A_2/(s − r_2) + ... + A_n/(s − r_n)

or, for a pair of complex conjugate poles r_i = σ_i ± iω_i:

G(s) = ... + A_i1/(s − σ_i − iω_i) + A_i2/(s − σ_i + iω_i) + ...

Giving rise to the following behavior in the time domain:

y(t) = A_1 e^(r_1 t) + A_2 e^(r_2 t) + ... + A_n e^(r_n t)

for the real case, and

y(t) = ... + B e^(σt) cos ωt + C e^(σt) sin ωt = ... + e^(σt)(B cos ωt + C sin ωt)

for the case with complex poles.
By studying the behavior of the systems, y(t), it should be obvious that every root must be
negative (r_i < 0 for all i) or have a negative real part (σ_i < 0 for all i) for y(t) to
decrease with time. These requirements can be stated as:
When all the poles of the system are in the left half-plane (have negative real parts),
then the system is stable.
When the system has at least one pole in the right half-plane (with positive real part),
then the system is unstable.
When all poles of the system are in the left half-plane or on the imaginary axis, and
all the poles on the imaginary axis are of multiplicity one, then the system is marginally
stable (i.e. it has bounded continuing oscillations).
When a system has poles of multiplicity greater than one on the imaginary axis, then the
system is unstable.
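These four rules translate directly into a classification function over the pole locations (a sketch for this text; poles are given as complex numbers):

```python
def classify(poles, tol=1e-9):
    """Classify stability from pole locations per the rules above:
    any pole in the right half-plane -> unstable; a repeated pole on the
    imaginary axis -> unstable; otherwise any pole on the axis ->
    marginally stable; all strictly in the left half-plane -> stable."""
    poles = [complex(p) for p in poles]
    if any(p.real > tol for p in poles):
        return "unstable"
    axis = [p for p in poles if abs(p.real) <= tol]
    for i, p in enumerate(axis):
        if any(abs(p - q) <= tol for q in axis[i + 1:]):
            return "unstable"   # repeated axis pole gives growing t*e^(iwt) terms
    return "marginally stable" if axis else "stable"
```

For instance, poles at −1 and −2 give a stable system, a conjugate pair at ±i gives bounded continuing oscillations (marginally stable), and a double pole at +i is unstable.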

Figure 12.1 Simplified feedback diagram.

The characteristic equation
This holds for any part of the system, whether it includes a controller or not. For a
feedback control system as seen in Figure 12.1 the following transfer function is
obtained:

Y(s) = [G_c G_p/(1 + G_c G_p)] R(s) + [G_d/(1 + G_c G_p)] D(s)

where G_c G_p can be substituted using the open loop transfer function definition: L(s).
The previous section showed that the part of the transfer function deciding the stability is
the denominator; hence the only part that has to be studied is 1 + L(s). Studying the poles
of a feedback loop is the same thing as studying the roots of the following equation:

1 + L(s) = 0

This equation is the characteristic equation of a closed loop system and will be used as
soon as we want to study the stability of a system.

Example:
Determine the stability region (i.e. where the closed loop is stable) for a process described
by G_p = 1/[(2s+1)(5s+1)] and with a P-controller: G_c = K_c.
This gives the following characteristic equation: 10s² + 7s + (1 + K_c) = 0.
To determine the stability, the roots of the characteristic equation are found:

r_(1,2) = −0.35 ± √(0.35² − (1 + K_c)/10)
The subtraction is sure to render a negative root (or a negative real part), so the deciding
one is the addition, which gives the following requirement:

−0.35 + √(0.35² − (1 + K_c)/10) < 0
√(0.35² − (1 + K_c)/10) < 0.35
0.35² − (1 + K_c)/10 < 0.35²
(1 + K_c)/10 > 0
K_c > −1
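The stability region K_c > −1 can be verified by computing the roots directly (a sketch using the quadratic formula; function names are illustrative):

```python
import cmath

def closed_loop_roots(Kc):
    """Roots of the characteristic equation 10 s^2 + 7 s + (1 + Kc) = 0."""
    disc = cmath.sqrt(7 ** 2 - 4 * 10 * (1 + Kc))
    return (-7 + disc) / 20, (-7 - disc) / 20

def is_stable(Kc):
    """Stable iff both roots have strictly negative real parts."""
    return all(r.real < 0 for r in closed_loop_roots(Kc))
```

Any K_c above −1 gives a stable loop (the roots are either both real and negative, or complex with real part −0.35); at K_c = −1 a root sits at the origin (marginal), and below −1 a root crosses into the right half-plane.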

Exercise:
Determine the stability region (i.e. where the closed loop is stable) for a process described
by G_p = 0.2/(−s+1) [this is an unstable process, which we will try to stabilize by adding a
controller] and with a P-controller: G_c = K_c.
The characteristic equation 1 + G_p G_c = 0 is: −s + 1 + 0.2K_c = 0
The root of the characteristic equation is:________________
As the root has to be negative we have that K_c must be less than:______

A necessary condition for stability
Finding the roots of the characteristic equation and choosing Kc in such a way as to make
them negative, or at least give them negative real parts, is feasible if the characteristic
equation is of first or second order; if it is more complicated, another way of determining
stability is needed.

Stability is obtained if all the poles are in the left half-plane. In that case the denominator
can be written as (s + a1)(s + a2)...(s + an), where each ai is either real and positive or
complex with a positive real part: for a real pole ri < 0 we have ai = -ri > 0, and for a
complex pole ri = αi + βi·j with αi < 0 we have ai = -αi - βi·j with real part -αi > 0.
The product of the terms of the denominator would be:

s^n + (a1 + a2 + ... + an)·s^(n-1) + ... + a1·a2·...·an

which is a polynomial with all positive coefficients.

Hence, for a system to be stable, all the coefficients of the denominator must be
positive (i.e. the roots of the denominator must have negative real parts).

It means that if any of the coefficients of the denominator polynomial is negative or
missing, then the system is not stable.

Note 1: This is just a necessary condition as all the coefficients can be positive without all
of the poles being in the negative half-plane.

Note 2: There is one exception to this rule, and that is if all the coefficients are negative,
in which case the equation could be multiplied by -1, to become positive, but maintain
the same roots, hence a system with all coefficients negative would also fulfill the
necessary condition for stability.

Routh-Hurwitz Criterion
The previous section just gave a first tool to determine stability (much like a coarse
sieve). A more accurate tool is needed (i.e. a finer mesh in the sieve). The Routh-Hurwitz
Criterion (or Routh stability criterion, or Routh-Hurwitz stability criterion) is based on a
method that determines the number of poles in the right half-plane of an equation of the form:

an·s^n + an-1·s^(n-1) + ... + a1·s + a0 = 0

As has already been demonstrated, a pole (a zero of the characteristic equation) in the
right half-plane means that the system is unstable, which is the foundation of the Routh-
Hurwitz criterion.
Assuming that the necessary condition is satisfied and all coefficients are positive (aj > 0
for j = 1, ..., n), the next step is to produce the Routh array, also referred to as the Routh
table, as seen in Table 12.1.

Table 12.1. The Routh array

Row 1:    an     an-2    an-4   ...
Row 2:    an-1   an-3    an-5   ...
Row 3:    b1     b2      b3     ...
Row 4:    c1     c2      ...
 ...
Row n+1:  z1
As seen, the coefficient connected to the highest power is put in the first row and first
column; the next entry in the first row is the coefficient for the third-highest power, so
one coefficient is skipped. This procedure is repeated with decreasing power, taking
alternating coefficients and writing them in the first row.
When the first row is completed, attention turns to the second row, where the
coefficients that were skipped for the first row are placed, again going from the highest
power to the lowest power.
To fill in all the remaining rows it is a matter of using the following formula:

b1 = (an-1·an-2 - an·an-3) / an-1

which, as seen, is a matter of:
- taking the product of the entry just above the space to be filled and the entry in the
row above and in the column to the right of the space to be filled,
- followed by subtracting the product of the entries that were skipped over in the rows
and columns used for the first product, and
- finally dividing by the first-column entry in the row above the space to be filled.
This is repeated for all entries in row 3; hence the next column entry is defined by:

b2 = (an-1·an-4 - an·an-5) / an-1
The procedure for a row is stopped when the last coefficient, a0, is reached, as all
following entries in the array would be zero.
Then the same procedure is applied to row 4, the first entry being defined by:

c1 = (b1·an-3 - an-1·b2) / b1
The table is completed when row n+1 (n being the order of the characteristic equation) is
reached, at which point any further row would result only from 0 - 0 subtractions.


Example 1: Creating a Routh table. Consider a feedback system in figure 3.

Stability criterion
For a system to be stable, all the entries in the first column of the Routh array have to
be positive.
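The array construction and the first-column test can be sketched in Python (an illustration only: it assumes no first-column entry becomes zero, so the special cases are not handled):

```python
def routh_first_column(coeffs):
    """First column of the Routh array for coeffs = [a_n, a_n-1, ..., a_0].

    Assumes every leading entry stays nonzero (no special-case handling).
    """
    row1 = list(coeffs[0::2])
    row2 = list(coeffs[1::2])
    row2 += [0.0] * (len(row1) - len(row2))   # pad to equal length
    rows = [row1, row2]
    for _ in range(len(coeffs) - 2):          # build rows 3 .. n+1
        top, above = rows[-2], rows[-1]
        new = [(above[0] * top[j + 1] - top[0] * above[j + 1]) / above[0]
               for j in range(len(above) - 1)]
        new.append(0.0)
        rows.append(new)
    return [r[0] for r in rows[:len(coeffs)]]

def routh_stable(coeffs):
    """Stable when all first-column entries are positive."""
    return all(v > 0 for v in routh_first_column(coeffs))

print(routh_stable([10, 7, 6]))     # 10 s^2 + 7 s + 6 -> True
print(routh_stable([1, 2, 3, 10]))  # fails the first-column test -> False
```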

Example #1: Controller Gain Design using the Routh Array
This first example shows how the Routh Array can be used to determine a range of stable
system gains.
When the closed loop characteristic equation contains the gain symbolically, such as

1.875s^3 + 4.625s^2 + (3.75 + 0.004K)s + (1 + 0.004K) = 0

the Routh array is built with K carried along as a symbol. The stability condition is for
all values in the 1st column to be positive; from this array the range of gains K for which
the loop is stable follows.
Direct Substitution
The direct substitution method can give both the ultimate values of the controller settings
and the period of oscillation at the ultimate settings. It can also be applied to systems with
dead time without having to make any approximations to the exponential function term.
The procedure is to substitute s = jω (where j is the imaginary unit) into the characteristic
equation and set the real and imaginary parts separately to zero. These expressions give
the ultimate value of ω (the frequency of oscillation at this stability limit) and the
associated ultimate controller settings.
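As an illustration on an assumed process (not one from these notes), take Gp = 1/(s+1)^3 under P-control, so the characteristic equation is s^3 + 3s^2 + 3s + 1 + K = 0; substituting s = jω and separating parts gives the ultimate frequency, gain and period in closed form:

```python
import math

# Characteristic equation: s^3 + 3 s^2 + 3 s + 1 + K = 0, with s = j*w:
#   imaginary part: -w**3 + 3*w = 0      ->  w_u**2 = 3
#   real part:      -3*w**2 + 1 + K = 0  ->  K_u = 3*w_u**2 - 1
w_u = math.sqrt(3.0)               # ultimate frequency (rad/time)
K_u = 3.0 * w_u**2 - 1.0           # ultimate gain
P_u = 2.0 * math.pi / w_u          # ultimate period
print(K_u)   # 8.0
```

No approximation of the process was needed; the same substitution works with a dead-time term, at the cost of solving the two equations numerically.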

Example #1 by Direct Substitution
For the previous example the characteristic equation was:
1.875s^3 + 4.625s^2 + 3.75s + 1 + 0.004Ks + 0.004K = 0






Lec 14: Direct synthesis

Direct Synthesis Controller Tuning
Direct synthesis methods are based upon prescribing a desired form for the system's
response and then finding a controller strategy and parameters to give that response. Our
focus can be either on rejecting set point disturbances or on rejecting load disturbances.


For the feedback control loop above the overall transfer functions between the
output and the set point and the disturbance are:

Y/Ysp = GcGp/(1 + GcGp)    and    Y/D = Gd/(1 + GcGp)

Note that for the set point transfer function, we can manipulate it to give:

Gc = (1/Gp) · [(Y/Ysp) / (1 - Y/Ysp)]

One implication is that if we pick a desired form for the response to a set point
change, (Y/Ysp)d, then we have set out the desired form for the controller. For
example, we might think that it would be great to have the output immediately
track the set point change, i.e., (Y/Ysp)d = 1. However, doing this would require an
infinite gain in the controller, since the denominator 1 - (Y/Ysp)d would be zero.

A more practical response would be a first order decay into the final value, or:

(Y/Ysp)d = 1/(τc·s + 1)

If we require this type of response to a step set point change then we get a
controller strategy with the form:

Gc = (1/Gp) · 1/(τc·s)

Notice that this shows there is an integral action to the Direct Synthesis controller
strategy.

First Order Process
Lets look at the Direct Synthesis controller strategy for a first order process:

Then the controller strategy is:

Notice that this is simply PI control with the settings:

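The resulting PI settings are a one-line calculation; a hedged sketch (Kp and τp are the process gain and time constant, τc the desired closed-loop time constant; the function name is mine):

```python
def direct_synthesis_pi(Kp, tau_p, tau_c):
    """PI settings from Direct Synthesis for Gp = Kp/(tau_p*s + 1)
    with desired closed-loop response 1/(tau_c*s + 1)."""
    Kc = tau_p / (Kp * tau_c)   # controller gain
    tau_i = tau_p               # integral time cancels the process lag
    return Kc, tau_i

print(direct_synthesis_pi(2.0, 10.0, 5.0))   # (1.0, 10.0)
```

Note that τc is the single tuning knob: a smaller τc gives a larger gain and a faster (more aggressive) closed loop.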

First Order Process with Dead Time
Applying the Direct Synthesis procedure to a process with dead time will require
some type of approximation to the dead time term to be able to end up with a
controller strategy in a PID form. Let's look at the Direct Synthesis controller
strategy applied to an FOPDT process. For this process the transfer function is:

Gp = Kp·e^(-θs)/(τp·s + 1)

Now it makes more sense to require that the response to a set point change
should also have a time delay that matches the process's time delay:

(Y/Ysp)d = e^(-θs)/(τc·s + 1)

Now the controller strategy implied by Direct Synthesis is:

Gc = (1/Gp) · e^(-θs)/(τc·s + 1 - e^(-θs))

and for our FOPDT process, approximating 1 - e^(-θs) ≈ θs in the denominator:

Gc ≈ (τp·s + 1)/[Kp·(τc + θ)·s]

We still have PI control but now the parameters are:

Kc = τp/[Kp·(τc + θ)]    and    τI = τp

Using a more accurate Padé approximation of the dead time instead puts the result
in the form of a PID controller.
Summary of Controller Settings Using Direct Synthesis for Rejection of Set Point
Disturbances


Lec 16: Open tuning
Ziegler-Nichols Controller Tuning
The Ziegler-Nichols (continuous cycling) method is based upon the closed-loop response
instead of the open-loop response. It is rooted in frequency response methods (which we
will not be able to cover this semester). The ZN method is widely used in industry, since
it is not often possible to do an open loop step test on an operating loop. Note that the
Z-N settings are very aggressive. Probably the best thing you can do for your class is to
sit them down with a simulation and let them tune loops.
The steps are as follows:
Bring system to steady state operation.
Put on P control only. Introduce a set point change and vary the gain until the system
oscillates continuously. This gain is the ultimate gain, Ku, and the period of the
sustained oscillation is the ultimate period, Pu.
Compute the following:

P:   Kc = 0.5·Ku
PI:  Kc = 0.45·Ku,  τI = Pu/1.2
PID: Kc = 0.6·Ku,   τI = Pu/2,  τD = Pu/8
These controller settings were developed to give a one-quarter decay ratio. However,
other settings have been recommended that are closer to critically damped control (so
that oscillations do not propagate downstream). PI and PID controller settings
suggested by Tyreus and Luyben are shown in the following table.

PI:  Kc = Ku/3.2,  τI = 2.2·Pu
PID: Kc = Ku/2.2,  τI = 2.2·Pu,  τD = Pu/6.3

Example

Let's look at the above process as an example of the response to Ziegler-Nichols
tuning parameters. The following table will give the process parameters to be used.


Using a trial-and-error procedure, the ultimate gain (that leads to sustained oscillations)
is found to be about 26.3 (see the following response curve). The period of oscillation is
estimated to be 5.0 (as calculated from peak to peak).
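Using the standard Ziegler-Nichols and Tyreus-Luyben constants, the settings can be reproduced from Ku = 26.3 and Pu = 5.0 (a sketch; small differences from the rounded values quoted in these notes are expected):

```python
def ziegler_nichols(Ku, Pu):
    """Classical ZN continuous-cycling settings."""
    return {"P":   {"Kc": 0.5 * Ku},
            "PI":  {"Kc": 0.45 * Ku, "tau_i": Pu / 1.2},
            "PID": {"Kc": 0.6 * Ku, "tau_i": Pu / 2.0, "tau_d": Pu / 8.0}}

def tyreus_luyben(Ku, Pu):
    """Tyreus-Luyben settings (less aggressive than ZN)."""
    return {"PI":  {"Kc": Ku / 3.2, "tau_i": 2.2 * Pu},
            "PID": {"Kc": Ku / 2.2, "tau_i": 2.2 * Pu, "tau_d": Pu / 6.3}}

zn = ziegler_nichols(26.3, 5.0)
tl = tyreus_luyben(26.3, 5.0)
print(round(zn["PI"]["Kc"], 1), round(zn["PI"]["tau_i"], 1))  # 11.8 4.2
print(round(tl["PI"]["Kc"], 1), round(tl["PI"]["tau_i"], 1))  # 8.2 11.0
```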



This figure shows three different response curves to unit step load disturbances. The first
is for the original PI control (Kc = 12.0, τI = 4.2), the second is for the Tyreus-
Luyben settings (Kc = 8.2, τI = 11.4), and the third is for the rule-of-thumb
adjustment (Kc = 13.15, τI = 5.1). Notice that there is little difference between the
response curves from the Ziegler-Nichols and rule-of-thumb settings. Further notice
that though there is no oscillation with the Tyreus-Luyben settings, it has a much
greater maximum deviation and a much slower response time.


Additional information Controller Tuning
Controller tuning is the process by which a control engineer or technician selects values
of user-adjustable controller parameters (for a PID controller these are the bias, gain,
integral time, and derivative time) so that the closed loop dynamic response behaves as
desired.
Loop tunings are the primary point of contact between an operations/manufacturing
engineer and the plant control system. Controller settings determine the system response:
a poorly tuned controller may be as bad as no controller at all.
Tuning is an exercise in compromise. Controller objectives, specifications, requirements,
and performance always conflict to some degree or another. There are rarely absolute
criteria for selecting tunings and so judgement is required.
As you prepare to tune a loop, you must consider a range of concerns and objectives.
Objectives
All control loops are fundamentally concerned with two objectives: disturbance rejection
and setpoint tracking. In the CPI, disturbance rejection is normally the more important
concern (despite what the examples in control textbooks may suggest).
An important secondary objective is to minimize the "cost" and variability of your
manipulated variables.
Forecasting
Before tuning, you need to have some idea of what to expect. In particular, you want to
have an idea of what sort of inputs are likely -- step changes? ramps? impulses? -- and
how big they are likely to be. A "tight" tuning designed for small inputs may be the exact
opposite of what one would do for a large input.
You also need to understand your system. How much noise do you anticipate? What
constraints do safety, the environment, and equipment protection impose on your plans?
What constraints do nearby units and equipment impose?
Most methods for obtaining initial tuning settings are based on some sort of model. What
type of model are you using? Can you quantify the amount of plant/model mismatch?
Specifications
You can't tune a loop unless you have some way of deciding whether or not it is
"working". Consequently, you'll need to determine how you will measure success. The
specifications used depend on the process, but might include:
1. Speed of response
o Rise time
o Time to first peak
o Settling time
2. Oscillation
o Closed loop damping coefficient (0.4 is a common target)
o Overshoot
o Decay ratio (1/4 is common)
o Frequency or period of oscillation
Loop specifications and performance often interact and conflict. For instance, adding
integral action eliminates offset, but tends to slow response time.
General Performance Measures
Sometimes it is useful to use broad measures of performance that focus less on the
specifics of the loop than on the general variability and deviation from desired
performance. These types of criteria are particularly important in organizations that
attempt to measure "quality" and employ statistical quality control techniques. SQC
techniques are primarily designed to reduce and eliminate variability.
The error in a control loop is usually defined as the deviation from setpoint. There are a
variety of ways of quantifying the cumulative error:
- Integral Error (IE). The cumulative sum of the error: IE = ∫ e(t) dt. Specifying an IE
value will not ensure a particular type of damping, since positive and negative errors
cancel.
- Integral Absolute Error (IAE). The sum of the areas above and below the setpoint:
IAE = ∫ |e(t)| dt. This penalizes all errors equally regardless of direction.
- Integral Squared Error (ISE). ISE = ∫ e(t)^2 dt. Penalizes large errors more than small.
- Integral Time-weighted Absolute Error (ITAE). ITAE = ∫ t·|e(t)| dt. Penalizes
persistent errors.
- Integral Time-weighted Squared Error (ITSE). ITSE = ∫ t·e(t)^2 dt.
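These integrals can be approximated from sampled error data; a minimal rectangle-rule sketch (names are mine):

```python
def error_integrals(t, e):
    """Approximate IE, IAE, ISE and ITAE from sampled error values
    using a left-endpoint rectangle rule."""
    ie = iae = ise = itae = 0.0
    for k in range(len(t) - 1):
        dt = t[k + 1] - t[k]
        ie += e[k] * dt                 # signed: errors can cancel
        iae += abs(e[k]) * dt
        ise += e[k] ** 2 * dt
        itae += t[k] * abs(e[k]) * dt   # late errors weighted more
    return ie, iae, ise, itae

# Equal positive and negative errors: IE cancels to zero, IAE does not.
print(error_integrals([0.0, 1.0, 2.0], [1.0, -1.0, 0.0]))  # (0.0, 2.0, 2.0, 1.0)
```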

Evaluating Response Performance
Engineers and technicians must be able to evaluate the performance of control loops.
Quantitative measures of performance are needed. These measures can also be used to
measure "good" response in order to select the tuning parameters of controllers.
What is "good response"? The answer naturally varies depending on the function and
objectives of the system. In general, though, a good response will display as many of
these characteristics as possible:
1. Small magnitude and duration of deviations
2. Stable
3. Fast
4. Maximum disturbance rejection
5. Minimum delay
6. No offset
7. Limited control action (manipulation cost)
8. "Robust" -- insensitive to process changes and model mismatch
Usually, it is not possible to satisfy all these points equally, but understanding of the
process should help you decide which get priority.
Time Domain Performance Specifications
Most performance specifications are based upon an underdamped response, since most
common processes under feedback control show underdamped behavior.

Speed of Response
Several values can be used to examine the speed of response.
- Rise Time (tr). The time for the process to first cross its new steady state value.
- Time to First Peak (tp). The time for the process to reach its maximum value.
- Settling Time (ts, a.k.a. Response Time). The time required for the process to
become "nearly constant", that is, the time required for the output to reach and
remain inside a fixed error band about the steady state. We will use a band of 5
percent, although sometimes 1 and 3 percent are also used. The settling time may
also be described as the "95% response time", etc.
Acceptable Oscillation
Various measures are available for the extent of oscillation.
- Damping coefficient Often in tuning controllers, a target damping coefficient of
0.4 is used.
- Overshoot. The fraction of the final steady state change by which the first peak
exceeds that change, expressed as a ratio or as a percent: OS = a/b, where a is the
height of the first peak above the new steady state and b is the total steady-state change.
- Decay Ratio. The ratio by which the oscillation is reduced during one complete
cycle, or the ratio of successive peak heights. A "one quarter" decay ratio is a
traditional standard. The ratio can be calculated from DR = c/a, where c is the
height of the second peak above the new steady state.
- Period (T) (or frequency (f)) of oscillation. The time between successive peaks.
The frequency is the reciprocal of the period. (Note that control calculations often
require frequency in terms of radians/second, not in Hz.)

When the system is undamped (ζ = 0), it oscillates without attenuation. Under these
circumstances the oscillation occurs at the natural frequency of the system.
Performance Specifications -- Second Order Step Response
If a particular system is specified, many of the performance specification values can be
determined analytically.
Since the majority of process systems can be approximated by a first order model, and
feedback control (e.g. PI) of a first order process yields a second order closed loop, a
second order underdamped system is probably the most commonly analyzed closed-loop
response. With that in mind, consider the second order underdamped system given by

G(s) = K/(τ^2·s^2 + 2ζτ·s + 1),    0 < ζ < 1

forced by a unit step input to obtain the time response

y(t) = K·[1 - e^(-ζt/τ)·(cos(ωd·t) + (ζ/sqrt(1 - ζ^2))·sin(ωd·t))],  ωd = sqrt(1 - ζ^2)/τ
The formulas which follow are derived for this case only.
Speed of Response
The period of oscillation can be shown to be

T = 2πτ/sqrt(1 - ζ^2)

The frequency is the reciprocal of the period:

f = sqrt(1 - ζ^2)/(2πτ)

The rise time will be about one-fourth of the period.

The settling time for a 1% limit will be approximately

ts ≈ 4.6·τ/ζ

Acceptable Oscillation
The overshoot can be read off the response plot or computed from the damping
coefficient:

OS = exp(-πζ/sqrt(1 - ζ^2))

The decay ratio is the square of the overshoot:

DR = exp(-2πζ/sqrt(1 - ζ^2)) = OS^2
Since both the overshoot and the decay ratio are readily measured from an
output response plot, the formulas can be used with the measured values to determine an
estimated value of the damping coefficient.
These expressions are useful -- but remember! they only apply to a true second order step
response with defined initial conditions.
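The overshoot and decay-ratio expressions are easy to evaluate; for instance, the common target damping coefficient of 0.4 corresponds to roughly 25% overshoot (a sketch):

```python
import math

def overshoot(zeta):
    """Overshoot of a second order underdamped step response."""
    return math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta**2))

def decay_ratio(zeta):
    """Decay ratio = (overshoot)^2 for the same system."""
    return overshoot(zeta) ** 2

print(round(overshoot(0.4), 2))   # 0.25
print(round(decay_ratio(0.5), 3))
```

Inverting either formula numerically recovers an estimate of ζ from a measured response plot.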
Root Locus Methods
We've seen how the stability and response of a system depends on its poles. We've also
seen how a pole-zero plot can help us visualize the system behavior.
A pole-zero plot is simply a plot of the open-loop poles in the complex plane. A plot of
the closed-loop poles can be similarly helpful. Since the closed-loop poles depend on the
controller parameters, we don't get single points; instead, we get curves showing the pole
position as a function of controller gain. Such plots are called Root Locus Plots.
Quickly sketched root locus plots can be made using a little bit of algebra and following a
few basic rules. It usually isn't necessary to have exact numerical values. The plots that
result provide some very useful qualitative understanding of the closed loop response.
An Initial Example
Consider a feedback control loop with a forward path process transfer function GP
(actuator, valve, process, etc.) and a return path transfer function GR (measurement)
given by:

When a proportional only controller is added, the open loop transfer function for this
system becomes

The closed-loop characteristic equation (CLCE) of the system is then:

which corresponds to the general pole/zero form equation

When K=0, this is the open loop transfer function and the poles are easily plotted on the
complex plane to obtain a pole-zero plot. To make a root-locus plot, we just pick several
values of K and replot the poles for each. The result will be a set of curves, each
beginning at an open loop pole.
Look at the plot that results from the example.

- The system has 3 poles, and the root locus plot has 3 branches.
- Each branch begins at an open loop pole (the "X"s).
- The plot is symmetrical about the real axis. Complex roots of real-coefficient
polynomials always appear as conjugate pairs.
- The branches (roots of the CLCE) stay in the LHP as long as a certain critical
value of the gain is not exceeded. That gain is thus a stability limit.
- The branches follow the real axis (roots are real) for gains less than another
trigger value. For gains larger than that, the response will be oscillatory.
Root locus plots are calculated by solving a complex valued polynomial equation -- but it
isn't really necessary to do the math. The beauty of the root locus method is that RL plots
can be sketched by following a set of simple rules that require only a little algebra.
Root Locus Plotting Rules
See handout
Rework the example using the rules.
Other Examples
Next, we'll look at some example systems to see what some typical root locus plots look
like.
First Order Lag
The open loop transfer function for a first order lag is:

It has one real pole.

A first order system is never underdamped (the pole is always on the real axis) and is
always stable.
Second Order Lag
A second order lag has two poles.


The 2nd order system becomes underdamped as the gain is increased, but is always stable
since the poles never cross the imaginary axis. We can calculate the center of gravity and
the breakaway point

but don't need to. Since the system is always stable, these numbers don't tell us much.
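This claim is easy to check numerically. For two first order lags under P-control (illustrative time constants τ1 = 1 and τ2 = 2 assumed by me), the closed-loop poles keep negative real parts no matter how large the gain:

```python
import cmath

def closed_loop_poles(K, tau1=1.0, tau2=2.0):
    """Poles of (tau1*s + 1)(tau2*s + 1) + K = 0, the CLCE of a
    P-controlled second order lag (assumed illustrative time constants)."""
    a, b, c = tau1 * tau2, tau1 + tau2, 1.0 + K
    d = cmath.sqrt(b * b - 4.0 * a * c)
    return ((-b + d) / (2.0 * a), (-b - d) / (2.0 * a))

# Once the poles go complex, their real part is fixed at -b/(2a):
# the branches move parallel to the imaginary axis and never cross it.
print(all(p.real < 0
          for K in (0.1, 1.0, 10.0, 1000.0)
          for p in closed_loop_poles(K)))   # True
```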
Third Order Lag
Our initial example showed a third order system. The center of gravity calculation is
needed to draw the asymptotes. We can easily calculate the breakaway point but probably
don't need it.

Second Order Lag with Zero
Consider the open loop transfer function


Note how one of the branches ends at the zero. This is the rare case where the center of
gravity provides no value -- because the asymptote is 180 degrees.
Concluding Remarks
Time constant and damping coefficient can be shown as circles and radii in the s-plane,
so it is possible to back-calculate a desired damping coefficient.
Root locus can't handle the exponential term produced by dead time, so it is necessary
to use the Padé approximation.
If you want to include integral or derivative control, just lump it into the open loop
transfer function; unfortunately, if you want to look at multiple tunings, you'll need to
make multiple sketches.
Feedforward Control
A feedforward control law is used to compensate for the effect that measured
disturbance variables (dvs) may have on the controlled variable (cv). The basic idea
is to measure a disturbance directly and take control action
to eliminate its impact on the process output. How well the scheme will work depends on
the accuracy of the process and disturbance models used to describe the system
dynamics. Feedforward control actually offers the potential for perfect control. However,
because of Plant Model Mismatch (PMM) and unmeasured / unknown disturbances this
is rarely achieved in practice. Consequently, feedforward control is normally used in
conjunction with feedback control. The feedback controller is used to compensate for any
model errors, unmeasured disturbances etc. and ensure offset free control.

Feedforward Control of a Continuous Stirred Tank Reactor
The reactor system under consideration is shown below. The reactor is fed by a stream
rich in reactant A, of concentration CA(in) and flowrate F(in). Within the system the
following exothermic series reaction takes place: A → B → C. Reactant A is converted to product
B, but at high temperatures B undergoes further reaction and is transformed to undesired
by-product C. The reactor is cooled by means of a heat exchanger.

The objective is to maintain the temperature of the reaction mass at the desired value
when subjected to changes in inlet concentration (Cin) and temperature (Tin).
Thus, the cv is reactor liquid temperature, the mv is the coolant flowrate to the heat
exchanger and the dvs are inlet concentration and inlet stream temperature. The
feedforward control loop may be configured as follows,

Here, 'FF' represents the feedforward control algorithm, 'CT' and 'TT' are symbols used to
describe the composition and the temperature transmitters. So, the disturbances are
measured and passed to a 'FF' device that calculates the necessary coolant flowrate to
compensate for any cv moves when the measured dv deviates from its nominal value.

Gp(s) is a symbol used to represent the process dynamics. This is the relationship between
the coolant flow (the mv) and the temperature (the cv).
This could be a 1st order plus dead-time transfer function. Gd(s) is a symbol used to
describe the mathematical relationship between inlet concentration and reactor
temperature. The feedforward controller calculates the appropriate mv to ensure the cv
remains at SP.
Mathematical Details of the algorithm
Suppose that both the disturbance and the process dynamics are described
by 1st order transfer functions,

Gp(s) = Kp/(τp·s + 1)    and    Gd(s) = Kd/(τd·s + 1)

The feedforward control law must cancel the effect of the disturbance on the
output, i.e. Gff(s)·Gp(s) + Gd(s) = 0. Therefore the transfer function describing
the feedforward control law is,

Gff(s) = -Gd(s)/Gp(s) = -(Kd/Kp)·(τp·s + 1)/(τd·s + 1)

The numerator dynamics is often referred to as the lead element while the denominator
dynamics is termed the lag. Often, the dynamic elements are ignored (e.g. to simplify
implementation) leaving a gain only element,

Kff = -Kd/Kp
This is a very simple algorithm to implement.
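For first order Gp and Gd the feedforward controller reduces to a static gain with a lead-lag; a sketch of the parameter calculation (names and numbers are mine):

```python
def feedforward_leadlag(Kd, Kp, tau_d, tau_p):
    """Parameters of Gff(s) = Kff*(tau_p*s + 1)/(tau_d*s + 1),
    i.e. Gff = -Gd/Gp for first order Gp and Gd."""
    Kff = -Kd / Kp          # static feedforward gain
    lead = tau_p            # numerator (lead) time constant
    lag = tau_d             # denominator (lag) time constant
    return Kff, lead, lag

print(feedforward_leadlag(Kd=1.5, Kp=-3.0, tau_d=4.0, tau_p=2.0))  # (0.5, 2.0, 4.0)
```

Dropping the lead-lag (keeping only Kff) gives the gain-only implementation described above.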

Feedforward control using steady-state models from chemical
engineering fundamentals
While notes in control engineering texts tend to concentrate on transfer function process
descriptions, fundamental chemical engineering concepts should not be forgotten
entirely. Consider the following binary distillation column,

where F, D and B are the feed, distillate and bottoms flows (kmol/min) and xB and xD
are the compositions of the more volatile component (mvc) in the bottoms and distillate
streams respectively (mol %). The objective is to design a feedforward control law to
maintain xD at the desired value when the column is subjected to changes in feed flow (F)
and feed composition (xf). The chosen manipulated variable is the distillate flowrate (D).
To complicate matters, measurements of xD and xB are not available.
To design the feedforward control law we need the mathematical relationships between
xD and D, F and xf. As discussed in earlier lectures, this information could be obtained
from plant experimentation. However, being chemical engineers we know that the
relationship between these variables is also available through the fundamental mass
balance relationships,
F = D + B
Fxf = DxD + BxB

Eliminating B from the component balance yields the desired relationship
between xD and D, F and xf. This is given by,
D = F(xf - xB) / (xD - xB)
xD and xB are not measured, but may be replaced by their desired values
(xB,SP and xD,SP) to give,
D = F(xf - xB,SP) / (xD,SP - xB,SP)

This is a feedforward control law: it calculates the mv based upon dv and SP
information. It is a steady-state (gain only) algorithm.
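The law itself is one line of arithmetic; a sketch with assumed illustrative numbers:

```python
def distillate_flow(F, x_f, xD_sp, xB_sp):
    """Steady-state feedforward law: D = F*(x_f - xB_sp)/(xD_sp - xB_sp)."""
    return F * (x_f - xB_sp) / (xD_sp - xB_sp)

# 100 kmol/min of 50 mol% feed, targets xD = 0.95 and xB = 0.05:
print(distillate_flow(100.0, 0.50, 0.95, 0.05))
```

Note the law responds to both dvs: a richer or larger feed immediately raises the computed distillate draw.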

Ratio Control
The objective of a ratio control scheme is to keep the ratio of two variables at a specified
value. Thus, the ratio (R) of two variables (A and B),
R = A / B
is controlled rather than controlling the individual variables. Typical ratio control
schemes include:
Maintaining the reflux ratio for a distillation column.
Maintaining the stoichiometric ratio of reactants to a reactor.
Maintaining air/fuel ratio to a furnace.

Implementation: method I

The flowrate of the two streams is measured and their ratio calculated using a 'divider'
(just a piece of extra electronics). The output of the divider is sent to the ratio controller
(which is actually a standard PI controller). The controller compares the actual ratio with
the desired ratio and computes any necessary change in the manipulated variable.

Implementation: method II

Here one stream is under standard feedback control. The flow of the second stream is
measured and sent to a 'multiplier' (again just a piece of extra electronics) which
multiplies the signal by the desired ratio yielding the setpoint for the feedback control
law.
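Both implementations reduce to a single arithmetic operation on the flow measurements; a sketch (names are mine):

```python
def measured_ratio(flow_a, flow_b):
    """Method I: the 'divider' computes the actual ratio, which the
    ratio (PI) controller then compares with the desired ratio."""
    return flow_a / flow_b

def controlled_stream_setpoint(flow_wild, desired_ratio):
    """Method II: the 'multiplier' scales the measured wild-stream flow
    by the desired ratio to give the setpoint for the controlled stream."""
    return desired_ratio * flow_wild

print(measured_ratio(30.0, 10.0))              # 3.0
print(controlled_stream_setpoint(10.0, 3.0))   # 30.0
```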

Final Remarks
Conventional control schemes can be used 90%+ of the time within the process
industries. The main barrier to success lies in the understanding and correct
implementation of a particular scheme. The aim of these notes was to provide a basic
review of various control schemes (the methods were covered in much greater detail in
Process Control I). On its own, this knowledge is not particularly useful: anyone can
regurgitate text-book information. It is through process knowledge and understanding
that appropriate control schemes are chosen, and it is the choice of the most appropriate
scheme that is crucial to successful implementation.
