(c) Perry Li
ẋ = Ax + Bu,   x ∈ R^n, u ∈ R^m
y = Cx + Du

Poles of the transfer function are eigenvalues of A. Pole locations affect the system response:

- stability
- convergence rate
- command following
- disturbance rejection
- noise immunity, ...
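The first statement can be spot-checked numerically; a minimal sketch with an assumed example system (not one from the notes):

```python
# The poles of G(s) = C (sI - A)^{-1} B + D are the eigenvalues of A
# (absent pole-zero cancellation). Example matrices are assumed.
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])        # char. poly s^2 + 3s + 2 = (s+1)(s+2)
poles = np.sort(np.linalg.eigvals(A).real)
print(poles)                        # [-2. -1.]
```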
For the controller canonical form (n = 3 example),

Δ(s) = det(sI - A) = det [  s   -1    0
                            0    s   -1
                           a0   a1  s + a2 ]
     = s^3 + a2 s^2 + a1 s + a0

Eigenvalues of A, λ1, . . . , λn, are the roots of Δ(s) = 0. In general,

Δ(s) = s^n + a_{n-1} s^{n-1} + . . . + a1 s + a0

Therefore, if we can arbitrarily choose a0, . . . , a_{n-1}, we can choose the eigenvalues of A.
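A minimal numeric sketch of this idea (the target poles here are chosen arbitrarily for illustration): picking the coefficients a0, a1, a2 of the companion matrix fixes its eigenvalues.

```python
import numpy as np

# Desired poles -1, -2, -3  ->  (s+1)(s+2)(s+3) = s^3 + 6 s^2 + 11 s + 6
a0, a1, a2 = 6.0, 11.0, 6.0
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-a0, -a1, -a2]])     # companion form from the determinant above
eigs = np.sort(np.linalg.eigvals(A).real)
print(eigs)                          # [-3. -2. -1.]
```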
Δ(s) = ∏_{i=1}^n (s - λi) = s^n + a_{n-1} s^{n-1} + . . . + a1 s + a0
Some properties of the characteristic polynomial useful for its proper design:

- If the λi occur in conjugate pairs (i.e. for complex poles, λ = σ ± jω), then a0, a1, . . . , a_{n-1} are real numbers; and vice versa.
- Sum of eigenvalues: a_{n-1} = -Σ_{i=1}^n λi.
- Product of eigenvalues: a0 = (-1)^n ∏_{i=1}^n λi.
- If λ1, . . . , λn all have negative real parts, then a_i > 0 for i = 0, . . . , n-1.
- If any of the polynomial coefficients is non-positive (negative or zero), then one or more of the roots have nonnegative real parts.
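These properties can be checked numerically; a sketch with an assumed eigenvalue set containing one conjugate pair:

```python
import numpy as np

lam = np.array([-1.0, -2.0 + 3.0j, -2.0 - 3.0j])  # includes a conjugate pair
c = np.poly(lam)                # [1, a_{n-1}, ..., a1, a0] of prod(s - lam_i)
assert np.allclose(np.imag(c), 0)                 # conjugate pairs -> real a_i
c = np.real(c)
n = len(lam)
assert np.isclose(c[1], -np.sum(lam).real)        # a_{n-1} = -sum(lam_i)
assert np.isclose(c[-1], ((-1)**n * np.prod(lam)).real)  # a0 = (-1)^n prod
assert np.all(c > 0)            # all Re(lam_i) < 0  ->  all coefficients > 0
print("all properties hold")
```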
M.E. University of Minnesota
With state feedback u = -[k0 k1 k2] x in controller canonical form, the closed loop matrix becomes

Ac = [     0            1            0
           0            0            1
      -(a0 + k0)   -(a1 + k1)   -(a2 + k2) ]

Thus, to place the poles at λ1, . . . , λn with desired characteristic polynomial coefficients ā0, . . . , ā_{n-1}, choose

ā0 = a0 + k0          =>  k0 = ā0 - a0
ā1 = a1 + k1          =>  k1 = ā1 - a1
  ...
ā_{n-1} = a_{n-1} + k_{n-1}  =>  k_{n-1} = ā_{n-1} - a_{n-1}
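In canonical coordinates the design is just coefficient matching; a sketch (the open-loop coefficients and target poles below are assumed, not from the notes):

```python
import numpy as np

a = np.array([6.0, 11.0, 6.0])                 # open-loop a0, a1, a2 (assumed)
abar = np.poly([-2.0, -3.0, -4.0])[1:][::-1]   # desired abar0, abar1, abar2
k = abar - a                                   # k_i = abar_i - a_i
Ac = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [-(a[0] + k[0]), -(a[1] + k[1]), -(a[2] + k[2])]])
closed = np.sort(np.linalg.eigvals(Ac).real)
print(closed)                                  # [-4. -3. -2.]
```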
z = T^{-1} x;   Az = T^{-1} A T,   Bz = T^{-1} B

with

Az = [   0     1     0   . . .    0
         0     0     1   . . .    0
        ...                      ...
         0     0     0   . . .    1
       -a0   -a1   . . .  -a_{n-2}  -a_{n-1} ]

Bz = [ 0, . . . , 0, 1 ]^T
Proof:
Only if: If the system is not controllable, then
using Kalman decomposition, there are modes that
are not affected by control. Thus, eigenvalues
associated with those modes cannot be changed.
This means that we cannot transform the system into controller canonical form, since otherwise we could arbitrarily place the eigenvalues.
If: Let us construct T. Take n = 3 as example, and let T be:

T = [v1 | v2 | v3]

such that

A = T [  0    1    0          B = T [ 0
         0    0    1                  0
       -a0  -a1  -a2 ] T^{-1},        1 ]

The equation for B says that v3 = B. The equation for A can be written as

A T = T [  0    1    0
           0    0    1
         -a0  -a1  -a2 ]                      (11)
T = [v1 | v2 | v3] = [B | AB | A^2 B] [ a1  a2  1
                                        a2   1  0
                                         1   0  0 ]
The feedback gain in the original coordinates is then K = Kz T^{-1}.
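Putting the pieces together for n = 3 (the system below is an assumed example that mirrors the recipe above; it is not the notes' own example):

```python
import numpy as np

A = np.array([[0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0],
              [-2.0, 1.0, 0.0]])
B = np.array([[0.0], [0.0], [1.0]])

a = np.poly(A)                            # [1, a2, a1, a0] of det(sI - A)
W = np.array([[a[2], a[1], 1.0],          # [[a1, a2, 1],
              [a[1], 1.0, 0.0],           #  [a2,  1, 0],
              [1.0,  0.0, 0.0]])          #  [ 1,  0, 0]]
T = np.hstack([B, A @ B, A @ A @ B]) @ W  # T = [B | AB | A^2 B] W

abar = np.poly([-1.0, -2.0, -3.0])        # desired characteristic polynomial
Kz = (abar[1:] - a[1:])[::-1]             # [k0, k1, k2] in z-coordinates
K = Kz.reshape(1, 3) @ np.linalg.inv(T)   # K = Kz T^{-1}
placed = np.sort(np.linalg.eigvals(A - B @ K).real)
print(placed)                             # approx [-3. -2. -1.]
```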
ẋ = (A + BF)x + Bv
A = [ 2  1  0          B = [ 1  3
      0  2  0                1  0
      0  0  2 ] ;            0  1 ]
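A quick rank test of the controllability matrix for this example:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])
B = np.array([[1.0, 3.0],
              [1.0, 0.0],
              [0.0, 1.0]])
ctrb = np.hstack([B, A @ B, A @ A @ B])   # [B | AB | A^2 B]
rank = np.linalg.matrix_rank(ctrb)
print(rank)                               # 3 -> controllable
```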
(Figure: controllability to zero over the interval [t0, t1].)
The least norm control on [t0, t1], in terms of the controllability Gramian

H(t0, t1) = ∫_{t0}^{t1} Φ(t0, τ) B(τ) B^T(τ) Φ^T(t0, τ) dτ

is:

u(τ) = -B^T(τ) Φ^T(t0, τ) H(t0, t1)^{-1} x(t0),

and when evaluated at τ = t0,

u(t0) = -B^T(t0) H(t0, t1)^{-1} x(t0).

Thus, the proposed control law is the least norm control evaluated at τ = t0. By relating this to a moving horizon [t0, t1] = [t, t + T], where t
Observer Design

Plant:

ẋ = Ax + Bu
y = Cx
d/dt x̂ = A x̂ + B u + L(y - C x̂)        (12)

This looks like the open loop observer except for the last term. Notice that C x̂ - y is the output prediction error, also known as the innovation. L is the observer gain.

Let us analyse the error dynamics, e = x̂ - x. Subtracting the plant dynamics from the observer dynamics, and using the fact that y = Cx,

ė = Ae - L(C x̂ - C x) = (A - LC) e.

If A - LC is stable (has all its eigenvalues in the LHP), then e → 0 exponentially.
The estimated state dynamics are:

d/dt [ x̂ ]   [ A    -LC    ] [ x̂ ]   [ B ]
     [ e ] = [ 0   A - LC  ] [ e ] + [ 0 ] u

Notice that the observer error e is not controllable from u.
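The convergence claim (e → 0 when A - LC is stable) can be illustrated in simulation. Everything below is an assumed example, integrated with forward Euler:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[3.0], [2.0]])              # eig(A - LC) = -3 +/- 2j (stable)
assert np.all(np.linalg.eigvals(A - L @ C).real < 0)

dt = 1e-3
x = np.array([[1.0], [0.0]])              # true state
xhat = np.zeros((2, 1))                   # estimate starts at zero
for k in range(20000):                    # 20 seconds of forward Euler
    u = np.sin(0.5 * k * dt)              # arbitrary known input
    y = C @ x
    x = x + dt * (A @ x + B * u)
    xhat = xhat + dt * (A @ xhat + B * u + L @ (y - C @ xhat))
err = float(np.linalg.norm(xhat - x))
print(err)                                # ~0: the estimate has converged
```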
u = v - L^T z.
rank [ λI - A^T   C^T ] = rank [ λI - A
                                   C    ]
Properties of observer

- The observer is unbiased. The transfer function from u to x̂ is the same as the transfer function from u to x.

- The observer error e = x̂ - x is uncontrollable from the control u. This is because

  ė = (A - LC) e

  no matter what the control u is.

- Let ν(t) := y(t) - C x̂(t) be the innovation. Since ν(t) = -Ce(t), the transfer function from u(t) to ν(t) is 0.

- This has the significance that feedback control of the innovation of the form

  U(s) = -K X̂(s) - Q(s) ν(s)

  where A - BK is stable, and Q(s) is any stable controller (i.e. Q(s) itself does not have any unstable poles), is necessarily stable. In fact, any stabilizing controller is of this form! (see Goodwin et al. Section 18.6 for proof of necessity)
The observer estimate can be written as

X̂(s) = T1(s) U(s) + T2(s) Y(s)

where

T1(s) := (sI - A + LC)^{-1} B
T2(s) := (sI - A + LC)^{-1} L

Both T1(s) and T2(s) have the same denominator, which is the characteristic polynomial of the observer dynamics, Δ_obs(s) = E(s) = det(sI - A + LC).

With Y(s) given by Go(s)U(s), where Go(s) = C(sI - A)^{-1}B is the open loop plant model, the estimate becomes

X̂(s) = [T1(s) + T2(s) Go(s)] U(s)
      = (sI - A)^{-1} B U(s)

i.e. the same as the open loop transfer function from u(t) to x(t). In this sense, the observer is unbiased.
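The identity T1(s) + T2(s)Go(s) = (sI - A)^{-1}B can be spot-checked numerically at an arbitrary complex frequency (the matrices here are assumed):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[3.0], [2.0]])

s = 0.7 + 1.3j                              # arbitrary test frequency
I = np.eye(2)
Eobs = np.linalg.inv(s * I - A + L @ C)     # (sI - A + LC)^{-1}
T1, T2 = Eobs @ B, Eobs @ L
Go = C @ np.linalg.inv(s * I - A) @ B       # open loop plant
lhs = T1 + T2 @ Go
rhs = np.linalg.inv(s * I - A) @ B
print(np.allclose(lhs, rhs))                # True: the observer is unbiased
```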
d/dt [ x̂ ]   [ A    -LC    ] [ x̂ ]   [ B ]
     [ e ] = [ 0   A - LC  ] [ e ] + [ 0 ] u
With the control law u = -K x̂ + v, the closed loop dynamics in the (x, x̂) coordinates are

d/dt [ x ]   [ A       -BK         ] [ x ]   [ B ]
     [ x̂ ] = [ LC   A - BK - LC   ] [ x̂ ] + [ B ] v

and in the (x, e) coordinates,

d/dt [ x ]   [ A - BK    -BK    ] [ x ]   [ B ]
     [ e ] = [   0      A - LC  ] [ e ] + [ 0 ] v
Remarks:

Separation principle - the set of eigenvalues of the complete system is the union of the eigenvalues of the state feedback system and the eigenvalues of the observer system. Hence, state feedback and observer can in principle be designed separately.

{eigenvalues} = eig(A - BK) ∪ eig(A - LC)
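A numeric sketch of the separation principle, with assumed gains:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[18.0, 6.0]])            # eig(A - BK) = {-4, -5}
L = np.array([[3.0], [2.0]])           # eig(A - LC) = -3 +/- 2j

Acl = np.block([[A - B @ K, -B @ K],
                [np.zeros((2, 2)), A - L @ C]])   # (x, e) coordinates
union = np.concatenate([np.linalg.eigvals(A - B @ K),
                        np.linalg.eigvals(A - L @ C)])
# every eigenvalue of the complete system appears in the union
for lam in np.linalg.eigvals(Acl):
    assert np.min(np.abs(union - lam)) < 1e-8
print("separation principle verified")
```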
Δcl(s) = F(s) E(s)        (13)
where

L(s)/E(s) = 1 + K T1(s) = det(sI - A + LC + BK) / E(s)

P(s)/E(s) = K T2(s) = K Adj(sI - A + LC) L / E(s)

P(s)/L(s) = K (sI - A + LC + BK)^{-1} L

[See Goodwin pp. 512 for proof]
Controller can be written in a two degree of freedom controller form:

U(s) = (E(s)/L(s)) V(s) - (P(s)/L(s)) Y(s)

Or in a one degree of freedom controller form:

U(s) = (P(s)/L(s)) (R(s) - Y(s))

where V(s) = (P(s)/E(s)) R(s).
Innovation feedback

The innovation is the output prediction error:

ν := y - C x̂ = -C e

Therefore,
With the plant written as Go(s) = Bo(s)/Ao(s), feeding the innovation back through Qu(s) gives the control law

(L(s)/E(s)) U(s) = -(P(s)/E(s)) Y(s) - Qu(s) [ (Ao(s)/E(s)) Y(s) - (Bo(s)/E(s)) U(s) ]

The resulting sensitivity and complementary sensitivity functions are

So(s) = Ao(s)L(s)/(E(s)F(s)) - Qu(s) Bo(s)Ao(s)/(E(s)F(s))        (14)

To(s) = Bo(s)P(s)/(E(s)F(s)) + Qu(s) Bo(s)Ao(s)/(E(s)F(s))        (15)
so that

So(s) = 1 - Qu(s) Bo(s)/E(s)
To(s) = Qu(s) Bo(s)/E(s)

In this case, it is common to use Q(s) := Qu(s) Ao(s)/E(s) to get the formulae:

So(s) = 1 - Q(s) Go(s)        (16)
To(s) = Q(s) Go(s)            (17)
The disturbance d generated by the exosystem ẋd = Ad xd, d = Cd xd, has Laplace transform

D(s) = Cd (sI - Ad)^{-1} xd(0) = Cd Adj(sI - Ad) xd(0) / det(sI - Ad)
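For instance, a sinusoidal disturbance corresponds to det(sI - Ad) = s^2 + ω^2; a simulation sketch (Ad, Cd, and the initial condition below are assumed for illustration):

```python
import numpy as np

w = 2.0
Ad = np.array([[0.0, w], [-w, 0.0]])    # det(sI - Ad) = s^2 + w^2
Cd = np.array([[1.0, 0.0]])
xd = np.array([[0.0], [1.0]])           # xd(0) chosen so that d(t) = sin(w t)

dt, err = 1e-4, 0.0
for k in range(15000):                  # 1.5 seconds of forward Euler
    d = (Cd @ xd).item()
    err = max(err, abs(d - np.sin(w * k * dt)))
    xd = xd + dt * (Ad @ xd)
print(err)                              # small: d(t) tracks sin(w t)
```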
The observer for the augmented system (plant + disturbance model) is

d/dt [ x̂  ]   [ A   B Cd ] [ x̂  ]   [ B ]       [ L1 ]
     [ x̂d ] = [ 0    Ad  ] [ x̂d ] + [ 0 ] u  +  [ L2 ] (y - C x̂)

and the control law is

u = -Cd x̂d + v - K x̂ = v - [K  Cd] [ x̂
                                     x̂d ]

where L = [L1^T, L2^T]^T is the observer gain.
controller can be simplified to be:
d x
dt xd
The
A BK L1C 0
d
L2C
Ad x
B L1 v
0
L2 y
x
+v
u = K Cd
x
d
=
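A simulation sketch of this controller for a constant input disturbance (Ad = 0, Cd = 1; the plant and gains are assumed, with the observer gain placing the augmented-estimator poles at {-1, -2, -3}):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[4.0, 2.0]])        # eig(A - BK) = {-2, -3}
Cd = np.array([[1.0]])            # Ad = 0: constant disturbance model
L1 = np.array([[3.0], [0.0]])     # augmented observer gain: places the
L2 = np.array([[6.0]])            # estimator poles at {-1, -2, -3}

d = 0.5                           # unknown constant input disturbance
dt = 1e-3
x = np.array([[1.0], [0.0]])
xhat, xdhat = np.zeros((2, 1)), np.zeros((1, 1))
for _ in range(30000):            # 30 seconds, forward Euler
    y = C @ x
    u = (-K @ xhat - Cd @ xdhat).item()          # v = 0
    innov = y - C @ xhat
    x = x + dt * (A @ x + B * (u + d))           # plant sees u + d
    xhat = xhat + dt * (A @ xhat + B @ Cd @ xdhat + B * u + L1 * innov)
    xdhat = xdhat + dt * (L2 * innov)            # Ad = 0
print((C @ x).item(), xdhat.item())  # y -> 0, disturbance estimate -> 0.5
```

The output settles because the internal model of the disturbance lets x̂d converge to d, which the control then cancels.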
u = -[Ko  Ka] [ x̂
                xa ]

where x̂ is the observer estimate of the original plant itself:

d/dt x̂ = A x̂ + B u + L(y - C x̂).

Notice that xa need not be estimated since it is generated by the controller itself!
The corresponding compensator transfer function from y to u is

U(s) = -[Ko  Ka] [ sI - A + BKo + LC    BKa     ]^{-1} [ L    ] Y(s)
                 [ 0                    sI - Ad ]       [ Cd^T ]