
Linear System Theory Fall 2011 final exam

Questions with solutions

Exercise 1 [25%]
Consider the following transfer function:
G(s) = \frac{1}{s^2 + 4s + 3}

(a) [7%] Show that

\dot{x}(t) = \begin{bmatrix} 0 & 1 \\ -3 & -4 \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t)

y(t) = \begin{bmatrix} 1 & 0 \end{bmatrix} x(t)
is a state space realization of G(s). What is the dimension of this realization? Is the realization
in controllable canonical form?
(b) [6%] Design an output feedback controller of the form
u(t) = ky(t) + r(t)
(where r(t) is an auxiliary input) and select the gain k so that the poles of the closed loop
system are both at s = -2.
(c) [5%] Assume that you would like to improve your controller so that the poles of the closed loop
system are both at s = -3. Can you achieve this by tuning the output feedback controller of
Part (b)? If not, how would you modify your design? (You do not need to provide a complete
design.)
(d) [7%] Using first principles modeling, your friend from EPFL derived the following dynamics
for the system in question:

\dot{x}(t) = \begin{bmatrix} -4 & 1 & -3 \\ 0 & -1 & 0 \\ 1 & 1 & 0 \end{bmatrix} x(t) + \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} u(t)

y(t) = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} x(t)
Is this also a realization of the transfer function G(s)? Why is it different from the realization
in Part (a)? If your friend is right, what would this imply about the performance of your
controller in Parts (b) and (c)?
Hint: It is not necessary to completely invert a matrix in order to compute a transfer function.
Recall that if A = [A_ij] is a 3 × 3 matrix, then (A^{-1})_{31} = (A_{21}A_{32} - A_{22}A_{31}) / \det(A).
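As a quick numerical cross-check outside the exam itself (a sketch assuming numpy and scipy are available, and taking the matrices exactly as reconstructed above), one can let scipy compute the transfer function of each model; both should reduce to 1/(s^2 + 4s + 3), with the 3-state model showing an extra hidden (cancelled) mode.

```python
# Sketch only: compare the two candidate realizations numerically.
import numpy as np
from scipy import signal

# Part (a): 2-state realization
A1 = np.array([[0.0, 1.0], [-3.0, -4.0]])
B1 = np.array([[0.0], [1.0]])
C1 = np.array([[1.0, 0.0]])

# Part (d): 3-state model (matrices as reconstructed above)
A2 = np.array([[-4.0, 1.0, -3.0], [0.0, -1.0, 0.0], [1.0, 1.0, 0.0]])
B2 = np.array([[1.0], [0.0], [0.0]])
C2 = np.array([[0.0, 0.0, 1.0]])

for name, (A, B, C) in [("2-state", (A1, B1, C1)), ("3-state", (A2, B2, C2))]:
    num, den = signal.ss2tf(A, B, C, np.zeros((1, 1)))
    # A common factor between num and den corresponds to a hidden
    # (uncontrollable or unobservable) mode of the realization.
    print(name, "num:", np.round(num[0], 6), "den:", np.round(den, 6))
    print(name, "eigenvalues of A:", np.round(np.sort(np.linalg.eigvals(A)), 6))
```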
Exercise 2 [25%]
Let (H, ℂ, ⟨·, ·⟩) be a Hilbert space, and let A : H → H be a linear map. Recall that an eigenvalue
of A is a scalar λ ∈ ℂ such that A(v) = λv for some vector v ∈ H, v ≠ 0, which is called an
eigenvector with respect to λ.
(a) [5%] Suppose that A is invertible and A⁻¹ is its inverse (you can take for granted that A⁻¹ is
also linear). Show that, if λ is an eigenvalue of A, then λ ≠ 0 and λ⁻¹ is an eigenvalue of A⁻¹.
(b) [5%] Define p(A)(v) = p₀I(v) + p₁A(v) + p₂A²(v) + ··· + pₙAⁿ(v), where p₀, p₁, …, pₙ ∈ ℂ, I
is the identity map on H, and Aⁿ(v) = A ∘ A ∘ ··· ∘ A(v), n times. Show that, if λ is an eigenvalue
of A, then p(λ) = p₀ + p₁λ + ··· + pₙλⁿ is an eigenvalue of p(A). You may assume without
proof that p(A) : H → H is linear.
(c) [5%] Suppose that v is an eigenvector of A with respect to the eigenvalue λ, and w is an
eigenvector with respect to the eigenvalue μ. Show that, if λ ≠ μ, then v and w are linearly
independent.
(d) [5%] Suppose that A is self-adjoint. Show that the eigenvalues of A are real numbers.
(e) [5%] Suppose that A is self-adjoint, v is an eigenvector of A with eigenvalue λ, and w is an
eigenvector with eigenvalue μ. Show that, if λ ≠ μ, then v and w are orthogonal.
Solution.

(1) We have A(v) = λv for some v ≠ 0. Therefore

v = (A⁻¹ ∘ A)(v) = A⁻¹(A(v)) = A⁻¹(λv) = λA⁻¹(v)

The number λ cannot be zero, otherwise we would have v = 0, which is a contradiction.
Dividing by λ we obtain λ⁻¹v = A⁻¹(v), which proves the claim.
(2) We have A(v) = λv for some v ≠ 0. Therefore

p(A)(v) = (p₀I + p₁A + p₂A² + ··· + pₙAⁿ)(v)
        = p₀I(v) + p₁A(v) + p₂A(A(v)) + ··· + pₙA(A(··· A(v) ···))
        = p₀v + p₁λv + p₂λA(v) + ··· + pₙA(A(··· λv ···))
        = p₀v + p₁λv + p₂λ²v + ··· + pₙλⁿv
        = (p₀ + p₁λ + p₂λ² + ··· + pₙλⁿ)v
        = p(λ)v
which proves the claim.
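A finite-dimensional numerical illustration of parts (1) and (2), a sketch assuming numpy is available (an illustration on ℂ⁴ with an arbitrary polynomial, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))  # invertible w.p. 1

def matches(vals1, vals2, tol=1e-8):
    """Each value in vals1 is numerically close to some value in vals2."""
    return all(np.min(np.abs(vals2 - v)) < tol for v in vals1)

eig_A = np.linalg.eigvals(A)
eig_Ainv = np.linalg.eigvals(np.linalg.inv(A))
print(matches(1.0 / eig_A, eig_Ainv))          # True: eig(A^{-1}) = 1 / eig(A)

p = [2.0, -1.0, 0.5]                           # arbitrary choice p(x) = 2 - x + 0.5 x^2
pA = p[0] * np.eye(4) + p[1] * A + p[2] * (A @ A)
print(matches(np.linalg.eigvals(pA),
              p[0] + p[1] * eig_A + p[2] * eig_A**2))   # True: eig(p(A)) = p(eig(A))
```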
(3) Suppose that av + bw = 0 for some coefficients a, b. Then

0 = A(0) = A(av + bw) = aλv + bμw
0 = λ(av + bw) = aλv + bλw

Subtracting the first equation from the second,

b(μ − λ)w = 0

Since w ≠ 0 and λ ≠ μ, we have b = 0, hence av = 0, and since v ≠ 0 we also have a = 0;
hence, v and w are linearly independent.
(4) Suppose that A(v) = λv for some v ≠ 0. We have

λ̄⟨v, v⟩ = ⟨v, λv⟩ = ⟨v, A(v)⟩ = ⟨A*(v), v⟩ = ⟨A(v), v⟩ = ⟨λv, v⟩ = λ⟨v, v⟩

Since v ≠ 0, it is also ⟨v, v⟩ ≠ 0. Dividing by ⟨v, v⟩, we obtain λ̄ = λ, hence λ ∈ ℝ.
(5) We have

λ⟨w, v⟩ = ⟨w, λv⟩ = ⟨w, A(v)⟩ = ⟨A*(w), v⟩ = ⟨A(w), v⟩ = ⟨μw, v⟩ = μ⟨w, v⟩

where the first equality uses that λ is real, by part (4). Hence (λ − μ)⟨w, v⟩ = 0; since (λ − μ) ≠ 0, it must be that ⟨w, v⟩ = 0.
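Parts (4) and (5) can likewise be illustrated numerically with a random Hermitian matrix (again a sketch assuming numpy, not a proof; a random Hermitian matrix has distinct eigenvalues with probability one):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2                  # self-adjoint (Hermitian) by construction

eigvals, V = np.linalg.eig(A)             # generic solver, no Hermitian assumption used
print(np.allclose(eigvals.imag, 0.0, atol=1e-10))          # True: eigenvalues are real
# Columns of V are unit eigenvectors; for distinct eigenvalues of a Hermitian
# matrix they are pairwise orthogonal, so the Gram matrix is the identity.
print(np.allclose(V.conj().T @ V, np.eye(4), atol=1e-8))   # True
```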
Exercise 3 [25%]
Consider the following non-linear time varying system:
\dot{x}(t) = \begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} = t \begin{bmatrix} -2 & 2 \\ -2 & -2 \end{bmatrix} \begin{bmatrix} \sin(x_1(t)) \\ \sin(x_2(t)) \end{bmatrix}    (1)

You are asked to analyze the stability properties at the origin. Having just completed the Linear
System Theory course you decide to linearize the system and look at the stability properties of the
linearization.
(a) [5%] Show that a first order linear approximation of system (1) around the origin is given by:

\dot{x}(t) = t \begin{bmatrix} -2 & 2 \\ -2 & -2 \end{bmatrix} x(t)
(b) [8%] Show that the state transition matrix of the linearized system is:
\Phi(t, 0) = e^{-t^2} \begin{bmatrix} \cos(t^2) & \sin(t^2) \\ -\sin(t^2) & \cos(t^2) \end{bmatrix}
(c) [5%] Using your answer in part (b) determine the stability of the origin for the linearized
system.

(d) [7%] Consider now the time invariant systems

\dot{x}(t) = \begin{bmatrix} 2 & -2 \\ 2 & 2 \end{bmatrix} x(t), \qquad \dot{x}(t) = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} x(t), \qquad \dot{x}(t) = \begin{bmatrix} -2 & 2 \\ -2 & -2 \end{bmatrix} x(t)

obtained by freezing the time varying dynamics at the instances t = -1, t = 0, and t = 1.
Determine the stability properties of these three systems.

Solution - Stability (not detailed)


(a) This is probably easier to see if we treat each ODE separately:

\dot{x}_1(t) = f_1(x(t)) = -2t \sin(x_1(t)) + 2t \sin(x_2(t))
\dot{x}_2(t) = f_2(x(t)) = -2t \sin(x_1(t)) - 2t \sin(x_2(t))

The first order linearization of the two-dimensional system about the origin equilibrium x_0 is given by:

\delta\dot{x}_1(t) = f_1(x_0) + \frac{\partial f_1}{\partial x_1}\Big|_{x = x_0} \delta x_1 + \frac{\partial f_1}{\partial x_2}\Big|_{x = x_0} \delta x_2

\delta\dot{x}_2(t) = f_2(x_0) + \frac{\partial f_2}{\partial x_1}\Big|_{x = x_0} \delta x_1 + \frac{\partial f_2}{\partial x_2}\Big|_{x = x_0} \delta x_2

Here f(x_0) = 0 and the partial derivatives evaluate at x_0 = 0 to \partial f_1/\partial x_1 = -2t, \partial f_1/\partial x_2 = 2t, \partial f_2/\partial x_1 = -2t, \partial f_2/\partial x_2 = -2t, which gives the claimed linearization.
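For completeness, the Jacobian can be cross-checked symbolically (a sketch outside the official solution, assuming sympy is available and taking the right-hand side of (1) as reconstructed above):

```python
import sympy as sp

t, x1, x2 = sp.symbols("t x1 x2")
f = sp.Matrix([-2 * t * sp.sin(x1) + 2 * t * sp.sin(x2),
               -2 * t * sp.sin(x1) - 2 * t * sp.sin(x2)])
J = f.jacobian(sp.Matrix([x1, x2]))
# Evaluated at the origin this is t * [[-2, 2], [-2, -2]]:
print(J.subs({x1: 0, x2: 0}))   # Matrix([[-2*t, 2*t], [-2*t, -2*t]])
```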

(b) To answer this question one has to invoke Theorem 4.2 of the course notes. Then it suffices to
show that:

\frac{\partial}{\partial t}\Phi(t, 0) = A(t)\Phi(t, 0), \qquad \Phi(0, 0) = I

Recalling that Φ(t, 0) uniquely characterizes the system's zero input response, we have

x(t) = \Phi(t, 0) x_0

for an arbitrary initial condition x_0. Using the 2-norm (all norms on ℝⁿ are equivalent), we have to show
that the evolution is bounded and, even better, globally exponentially stable:

\|x(t)\| = \|\Phi(t, 0) x_0\|
= \left\| e^{-t^2} \begin{bmatrix} \cos(t^2) & \sin(t^2) \\ -\sin(t^2) & \cos(t^2) \end{bmatrix} x_0 \right\|
= e^{-t^2} \left\| \begin{bmatrix} \cos(t^2) & \sin(t^2) \\ -\sin(t^2) & \cos(t^2) \end{bmatrix} x_0 \right\|
\leq e^{-t^2} \|x_0\|

since the rotation matrix has 2-norm equal to one. Claiming global exponential stability here is clear.
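A numerical sketch (not part of the official solution, assuming numpy and scipy are available) that integrates the matrix ODE above with A(t) = t·A0 and compares the result against the claimed closed form at one time instant:

```python
import numpy as np
from scipy.integrate import solve_ivp

A0 = np.array([[-2.0, 2.0], [-2.0, -2.0]])

def rhs(t, phi_flat):
    Phi = phi_flat.reshape(2, 2)
    return (t * A0 @ Phi).ravel()        # d/dt Phi = A(t) Phi, flattened for the solver

t_end = 1.5
sol = solve_ivp(rhs, (0.0, t_end), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
Phi_numeric = sol.y[:, -1].reshape(2, 2)

s = t_end**2
Phi_formula = np.exp(-s) * np.array([[np.cos(s), np.sin(s)],
                                     [-np.sin(s), np.cos(s)]])
print(np.allclose(Phi_numeric, Phi_formula, atol=1e-6))   # True
```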
(c) Each of the frozen systems in part (d) is time invariant, so it is sufficient to look at the eigenvalues
of its A matrix; all stability properties are equivalent in this case (Theorem 6.2). For the system
frozen at t = 1 we have

λ1,2 = -2 ± 2i,

which means Re[λ] < 0, so this system is asymptotically/exponentially stable (the two notions are
equivalent for LTI systems). For t = -1 the eigenvalues are 2 ± 2i, so Re[λ] > 0 and the system is
unstable, while for t = 0 the dynamics are ẋ(t) = 0, so the system is stable but not asymptotically stable.
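The three eigenvalue computations can be reproduced in a couple of lines (a sketch assuming numpy is available), matching the stability conclusions above:

```python
import numpy as np

A0 = np.array([[-2.0, 2.0], [-2.0, -2.0]])
for t in (-1.0, 0.0, 1.0):
    # Frozen dynamics xdot = (t * A0) x
    print("t =", t, "eigenvalues:", np.round(np.linalg.eigvals(t * A0), 6))
# t = -1: 2 +/- 2j (unstable); t = 0: 0, 0 (not asymptotically stable); t = 1: -2 +/- 2j
```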

(d) It is impossible to make a global statement about the stability of a non-linear system using
a linearized one. However, as suggested by Lyapunov's Indirect Method, provided that the
higher order terms around the linearization point of the non-linear system are small enough,
we can claim that there exists a region around the origin in which the non-linear system is also
exponentially stable. The region of attraction, however, is unknown. All in all, the non-linear
system is locally exponentially stable on an unspecified neighborhood of the origin.
Exercise 4 [25%]
Suppose that the linear time invariant system


\dot{x}(t) = Ax(t) + Bu(t) = \begin{bmatrix} -2 & 1 & 2 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} u(t)

y(t) = Cx(t) = \begin{bmatrix} 0 & 1 & 0 \end{bmatrix} x(t)
represents the attitude dynamics of a hypothetical satellite designed by Abbey Road Space Systems™,
where you are employed in the Control Division.
(a) [6%] Is the satellite controllable? Is it stabilizable? Justify your answer in each case.
(b) [7%] The team manager John asks you to design a time-invariant state feedback controller of
the form u(t) = Kx(t) so that the closed loop system converges to the origin at the rate e^{-2t}.
Is this a reasonable requirement? If so, design a feedback matrix K ∈ ℝ^{1×3} to implement it.
(c) [5%] John asked your colleague Paul to design an observer

\dot{\hat{x}}(t) = A\hat{x}(t) + Bu(t) + L(y(t) - C\hat{x}(t))

so that the observation error decays to zero at the rate e^{-3t}. As Paul was unable to perform
that task, John decided to fire him. Help Paul save his job by explaining to John that his
request was unreasonable.
(d) [7%] Your colleague Ringo from the Electronics Division comes up with a star tracker that can
be used to directly measure the first state of the system. The model of the resulting satellite
becomes

\dot{x}(t) = \begin{bmatrix} -2 & 1 & 2 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} u(t)

y(t) = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix} x(t)

Is it now possible to satisfy John's requirement from Part (c)? Design, if possible, a gain
matrix L ∈ ℝ^{3×2} so that the observation error decays to zero at the rate e^{-3t}.
Solution
(1) The controllability matrix

P = \begin{bmatrix} B & AB & A^2B \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}

has rank 2, which is less than the order of the system. Hence, the satellite is uncontrollable.
To check if it is stabilizable, one should check whether the uncontrollable modes are asymptotically
stable, i.e. whether the real parts of the uncontrollable eigenvalues are less than zero. We have

|\lambda I - A| = \begin{vmatrix} \lambda + 2 & -1 & -2 \\ 0 & \lambda & -1 \\ 0 & -1 & \lambda \end{vmatrix} = (\lambda + 2)(\lambda^2 - 1)

The eigenvalues of the system are +1, -1, and -2. Using the rank test Rank[\lambda I - A \;\; B] = n
we can determine which modes are uncontrollable. We have

Rank \begin{bmatrix} 3 & -1 & -2 & 0 \\ 0 & 1 & -1 & 1 \\ 0 & -1 & 1 & 0 \end{bmatrix} = 3, so eigenvalue 1 is controllable,

Rank \begin{bmatrix} 1 & -1 & -2 & 0 \\ 0 & -1 & -1 & 1 \\ 0 & -1 & -1 & 0 \end{bmatrix} = 3, so eigenvalue -1 is controllable,

Rank \begin{bmatrix} 0 & -1 & -2 & 0 \\ 0 & -2 & -1 & 1 \\ 0 & -1 & -2 & 0 \end{bmatrix} = 2, so eigenvalue -2 is uncontrollable.

The uncontrollable eigenvalue is asymptotically stable and therefore we conclude that the
system is stabilizable.
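These rank computations are easy to reproduce numerically (a sketch assuming numpy is available; A and B are the matrices reconstructed from the problem statement):

```python
import numpy as np

A = np.array([[-2.0, 1.0, 2.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
B = np.array([[0.0], [1.0], [0.0]])

P = np.hstack([B, A @ B, A @ A @ B])
print("rank of [B AB A^2B]:", np.linalg.matrix_rank(P))          # 2

for lam in np.linalg.eigvals(A):
    pbh = np.hstack([lam * np.eye(3) - A, B])                    # PBH test
    print("lambda =", np.round(lam, 3), "-> PBH rank", np.linalg.matrix_rank(pbh))
# Only lambda = -2 loses rank, so the -2 mode is the uncontrollable one.
```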
(2) The convergence rate of the response of a stable system is determined by the eigenvalue with
the largest real part. With a time-invariant state feedback controller of the form u(t) = Kx(t)
we can move only the controllable eigenvalues to desired values, but not the uncontrollable ones.
In this problem, the uncontrollable eigenvalue is at -2; therefore the maximum convergence rate
that can be achieved by linear state feedback is e^{-2t}. This is exactly what is required by the
task. Hence, such a controller can be designed.
Let

K = \begin{bmatrix} k_1 & k_2 & k_3 \end{bmatrix}

The characteristic polynomial of the closed loop system is

|\lambda I - (A + BK)| = \begin{vmatrix} \lambda + 2 & -1 & -2 \\ -k_1 & \lambda - k_2 & -(k_3 + 1) \\ 0 & -1 & \lambda \end{vmatrix}
= \lambda^3 + (2 - k_2)\lambda^2 + (-k_1 - 2k_2 - k_3 - 1)\lambda - 2k_1 - 2k_3 - 2

The desired characteristic polynomial is D(\lambda) = (\lambda + 2)^3 = \lambda^3 + 6\lambda^2 + 12\lambda + 8. By equating
the coefficients of the characteristic polynomial of the closed loop system and of the desired
one, we get values for the gain matrix K: k_2 = -4 and k_1 + k_3 = -5. One can select, for
example, k_1 = -5 and k_3 = 0.
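A short numerical check of the pole placement (a sketch assuming numpy; K = [-5, -4, 0] is the particular admissible choice k1 = -5, k2 = -4, k3 = 0):

```python
import numpy as np

A = np.array([[-2.0, 1.0, 2.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
B = np.array([[0.0], [1.0], [0.0]])
K = np.array([[-5.0, -4.0, 0.0]])

# Approximately [-2, -2, -2]; a repeated (defective) eigenvalue is numerically
# sensitive, hence the rounding.
print(np.round(np.linalg.eigvals(A + B @ K), 3))
```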
(3) The observability matrix

O = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}

has rank 2, which is less than the order of the system. Hence, the system is unobservable.
However, it may happen that the system is detectable and that the unobservable eigenvalue is at -3
already. To check this one can find the characteristic polynomial of the error dynamics

|\lambda I - (A - LC)| = \begin{vmatrix} \lambda + 2 & l_1 - 1 & -2 \\ 0 & \lambda + l_2 & -1 \\ 0 & l_3 - 1 & \lambda \end{vmatrix} = (\lambda + 2)\left[\lambda(\lambda + l_2) + l_3 - 1\right]

The unobservable mode is at -2, and therefore John's requirement to place the eigenvalues
at -3 cannot be fulfilled.
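The same conclusion can be checked numerically with the PBH observability test (a sketch assuming numpy is available):

```python
import numpy as np

A = np.array([[-2.0, 1.0, 2.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
C = np.array([[0.0, 1.0, 0.0]])

O = np.vstack([C, C @ A, C @ A @ A])
print("rank of [C; CA; CA^2]:", np.linalg.matrix_rank(O))        # 2

for lam in (1.0, -1.0, -2.0):
    pbh = np.vstack([lam * np.eye(3) - A, C])
    print("lambda =", lam, "-> PBH rank", np.linalg.matrix_rank(pbh))
# Only lambda = -2 drops rank, i.e. the -2 mode is the unobservable one.
```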
(4) Now we can do what John was asking, since the new system is observable. Its observability
matrix

O = \begin{bmatrix} C \\ CA \\ CA^2 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \\ -2 & 1 & 2 \\ 0 & 1 & 0 \\ 4 & 0 & -3 \end{bmatrix}

clearly has full rank. Let

L = \begin{bmatrix} l_1 & l_4 \\ l_2 & l_5 \\ l_3 & l_6 \end{bmatrix}

We must design L such that A - LC has its eigenvalues at -3. Now

LC = \begin{bmatrix} l_1 & l_4 \\ l_2 & l_5 \\ l_3 & l_6 \end{bmatrix} \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix} = \begin{bmatrix} l_4 & l_1 & 0 \\ l_5 & l_2 & 0 \\ l_6 & l_3 & 0 \end{bmatrix}

A - LC = \begin{bmatrix} -2 - l_4 & 1 - l_1 & 2 \\ -l_5 & -l_2 & 1 \\ -l_6 & 1 - l_3 & 0 \end{bmatrix}

Since the matrix A has a block triangular structure, it is easily seen that l_1, l_5, and l_6 are
redundant degrees of freedom and can be assigned to zero beforehand, in order to make the matrix
LC as sparse as possible and keep A - LC block triangular. With the coefficient l_4 we can influence
the eigenvalue in the upper-left block of the closed loop system, and with the coefficients
l_2 and l_3 we can influence the eigenvalues in the lower-right block, and therefore assign the
eigenvalues of the closed loop system to desired values. We have

|\lambda I - (A - LC)| = \begin{vmatrix} \lambda + 2 + l_4 & -1 & -2 \\ 0 & \lambda + l_2 & -1 \\ 0 & l_3 - 1 & \lambda \end{vmatrix} = (\lambda + 2 + l_4)(\lambda^2 + l_2\lambda + l_3 - 1)

The desired characteristic polynomial can be decomposed as D(\lambda) = (\lambda + 3)^3 = (\lambda +
3)(\lambda^2 + 6\lambda + 9). By equating the coefficients of the characteristic polynomial of the closed
loop system and of the desired one, we get the values for the gain matrix L: l_4 = 1, l_2 = 6,
and l_3 = 10.
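A final numerical check (a sketch assuming numpy; L is built from the gains above with l1 = l5 = l6 = 0):

```python
import numpy as np

A = np.array([[-2.0, 1.0, 2.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
C = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])
L = np.array([[0.0, 1.0],      # row [l1, l4]
              [6.0, 0.0],      # row [l2, l5]
              [10.0, 0.0]])    # row [l3, l6]

# Approximately [-3, -3, -3]; a repeated eigenvalue is numerically sensitive,
# hence the rounding.
print(np.round(np.linalg.eigvals(A - L @ C), 3))
```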
