
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 34, NO. 11, NOVEMBER 1989

Adaptive Control of Linearizable Systems

S. Shankar Sastry and Alberto Isidori

Manuscript received March 4, 1988; revised March 10, 1989. Paper recommended by Past Associate Editor P. Crouch. This work was supported in part by NASA under Grant NAG2-243 and by the U.S. Army Research Office under Grant DAAG29-85-K-0072. The work of the second author was supported in part by a McKay Visiting Professorship held at the University of California, Berkeley.
S. S. Sastry is with the Electronics Research Laboratory, University of California, Berkeley, CA 94720.
A. Isidori is with the Dipartimento di Informatica e Sistemistica, Università di Roma "La Sapienza," Rome, Italy.
IEEE Log Number 8930790.

Abstract: In this paper we give some initial results on the adaptive control of minimum-phase nonlinear systems which are exactly input-output linearizable by state feedback. Parameter adaptation is used as a technique to robustify the exact cancellation of nonlinear terms, which is called for in the linearization technique. Only the continuous-time case is discussed in this paper; extensions to the discrete-time and sampled-data case are not obvious.

I. INTRODUCTION

IT is well known that, under rather mild assumptions, the input-output response of a nonlinear system can be rendered linear by means of state feedback. This was implicitly or explicitly pointed out in several papers dealing with the study of noninteracting control of nonlinear systems, like those of Porter [22], Singh and Rugh [16], Freund [17], and Isidori, Krener, Gori-Giorgi, and Monaco [18]. Independently, a substantially identical synthesis technique was successfully implemented in some relevant practical applications, like the control of flight dynamics (Meyer and Cicolani [10]) and the control of rigid-link robot manipulators via the so-called computed torque method, to mention a few.

Parallel to these developments, beginning with the work of Brockett [23], several authors studied the problem of when the differential equation relating the input to the state can be rendered linear via state feedback and coordinate transformation. The problem was completely solved by Jakubczyk and Respondek [24] and, independently, by Hunt, Su, and Meyer [25]. The former design technique is often referred to as exact input-output linearization, while the latter one as exact state-space linearization. The bridge between the two techniques lies in the fact that the design of a state-space linearizing control is equivalent to the design of output functions for which input-output linearization is possible. The theory is now well developed and understood (see, for instance, the expository surveys in Isidori [26], Isidori [8], and Claude [4] for the continuous-time case). For the discrete-time and sampled-data versions of the theory, see Monaco and Normand-Cyrot [11], Monaco, Normand-Cyrot, and Stornelli [12], and Jakubczyk [27]. The class of systems is described (in the continuous-time case) by

$$\dot{x} = f(x) + \sum_{i=1}^{p} g_i(x)\,u_i$$
$$y_1 = h_1(x)$$
$$\;\;\vdots$$
$$y_p = h_p(x)$$

with $x \in \mathbb{R}^n$, $u, y \in \mathbb{R}^p$, and $f, g_i, h_i$ smooth functions.

A number of applications of these techniques have been made; their chief drawback, however, appears to arise from the fact that they rely on an exact cancellation of nonlinear terms in order to get linear input-output behavior. Consequently, if there are errors or uncertainty in the model of the nonlinear terms, the cancellation is no longer exact. In this paper we suggest the use of parameter adaptive control to help robustify, i.e., make asymptotically exact, the cancellation of nonlinear terms when the uncertainty in the nonlinear terms is parametric. Some other attempts in this regard have been made by Marino and Nicosia [14] and Nicosia and Tomei [15], using a combination of high gain, sliding modes, and adaptation. Some previous work in this spirit is in Nam and Arapostathis [13]. Our development is, we believe, considerably more general and straightforward than theirs (specifically, no error augmentation and stronger stability theorems) and was in turn motivated by our work in the adaptive control of a specific class of linearizable systems, namely rigid-link robot manipulators (see Craig, Hsu, and Sastry [5] for details, including implementation of the scheme on an industrial robot arm).

We would also like to mention the work done in parallel by Taylor, Kokotovic, and Marino [20] on the adaptive control of fully state-linearizable single-input, single-output systems. While our scheme specializes to their scheme in the instance that the system is state (rather than input-output) linearizable, their paper also considers the effect of parasitic dynamics on the adaptation scheme. Taylor, Kokotovic, and Marino prove the robustness of their scheme to parasitics; we have, however, not undertaken such a study here.

The paper is organized as follows: we give a brief review of input-output linearization theory for continuous-time systems, along with the concept of a minimum-phase nonlinear system as developed in Byrnes and Isidori [3], in Section II. We discuss the adaptive version of this control strategy in Section III along with its applications to the adaptive control of robot manipulators. In Section IV, we collect a few comments about the discrete-time and sampled-data cases along with some future directions.

II. REVIEW OF EXACT LINEARIZATION TECHNIQUES

A. Basic Theory

A large class of nonlinear control systems can be made to have linear input-output behavior through a choice of nonlinear state feedback control laws. We review the theory here in order to fix notation. Consider the single-input, single-output system

$$\dot{x} = f(x) + g(x)\,u$$
$$y = h(x) \tag{2.1}$$

with $x \in \mathbb{R}^n$ and $f, g, h$ smooth. Differentiating $y$ with respect to time, one obtains

$$\dot{y} = L_f h + (L_g h)\,u. \tag{2.2}$$

Here, $L_f h$, $L_g h$ stand for the Lie derivatives of $h$ with respect to $f$, $g$, respectively. If $(L_g h)(x) \neq 0$ for all $x \in \mathbb{R}^n$, then a control law of the form $u = \alpha(x) + \beta(x)v$, namely




$$u = \frac{1}{L_g h}\,(-L_f h + v)$$

yields the linear system

$$\dot{y} = v. \tag{2.3}$$

In the instance that $L_g h(x) \equiv 0$, one differentiates (2.2) further to obtain

$$\ddot{y} = L_f^2 h + (L_g L_f h)\,u. \tag{2.4}$$

In (2.4) above, $L_f^2 h$ stands for $L_f(L_f h)$ and $L_g L_f h$ stands for $L_g(L_f h)$. As before, if $L_g L_f h \neq 0$ for all $x \in \mathbb{R}^n$, the law

$$u = \frac{1}{L_g L_f h}\,(-L_f^2 h + v)$$

linearizes the system (2.4) to yield

$$\ddot{y} = v.$$

More generally, if $\gamma$ is the smallest integer such that $L_g L_f^i h = 0$ for $i = 0, \ldots, \gamma-2$ and $L_g L_f^{\gamma-1} h(x) \neq 0$ for all $x \in \mathbb{R}^n$, then the control law

$$u = \frac{1}{L_g L_f^{\gamma-1} h}\,(-L_f^{\gamma} h + v) \tag{2.5}$$

yields

$$y^{(\gamma)} = v. \tag{2.6}$$

The theory is considerably more complicated if $L_g L_f^{\gamma-1} h = 0$ for some values of $x$. We do not discuss this case here.
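As an aside for the reader who wants to experiment, the Lie-derivative bookkeeping above is easy to mechanize symbolically. The following is a minimal sketch (not from the paper), assuming the sympy library and a hypothetical two-state example with $f(x) = (x_2, -\sin x_1)$, $g = (0, 1)$, $h(x) = x_1$; it finds the strong relative degree by differentiating $h$ until the input appears and then forms the linearizing law (2.5).

```python
# Sketch (not from the paper): Lie derivatives, relative degree, and the
# linearizing law (2.5) for a hypothetical SISO example, computed with sympy.
import sympy as sp

x1, x2, v = sp.symbols('x1 x2 v')
x = sp.Matrix([x1, x2])

f = sp.Matrix([x2, -sp.sin(x1)])   # example drift vector field f(x)
g = sp.Matrix([0, 1])              # example input vector field g(x)
h = x1                             # example output y = h(x)

def lie(vec, scalar, x):
    """Lie derivative of a scalar field along a vector field."""
    return (sp.Matrix([scalar]).jacobian(x) * vec)[0, 0]

# Differentiate the output until the input appears (strong relative degree).
gamma, Lfk = 1, h                  # Lfk holds L_f^{gamma-1} h
while gamma <= len(x) and sp.simplify(lie(g, Lfk, x)) == 0:
    Lfk = lie(f, Lfk, x)
    gamma += 1

LgLf_h = lie(g, Lfk, x)            # L_g L_f^{gamma-1} h
Lf_gamma_h = lie(f, Lfk, x)        # L_f^{gamma} h

# Linearizing feedback (2.5), valid wherever L_g L_f^{gamma-1} h(x) != 0.
u = sp.simplify((-Lf_gamma_h + v) / LgLf_h)
print(gamma, u)                    # for this example: gamma = 2, u = v + sin(x1)
```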
For the multiinput, multioutput case, consider the $p$-input, $p$-output nonlinear system of the form

$$\dot{x} = f(x) + g_1(x)\,u_1 + \cdots + g_p(x)\,u_p$$
$$y_1 = h_1(x)$$
$$\;\;\vdots$$
$$y_p = h_p(x). \tag{2.7}$$

Here $x \in \mathbb{R}^n$, $u \in \mathbb{R}^p$, $y \in \mathbb{R}^p$, and $f, g_i, h_i$ are assumed smooth. Now, differentiate the outputs $y_j$ with respect to time to get

$$\dot{y}_j = L_f h_j + \sum_{i=1}^{p} (L_{g_i} h_j)\,u_i. \tag{2.8}$$

In (2.8), $L_f h_j$ stands for the Lie derivative of $h_j$ with respect to $f$, and similarly $L_{g_i} h_j$. Note that if each of the $(L_{g_i} h_j)(x) \equiv 0$, then the inputs do not appear in (2.8). Define $\gamma_j$ to be the smallest integer such that at least one of the inputs appears in $y_j^{(\gamma_j)}$, i.e.,

$$y_j^{(\gamma_j)} = L_f^{\gamma_j} h_j + \sum_{i=1}^{p} L_{g_i}(L_f^{\gamma_j-1} h_j)\,u_i \tag{2.9}$$

with at least one of the $L_{g_i}(L_f^{\gamma_j-1} h_j) \neq 0$ for all $x$. Define the $p \times p$ matrix $A(x)$ as

$$A(x) = \begin{bmatrix} L_{g_1}(L_f^{\gamma_1-1} h_1) & \cdots & L_{g_p}(L_f^{\gamma_1-1} h_1) \\ \vdots & & \vdots \\ L_{g_1}(L_f^{\gamma_p-1} h_p) & \cdots & L_{g_p}(L_f^{\gamma_p-1} h_p) \end{bmatrix}. \tag{2.10}$$

Then equations (2.9) may be written as

$$\begin{bmatrix} y_1^{(\gamma_1)} \\ \vdots \\ y_p^{(\gamma_p)} \end{bmatrix} = \begin{bmatrix} L_f^{\gamma_1} h_1 \\ \vdots \\ L_f^{\gamma_p} h_p \end{bmatrix} + A(x) \begin{bmatrix} u_1 \\ \vdots \\ u_p \end{bmatrix}. \tag{2.11}$$

If $A(x) \in \mathbb{R}^{p \times p}$ is bounded away from singularity, the state feedback control law

$$u = -A(x)^{-1} \begin{bmatrix} L_f^{\gamma_1} h_1 \\ \vdots \\ L_f^{\gamma_p} h_p \end{bmatrix} + A(x)^{-1}\,v \tag{2.12}$$

yields the closed-loop decoupled, linear system

$$\begin{bmatrix} y_1^{(\gamma_1)} \\ \vdots \\ y_p^{(\gamma_p)} \end{bmatrix} = v. \tag{2.13}$$

Once linearization has been achieved, any further control objective such as model matching, pole placement, or tracking may easily be met. The feedback law (2.12) is referred to as a static-state feedback linearizing control law.

If $A(x)$ defined in (2.10) is singular, linearization may still be achieved using dynamic state feedback. The development may be followed by using integrators before some of the inputs; exact conditions under which linearization may be achieved by dynamic state feedback are given, for instance, in Descusse and Moog [6].
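Numerically, the decoupling law (2.12) is just a linear solve at each state. The sketch below is an assumption-laden illustration, not part of the paper: `A_of_x` and `b_of_x` stand for user-supplied routines returning $A(x)$ and the vector of $L_f^{\gamma_i} h_i$, and the two-input example at the end is hypothetical.

```python
# Sketch: the static decoupling law (2.12), u = -A(x)^{-1} b(x) + A(x)^{-1} v,
# with b(x) = (L_f^{gamma_1} h_1, ..., L_f^{gamma_p} h_p)^T.
import numpy as np

def decoupling_control(A_of_x, b_of_x, x, v, cond_limit=1e6):
    """Return u solving A(x) u = v - b(x); A_of_x, b_of_x are user-supplied callables."""
    A = np.asarray(A_of_x(x), dtype=float)
    b = np.asarray(b_of_x(x), dtype=float)
    if np.linalg.cond(A) > cond_limit:
        raise ValueError("A(x) is numerically singular: static state feedback "
                         "fails here; dynamic extension would be needed.")
    return np.linalg.solve(A, v - b)

# Hypothetical two-input example.
A_ex = lambda x: [[1.0, x[0]], [0.0, 1.0 + x[1]**2]]
b_ex = lambda x: [x[0]*x[1], np.sin(x[0])]
u = decoupling_control(A_ex, b_ex, x=np.array([0.3, -1.0]), v=np.array([1.0, 0.0]))
```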
B. Minimum-Phase Nonlinear Systems

We briefly review the definitions of minimum-phase nonlinear systems due to Byrnes and Isidori [3].

1) The Single-Input, Single-Output Case: The theory is much simpler for the single-input, single-output case. We recall the following definition.

Definition: The system (2.1) is said to have strong relative degree $\gamma$ if

$$L_g h(x) = L_g L_f h(x) = \cdots = L_g L_f^{\gamma-2} h(x) = 0$$

and

$$L_g L_f^{\gamma-1} h(x) \neq 0 \quad \forall x \in \mathbb{R}^n.$$

Thus, the system (2.1) is said to have strong relative degree $\gamma$ if at each $x \in \mathbb{R}^n$ the output $y$ needs to be differentiated $\gamma$ times before terms involving the input appear on the right-hand side, as in (2.5) and (2.6) above.

If a system has strong relative degree $\gamma$, it is easy to verify that at each $x_0 \in \mathbb{R}^n$ there exists a neighborhood $U_0$ of $x_0$ such that the mapping

$$T : U_0 \to \mathbb{R}^n$$

defined as

$$T_1(x) = z_{11} = h(x)$$
$$T_2(x) = z_{12} = L_f h(x)$$
$$\;\;\vdots$$
$$T_{\gamma}(x) = z_{1\gamma} = L_f^{\gamma-1} h(x)$$

with

$$dT_i(x)\,g(x) = 0 \quad \text{for } i = \gamma+1, \ldots, n$$

is a diffeomorphism onto its image. If we set $z_2 = (T_{\gamma+1}, \ldots, T_n)^T$, it follows that the equations (2.1) may be written in the normal form as

$$\dot{z}_{11} = z_{12}$$
$$\;\;\vdots$$
$$\dot{z}_{1,\gamma-1} = z_{1\gamma}$$
$$\dot{z}_{1\gamma} = f_1(z_1, z_2) + g_1(z_1, z_2)\,u$$
$$\dot{z}_2 = \psi(z_1, z_2) \tag{2.20}$$

$$y = z_{11}. \tag{2.21}$$

In (2.20), $f_1(z_1, z_2)$ represents $L_f^{\gamma} h(x)$ and $g_1(z_1, z_2)$ represents $L_g L_f^{\gamma-1} h(x)$. Now if $x = 0$ is an equilibrium point of the undriven system (i.e., $f(0) = 0$) and $h(0) = 0$, then the dynamics

$$\dot{z}_2 = \psi(0, z_2) \tag{2.22}$$

are referred to as the zero dynamics.

Remark: The dynamics (2.22) are referred to as the zero dynamics since they are the dynamics which are made unobservable by state feedback. It might help the reader to note that the linearizing state feedback law is the nonlinear equivalent of placing some of the closed-loop poles at the zeros of the system, thereby rendering them unobservable.

Note that the subset

$$L = \{x \in U_0 : h(x) = L_f h(x) = \cdots = L_f^{\gamma-1} h(x) = 0\} = \{x \in U_0 : z_1 = 0\}$$

can be made invariant by choosing

$$u = \frac{1}{L_g L_f^{\gamma-1} h}\,(-L_f^{\gamma} h + v). \tag{2.23}$$

The dynamics of (2.22) are the dynamics on this subspace. The nonlinear system (2.1) is said to be minimum phase if the zero dynamics are asymptotically stable.

Remark: Note that the previous analysis identifies the normal form (2.20), (2.21) and the zero dynamics (2.22) only locally, around any point $x_0$ of $\mathbb{R}^n$. Recent work of Byrnes and Isidori [29] has identified sufficient conditions for the existence of a globally defined normal form. They have shown that a global version of the notion of zero dynamics is that of a dynamical system evolving on the smooth submanifold of $\mathbb{R}^n$

$$L^* = \{x \in \mathbb{R}^n : h(x) = L_f h(x) = \cdots = L_f^{\gamma-1} h(x) = 0\}$$

and thereby defined by the vector field

$$f^*(x) = f(x) - g(x)\,\frac{L_f^{\gamma} h(x)}{L_g L_f^{\gamma-1} h(x)}$$

(note that this is a vector field on $L^*$ because $f^*(x)$ is tangent to $L^*$). If $L^*$ is connected and the zero dynamics are globally asymptotically stable (i.e., if the system is globally minimum phase), then the normal forms are globally defined if the vector fields

$$g(x),\; \mathrm{ad}_f g(x),\; \ldots,\; \mathrm{ad}_f^{\gamma-1} g(x)$$

are complete, where $\mathrm{ad}_f g := [f, g]$ denotes the Lie bracket and $\mathrm{ad}_f^i g := [f, \mathrm{ad}_f^{i-1} g]$. Note that this condition can be guaranteed by requiring that the vector fields in question are globally Lipschitz continuous, for example. In this paper we systematically assume the global minimum-phase property and the existence of globally defined normal forms.

An interesting application of the notion of normal form and the minimum-phase property is the following one. Assume the control $u$ in (2.23) is chosen so that $y(t)$ tracks $y_M(t)$, i.e.,

$$v = y_M^{(\gamma)} + \alpha_1\,(y_M^{(\gamma-1)} - y^{(\gamma-1)}) + \cdots + \alpha_{\gamma}\,(y_M - y) \tag{2.24}$$

with $\alpha_1, \ldots, \alpha_{\gamma}$ chosen so that $s^{\gamma} + \alpha_1 s^{\gamma-1} + \cdots + \alpha_{\gamma}$ is a Hurwitz polynomial. It is easy to see that this control results in asymptotic tracking and bounded state $z_1$ (or, equivalently, bounded $y, \dot{y}, \ldots, y^{(\gamma-1)}$) provided $y_M, \dot{y}_M, \ldots, y_M^{(\gamma-1)}$ are bounded.
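A small simulation makes the point concrete. The sketch below is an assumption-laden illustration, not from the paper: it applies the tracking law (2.24) with $\gamma = 2$ to a hypothetical system already written in the normal form (2.20), with scalar zero dynamics $\dot{z}_2 = -z_2 + z_{11}$. The tracking error decays while $z_2$ remains bounded, which is what Proposition 2.1 below formalizes.

```python
# Sketch: tracking law (2.24) on a hypothetical normal-form system (2.20),
# gamma = 2, one-dimensional, exponentially stable zero dynamics.
import numpy as np
from scipy.integrate import solve_ivp

a1, a2 = 2.0, 1.0                        # s^2 + a1 s + a2 = (s + 1)^2, Hurwitz
yM   = lambda t: np.sin(t)               # bounded reference with bounded derivatives
yMd  = lambda t: np.cos(t)
yMdd = lambda t: -np.sin(t)

def rhs(t, z):
    z11, z12, z2 = z
    f1 = z11 * z2                        # plays the role of L_f^2 h in (2.20)
    g1 = 1.0                             # plays the role of L_g L_f h
    v  = yMdd(t) + a1*(yMd(t) - z12) + a2*(yM(t) - z11)   # tracking law (2.24)
    u  = (v - f1) / g1                   # exact linearizing law (2.5)/(2.23)
    return [z12, f1 + g1*u, -z2 + z11]   # normal form; zero dynamics are stable

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0, 2.0], max_step=0.01)
track_err = sol.y[0] - yM(sol.t)         # y - y_M decays; z2 stays bounded
```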
Proposition 2.1 (Bounded Tracking in Minimum-Phase Systems): Assume that the zero dynamics of the nonlinear system (2.1), or equivalently (2.20), (2.21), as defined in (2.22), are exponentially stable. Further assume that $\psi(z_1, z_2)$ in (2.20) is Lipschitz in $z_1, z_2$. Then the control law (2.24) results in bounded tracking [i.e., $x \in \mathbb{R}^n$ bounded and $y(t) \to y_M(t)$] provided that $y_M, \dot{y}_M, \ldots, y_M^{(\gamma-1)}$ are bounded.

Proof: From the foregoing discussion it only remains to show that $z_2$ is bounded. We accomplish this using a converse theorem of Lyapunov (see Hahn [7]). This proof technique has also been used in Bodson and Sastry [2].

First, since (2.22) is exponentially stable and $\psi$ is Lipschitz in $z_2$, a converse Lyapunov theorem implies that there exists $V(z_2)$ such that

$$\alpha_1 |z_2|^2 \le V(z_2) \le \alpha_2 |z_2|^2, \qquad \frac{\partial V}{\partial z_2}\,\psi(0, z_2) \le -\alpha_3 |z_2|^2, \qquad \Bigl|\frac{\partial V}{\partial z_2}\Bigr| \le \alpha_4 |z_2|. \tag{2.25}$$

Now the control law (2.24) yields bounded $z_1$, i.e.,

$$|z_1(t)| \le K \quad \forall t. \tag{2.26}$$

Using (2.25) in (2.20) yields

$$\dot{V} = \frac{\partial V}{\partial z_2}\,\psi(0, z_2) + \frac{\partial V}{\partial z_2}\,[\psi(z_1, z_2) - \psi(0, z_2)] \le -\alpha_3 |z_2|^2 + \alpha_4 K L\,|z_2| \tag{2.27}$$

with $L$ representing the Lipschitz constant of $\psi(z_1, z_2)$ with respect to $z_1$. It is now easy to see that

$$\dot{V} \le 0 \quad \text{for } |z_2| \ge \alpha_4 K L / \alpha_3.$$

Using this along with the bounds (2.25), it is easy to establish that $z_2$ is bounded.

Remarks:
1) Proposition 2.1 establishes that a bounded input to the exponentially stable, unobservable dynamics yields a bounded state trajectory $z_2$.
2) The hypothesis of Proposition 2.1 calls for a strong form of stability, namely exponential stability; in fact, counterexamples to the Proposition exist if the zero dynamics are not exponentially stable, for example, if some of the eigenvalues of $(\partial/\partial z_2)\,\psi(0, z_2)$ lie on the $j\omega$-axis.
3) The hypotheses of Proposition 2.1 can, however, be weakened substantially by requiring only that all trajectories of (2.22) are eventually attracted to a compact set, for instance, by requiring that

$$z_2^T \psi(0, z_2) \le -\alpha |z_2|^2 \quad \text{for } |z_2| \ge R. \tag{2.28}$$

Condition (2.28) can be thought of as an attractivity condition: a simple contradiction argument involving the derivative of $|z_2(t)|^2$ along (2.22) should convince the reader that (2.28) guarantees that all trajectories of (2.22) eventually enter a ball of radius $R$. With condition (2.28) replacing the exponential stability hypotheses of Proposition 2.1, and the Lipschitz dependencies as before, we see that the proof goes through. Condition (2.28) itself can be restated in a form reminiscent of (2.25) involving a more general

Lyapunov function $V(z_2)$, with the weakening that

$$\frac{\partial V}{\partial z_2}\,\psi(0, z_2) \le -\alpha_3 |z_2|^2 \quad \text{only for } |z_2| \ge R. \tag{2.29}$$

Thus, bounded tracking only requires that the conditions (2.25) hold outside a ball of radius $R$. We refer to this condition as exponential boundedness of the zero dynamics.

2) The Multiinput, Multioutput Case: Definitions of minimum phase for square multiinput, multioutput nonlinear systems parallel the development in the SISO case above only if the matrix $A(x)$ defined in (2.10) is nonsingular for all $x \in \mathbb{R}^n$. In this case, locally around any point $x_0$ of $\mathbb{R}^n$, a diffeomorphism $(z_1, z_2) = T(x)$ can be defined, with

$$z_1^T = (h_1, L_f h_1, \ldots, L_f^{\gamma_1-1} h_1,\; h_2, \ldots, L_f^{\gamma_2-1} h_2,\; \ldots,\; h_p, \ldots, L_f^{\gamma_p-1} h_p).$$

In these coordinates the equations (2.7) read

$$\dot{z}_{11} = z_{12}$$
$$\;\;\vdots$$
$$\dot{z}_{1\gamma_1} = f_1(z_1, z_2) + g_1(z_1, z_2)\,u$$

(and similarly for the chains of integrators associated with the remaining outputs), with

$$\dot{z}_2 = \psi_1(z_1, z_2) + \psi_2(z_1, z_2)\,u \tag{2.30}$$
$$y_1 = z_{11}$$
$$y_2 = z_{1,\gamma_1+1}$$
$$\;\;\vdots$$
$$y_p = z_{1,(\gamma_1 + \cdots + \gamma_{p-1} + 1)}. \tag{2.31}$$

In (2.30), $f_1(z_1, z_2)$ stands for $L_f^{\gamma_1} h_1(x)$, $g_1(z_1, z_2)$ for the first row of $A(x)$, and so on, in the $(z_1, z_2)$ coordinates. Note that the $z_2$ variables are driven by the input. Consequently, a change is needed in the definition of the zero dynamics. Let $u^*(z_1, z_2)$ be the linearizing control, i.e.,

$$u^*(z_1, z_2) = -\begin{bmatrix} g_1(z_1, z_2) \\ \vdots \\ g_p(z_1, z_2) \end{bmatrix}^{-1} \begin{bmatrix} f_1(z_1, z_2) \\ \vdots \\ f_p(z_1, z_2) \end{bmatrix}. \tag{2.32}$$

Using this control in the equations for $z_2$, and assuming as before that $0 \in \mathbb{R}^n$ is the equilibrium point of the undriven system (i.e., $f(0) = 0$) and $h_1(0) = \cdots = h_p(0) = 0$, we see that the subspace $\{(0, z_2)\} \subset \mathbb{R}^n$ is an invariant subspace and the zero dynamics are the dynamics of

$$\dot{z}_2 = \psi_1(0, z_2) + \psi_2(0, z_2)\,u^*(0, z_2) := \psi(0, z_2). \tag{2.33}$$

If $A(x)$ is nonsingular for all $x \in \mathbb{R}^n$, the global notion of zero dynamics and the conditions for the existence of normal forms are still similar to those illustrated in the SISO case. When global normal forms exist, then Proposition 2.1 can easily be verified to hold for tracking with bounded state variables if (2.33) is exponentially stable (and $\psi$ is Lipschitz in $z_1, z_2$). Also, the same remarks as those made after Proposition 2.1 hold for the case that the zero dynamics are exponentially attractive.

If $A(x)$ is singular, the definition of the zero dynamics is more subtle. As a matter of fact, there are different and nonequivalent ways to extend the concept of "transmission zero," as pointed out in Isidori and Moog [9], depending on which linear definition one chooses to generalize: 1) the dynamics associated with the subsystem that becomes unobservable when a certain state feedback is implemented (to the extent of maximizing unobservability); 2) the internal dynamics consistent with the constraint that the output is zero; 3) the dynamics of the inverse system. In particular, the notion behind 2) has some interesting features that render it particularly suitable for the design of stabilizing feedback (see Byrnes and Isidori [28]) and for the study of asymptotic tracking.

III. ADAPTIVE CONTROL OF LINEARIZABLE SYSTEMS

In practical implementations of exactly linearizing control laws, the chief drawback is that they are based on exact cancellation of nonlinear terms. If there is any uncertainty in the knowledge of the nonlinear functions $f$ and $g$, the cancellation is not exact and the resulting input-output equation is not linear. We suggest the use of parameter adaptive control to get asymptotically exact cancellation. The following simple example makes our philosophy clear.

A. The SISO Relative Degree One Case

Consider a SISO system of the form (2.1) with $L_g h(x) \neq 0$ (relative degree one). Further, let $f(x)$ and $g(x)$ have the form

$$f(x) = \sum_{i=1}^{n_1} \theta_i^1 f_i(x) \tag{3.1}$$

$$g(x) = \sum_{j=1}^{n_2} \theta_j^2 g_j(x) \tag{3.2}$$

with $\theta_i^1$, $i = 1, \ldots, n_1$, and $\theta_j^2$, $j = 1, \ldots, n_2$, unknown parameters and the $f_i(x)$, $g_j(x)$ known functions. At time $t$, our estimates of the functions $f$ and $g$ are, respectively,

$$\hat{f}(x) = \sum_{i=1}^{n_1} \hat{\theta}_i^1(t)\,f_i(x) \tag{3.3}$$

$$\hat{g}(x) = \sum_{j=1}^{n_2} \hat{\theta}_j^2(t)\,g_j(x) \tag{3.4}$$

with $\hat{\theta}_i^1$, $\hat{\theta}_j^2$ standing for the estimates of the parameters $\theta_i^1$, $\theta_j^2$, respectively, at time $t$. Consequently, the control law $u$ is replaced by

$$u = \frac{1}{\hat{L}_g h}\,(-\hat{L}_f h + v) \tag{3.5}$$

and $\hat{L}_g h$, $\hat{L}_f h$ are the estimates of $L_g h$, $L_f h$, respectively, based on (3.3), (3.4), i.e.,

$$\hat{L}_f h = \sum_{i=1}^{n_1} \hat{\theta}_i^1(t)\,L_{f_i} h \tag{3.6}$$

$$\hat{L}_g h = \sum_{j=1}^{n_2} \hat{\theta}_j^2(t)\,L_{g_j} h. \tag{3.7}$$

If we define $\theta \in \mathbb{R}^{n_1+n_2}$ to be the "true" parameter vector $(\theta^{1T}, \theta^{2T})^T$, $\hat{\theta} \in \mathbb{R}^{n_1+n_2}$ the parameter estimate, and $\phi = \theta - \hat{\theta}$ the parameter error, then using $u$ of (3.5) in (2.2) yields, after some calculation,

$$\dot{y} = v + \phi^{1T} w_1 + \phi^{2T} w_2 \tag{3.8}$$

with

$$w_1 = (L_{f_1} h, \ldots, L_{f_{n_1}} h)^T \tag{3.9}$$

and

$$w_2 = \bigl((L_{g_1} h)\,u, \ldots, (L_{g_{n_2}} h)\,u\bigr)^T. \tag{3.10}$$
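The calculation behind (3.8) is short; a sketch of it, under the sign convention $\phi = \theta - \hat{\theta}$ and with the regressors (3.9), (3.10) as written above, is

$$\dot{y} = L_f h + (L_g h)\,u = \hat{L}_f h + (\hat{L}_g h)\,u + (L_f h - \hat{L}_f h) + (L_g h - \hat{L}_g h)\,u.$$

By construction of $u$ in (3.5), $\hat{L}_f h + (\hat{L}_g h)\,u = v$, while by (3.1)-(3.4), (3.6), (3.7),

$$L_f h - \hat{L}_f h = \sum_{i=1}^{n_1} (\theta_i^1 - \hat{\theta}_i^1)\,L_{f_i} h = \phi^{1T} w_1, \qquad (L_g h - \hat{L}_g h)\,u = \sum_{j=1}^{n_2} (\theta_j^2 - \hat{\theta}_j^2)\,(L_{g_j} h)\,u = \phi^{2T} w_2,$$

which together give (3.8).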


The control law used for tracking is

$$v = \dot{y}_M + \alpha\,(y_M - y)$$

and yields the following error equation relating $y - y_M := e$ to the parameter error $\phi = (\phi^{1T}, \phi^{2T})^T$:

$$\dot{e} + \alpha e = \phi^T w. \tag{3.11}$$

Here $w \in \mathbb{R}^{n_1+n_2}$ is defined to be the concatenation of $w_1$, $w_2$. Now, it is easy to state the following theorem.

Theorem 3.1 (Adaptive Tracking): Consider a minimum-phase nonlinear system of the form (2.1) with the assumptions on $f$, $g$ as given in (3.1), (3.2). Define the control law

$$u = \frac{1}{\hat{L}_g h}\,\bigl(-\hat{L}_f h + \dot{y}_M + \alpha\,(y_M - y)\bigr). \tag{3.12}$$

Assume that $\hat{L}_g h$ as defined by (3.7) is bounded away from zero. Then, if $y_M$ is bounded, the parameter update law

$$\dot{\phi} = -(y - y_M)\,w \tag{3.13}$$

yields bounded $y(t)$ asymptotically converging to $y_M(t)$. Further, all state variables $x(t)$ of (2.1) are bounded.

Proof: The control law (3.12) yields the error equation (3.11), $\dot{e} + \alpha e = \phi^T w$, along with the update law (3.13), $\dot{\phi} = -e\,w$. The Lyapunov function $v(e, \phi) = \tfrac{1}{2} e^2 + \tfrac{1}{2} \phi^T \phi$ is decreasing along trajectories of (3.11), (3.13): $\dot{v}(e, \phi) = -\alpha e^2 \le 0$, thereby establishing bounded $e$, $\phi$. Also $\int_0^\infty e^2\,dt < \infty$. However, to establish that $e \to 0$ as $t \to \infty$, we need to verify that $e$ is uniformly continuous (or, alternately, that $\dot{e}$ is bounded). This in turn needs $w$, a continuous function of $x$ (well defined since $\hat{L}_g h$ is bounded away from zero), to be bounded. Now, note that bounded $e$ and bounded $y_M$ imply that $y$ is bounded. From this and the minimum-phase assumption (cf. Proposition 2.1) it follows that $x$ is bounded. Hence, $w$ is bounded, and $e$ is uniformly continuous and so tends to zero as $t \to \infty$.

Remarks:
1) Prior bounds on the parameters $\theta^2$ are frequently sufficient to guarantee that $\hat{L}_g h$ is bounded away from zero. Several standard techniques exist in the literature for this purpose (see Sastry and Bodson [19]).
2) Theorem 3.1 makes no statement about parameter convergence. As is standard in the literature, one can conclude from (3.11), (3.13) that $e$, $\phi$ both converge exponentially to zero if $w$ is sufficiently rich, i.e., there exist $\alpha_1, \alpha_2, \delta > 0$ such that

$$\alpha_1 I \le \int_t^{t+\delta} w\,w^T\,d\tau \le \alpha_2 I \quad \forall t. \tag{3.14}$$

The condition (3.14) is impossible to verify explicitly ahead of time since $w$ is a function of $x$.
3) It has recently become popular in the literature not to use adaptation but to replace the control law of (3.12) by the sliding mode control law

$$u = \frac{1}{\hat{L}_g h}\,\bigl(-\hat{L}_f h + \dot{y}_M + k\,\mathrm{sgn}\,(y_M - y)\bigr). \tag{3.15}$$

The error equation (3.11) is then replaced by one of the form

$$\dot{e} + k\,\mathrm{sgn}\,e = d(t) \tag{3.16}$$

where $d(t)$ is a mismatch term which may be easily bounded using bounds on the $f_i$, $g_j$, and the $\phi_i$'s above. It is then easy to see that if $k > \sup_t |d(t)|$, then the error $e$ goes to zero (in fact, in finite time). This philosophy is not at odds with adaptation as described in Theorem 3.1. We feel that it can be used quite gainfully when the parameter error $\phi(t)$ is small. If, however, $\phi$ is large, the gain $k$ is large, resulting in unacceptable chatter, large control activity, and other such undesirable behavior. Adaptation offers a less traumatic scheme of parameter tuning in this instance.
4) It is important to note that here, and in what follows, the parameter update laws require knowledge of the state variables. This in turn is necessitated by the state-feedback linearization methodology.
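To make the scheme of Theorem 3.1 concrete, the following simulation sketch (ours, not the paper's; all numerical choices are hypothetical) applies (3.12), (3.13) to a scalar system $\dot{x} = \theta^1 f_1(x) + \theta^2 g_1(x)u$, $y = x$, using a crude forward-Euler integration and a simple projection that keeps the estimate of $L_g h$ away from zero, in the spirit of Remark 1.

```python
# Sketch: adaptive tracking (3.12)-(3.13) on a hypothetical scalar relative-degree-one
# system; th1, th2 are the unknown "true" parameters, f1, g1 are known functions.
import numpy as np

f1 = lambda x: x**2
g1 = lambda x: 1.0
th1, th2 = -2.0, 3.0                  # true parameters (sign of th2 assumed known)
alpha, dt, T = 5.0, 1e-3, 20.0
yM  = lambda t: np.sin(t)
yMd = lambda t: np.cos(t)

x, th1_hat, th2_hat = 1.0, 0.0, 1.0   # initial state and parameter estimates
for k in range(int(T/dt)):
    t = k*dt
    e = x - yM(t)
    # control law (3.12), built from the current estimates
    u = (-th1_hat*f1(x) + yMd(t) + alpha*(yM(t) - x)) / (th2_hat*g1(x))
    # gradient update (3.13): phi = theta - theta_hat, phidot = -e*w, so theta_hat_dot = e*w
    w1, w2 = f1(x), g1(x)*u
    th1_hat += dt * e * w1
    th2_hat += dt * e * w2
    th2_hat = max(th2_hat, 0.5)       # crude projection: keep L_g h estimate away from zero
    # plant step
    x += dt * (th1*f1(x) + th2*g1(x)*u)
# after the transient, x tracks yM and the parameter estimates remain bounded
```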
B. Extensions to Higher Relative Degree SISO Systems

We first consider the extensions of the results of the previous section to SISO systems with relative degree $\gamma$, i.e., $L_g h = L_g L_f h = \cdots = L_g L_f^{\gamma-2} h = 0$ with $L_g L_f^{\gamma-1} h \neq 0$. The nonadaptive linearizing control law then is of the form

$$u = \frac{1}{L_g L_f^{\gamma-1} h}\,(-L_f^{\gamma} h + v). \tag{3.17}$$

If $f$ and $g$ are not completely known, but of the form (3.1), (3.2), we need to replace $L_f^{\gamma} h$ and $L_g L_f^{\gamma-1} h$ by their estimates. We define these as follows:

$$\widehat{L_f^{\gamma} h} := L_{\hat{f}}^{\gamma} h \tag{3.18}$$

$$\widehat{L_g L_f^{\gamma-1} h} := L_{\hat{g}} L_{\hat{f}}^{\gamma-1} h. \tag{3.19}$$

For $\gamma \ge 2$, (3.18), (3.19) are not linear in the unknown parameters $\hat{\theta}_i$. For example,

$$L_{\hat{f}}^2 h = \sum_{i=1}^{n_1} \sum_{j=1}^{n_1} \hat{\theta}_i^1 \hat{\theta}_j^1\,L_{f_i}(L_{f_j} h) \tag{3.20}$$

and

$$L_{\hat{g}} L_{\hat{f}} h = \sum_{j=1}^{n_2} \sum_{i=1}^{n_1} \hat{\theta}_j^2 \hat{\theta}_i^1\,L_{g_j}(L_{f_i} h) \tag{3.21}$$

and so on. The development of the preceding section could easily be repeated if we defined each of the parameter products to be a new parameter, in which case the $\theta_i^1 \theta_j^1$ and $\theta_j^2 \theta_i^1$ of (3.20) and (3.21) are parameters. Let $\theta \in \mathbb{R}^k$ be the (large!) $k$-dimensional vector of parameters $\theta_i^1$, $\theta_j^2$, $\theta_i^1 \theta_j^1$, $\theta_j^2 \theta_i^1$, and so on. Thus, for example, if $\gamma = 3$ then $\theta$ contains the $\theta_i^1$, $\theta_j^2$, $\theta_i^1 \theta_j^1$, $\theta_i^1 \theta_j^1 \theta_k^1$, $\theta_j^2 \theta_i^1$, and $\theta_j^2 \theta_i^1 \theta_k^1$. Now, for the purposes of tracking, the control law to be implemented is

$$v = y_M^{(\gamma)} + \alpha_1\,(y_M^{(\gamma-1)} - y^{(\gamma-1)}) + \cdots + \alpha_{\gamma}\,(y_M - y)$$

where $\dot{y} = L_f h$, $\ddot{y} = L_f^2 h$, etc., are state feedback terms. In the absence of precise information about $L_f h$, $L_f^2 h$, etc., the tracking law to be implemented is

$$\hat{v} = y_M^{(\gamma)} + \alpha_1\,(y_M^{(\gamma-1)} - \hat{y}^{(\gamma-1)}) + \cdots + \alpha_{\gamma}\,(y_M - y) \tag{3.22}$$

where $\hat{y}^{(i)}$ stands for the estimate $L_{\hat{f}}^i h$ of $y^{(i)}$.

The overall adaptive control law then is

$$u = \frac{1}{\widehat{L_g L_f^{\gamma-1} h}}\,\bigl(-\widehat{L_f^{\gamma} h} + \hat{v}\bigr). \tag{3.23}$$

Using this yields the error equation (with $\Phi := \theta - \hat{\theta}$ representing the parameter error)

$$e^{(\gamma)} + \alpha_1 e^{(\gamma-1)} + \cdots + \alpha_{\gamma} e = \Phi^T W_1 + \Phi^T W_2. \tag{3.24}$$

The two terms on the right-hand side arise, respectively, from the mismatch between the ideal linearizing law and the actual linearizing law, and from the mismatch between the ideal tracking control $v$ and the actual tracking control $\hat{v}$. For definiteness, consider the case that $\gamma = 2$ and $n_1 = n_2 = 1$. Then, with $\theta^T = [\theta^1, (\theta^1)^2, \theta^2\theta^1]$, we get

$$W_1^T = [\,0,\; L_{f_1}^2 h,\; (L_{g_1} L_{f_1} h)\,u\,] \tag{3.25}$$

and

$$W_2^T = [\,\alpha_1 L_{f_1} h,\; 0,\; 0\,]. \tag{3.26}$$

Note that $W_1$ and $W_2$ can be added to get a new regressor vector $W$. It is of interest to note that $\theta^2$ cannot be explicitly identified in this case since the regressor multiplying it is zero. Also note that $W$ is a function of $x$, $y_M$, and $\hat{\theta}$. Note that terms involving only $\theta^2$, or indeed any products of the $\theta^2$ alone, are absent and so may be dropped from the vector $\theta$.

We keep the form (3.24) of the error equation. Note that $s^{\gamma} + \alpha_1 s^{\gamma-1} + \cdots + \alpha_{\gamma}$ is Hurwitz by the choice of the tracking control. For the purposes of adaptation, we need a signal of the form

$$e_1 = \beta_1 e^{(\gamma-1)} + \cdots + \beta_{\gamma} e \tag{3.27}$$

with the transfer function

$$\frac{\beta_1 s^{\gamma-1} + \cdots + \beta_{\gamma}}{s^{\gamma} + \alpha_1 s^{\gamma-1} + \cdots + \alpha_{\gamma}} \tag{3.28}$$

strictly positive real. Indeed, if such a signal $e_1$ were measurable, the basic tracking theorem would follow immediately from arguments similar to the linear arguments. The difficulty with constructing the signal (3.27) is that $\dot{e}, e^{(2)}, \ldots$ are not measurable since

$$\dot{e} = \dot{y}_M - L_f h, \qquad \ddot{e} = \ddot{y}_M - L_f^2 h,$$

and so on, with the $L_f^i h$ not explicitly available.

Motivated by the linear case, where a so-called augmented error scheme is necessitated, we define the augmented error. Some notation is needed at this point. Define

$$M(s) := \frac{1}{s^{\gamma} + \alpha_1 s^{\gamma-1} + \cdots + \alpha_{\gamma}}. \tag{3.29}$$

Then the error equation (3.24) may be written as

$$e = M(s)\cdot\Phi^T W \tag{3.30}$$

with the convention (standard in the adaptive control literature) that the hybrid notation (3.30) refers to the convolution between the inverse Laplace transform of $M(s)$ and $\Phi^T W$. Also, the exponentially decaying initial condition terms are dropped since they do not alter the stability proof (for these points and a review of linear adaptive control, we refer to Sastry and Bodson [19], or the survey paper Sastry [21]). Define the augmented error

$$e_1 = e - \bigl(\hat{\theta}^T M(s) W - M(s)\,\hat{\theta}^T W\bigr). \tag{3.31}$$

Note that the last two terms are not equal and refer, respectively, to each component of $W$ being filtered by $M(s)$ before being multiplied by $\hat{\theta}$, and to the filtering of $\hat{\theta}^T W$ by $M(s)$. If $\hat{\theta}$ were indeed constant they would be identical. Since the true $\theta$ is constant, this observation enables us to rewrite

$$e_1 = e + \bigl(\Phi^T M(s) W - M(s)\,\Phi^T W\bigr). \tag{3.32}$$

Note that $e_1$ in the form (3.31) can be obtained from measurable signals, unlike (3.32), since $\Phi$ is not available. However, (3.32) is critical to our analysis, since we may use (3.30) in (3.32) to get

$$e_1 = \Phi^T M(s) W. \tag{3.33}$$

For convenience, we will denote

$$\xi := M(s)\,W \in \mathbb{R}^k. \tag{3.34}$$

The error equation (3.33) is key to the choice of the identification algorithm. Here is one choice of parameter update law:

$$\dot{\Phi} = -\frac{e_1\,\xi}{1 + \xi^T \xi}. \tag{3.35}$$

Equation (3.35) is referred to as a normalized gradient-type algorithm [unlike the unnormalized update law (3.13)]. Some properties of $\Phi$, $e_1$ follow immediately, with no assumptions on the boundedness of $\xi$.
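Implemented digitally, the augmented error (3.31) and the normalized gradient law (3.35) amount to running a few copies of the stable filter $M(s)$ and one division per step. The class below is a sketch under assumptions (forward-Euler discretization, $\gamma = 2$, hypothetical coefficients), not a prescription from the paper.

```python
# Sketch of the augmented-error / normalized-gradient machinery (3.31)-(3.35).
# M(s) = 1/(s^2 + a1 s + a2) is realized in companion form and applied componentwise.
import numpy as np

a1, a2 = 2.0, 1.0                                  # s^2 + a1 s + a2, Hurwitz
A_M = np.array([[0.0, 1.0], [-a2, -a1]])           # companion realization of M(s)
b_M = np.array([0.0, 1.0])
c_M = np.array([1.0, 0.0])

class NormalizedGradientIdentifier:
    def __init__(self, k, dt):
        self.theta_hat = np.zeros(k)               # parameter estimates
        self.Xw = np.zeros((k, 2))                 # filter states for xi = M(s) W
        self.xs = np.zeros(2)                      # filter state for M(s)[theta_hat^T W]
        self.dt = dt

    def step(self, W, e):
        """W: regressor from (3.24); e: measured tracking error y - y_M."""
        W = np.asarray(W, dtype=float)
        xi = self.Xw @ c_M                         # xi = M(s) W              (3.34)
        filt_thW = c_M @ self.xs                   # M(s)[theta_hat^T W]
        e1 = e - (self.theta_hat @ xi - filt_thW)  # augmented error           (3.31)
        # Phi = theta - theta_hat and Phidot = -e1*xi/(1+xi'xi), so theta_hat rises by +e1*xi/(1+xi'xi)
        self.theta_hat += self.dt * e1 * xi / (1.0 + xi @ xi)                 # (3.35)
        # propagate the stable filters (forward Euler)
        self.Xw += self.dt * (self.Xw @ A_M.T + np.outer(W, b_M))
        self.xs += self.dt * (A_M @ self.xs + b_M * (self.theta_hat @ W))
        return e1
```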
In what follows we will use the following notation.
i) $\beta$ is a generic $L_2 \cap L_\infty$ function which goes to zero as $t \to \infty$.
ii) $\gamma$ is a generic $L_2 \cap L_\infty$ function.
iii) $K$ is a (large) bound.
iv) $\|z\|_t$ will refer to the norm $\sup_{\tau \le t} |z(\tau)|$, the truncated $L_\infty$ norm.

Proposition 3.2 (Properties of the Identifier): Consider the error equation

$$e_1 = \Phi^T \xi \tag{3.36}$$

with the update law (3.35). Then

$$\Phi \in L_\infty, \quad \dot{\Phi} \in L_2 \cap L_\infty, \quad \text{and} \quad |\Phi^T \xi(t)| \le \gamma\,(1 + \|\xi\|_t) \;\;\forall t. \tag{3.37}$$

Proof: Consider the Lyapunov function

$$V(\Phi) = \Phi^T \Phi.$$

Then we have

$$\dot{V} = -\frac{2\,e_1^2}{1 + \xi^T \xi} \le 0 \tag{3.38}$$

so that we get that $\Phi \in L_\infty$. Since $\int_0^\infty (-\dot{V})\,dt < \infty$, we also have that $e_1/(1 + \xi^T \xi)^{1/2} \in L_2$. Further, since

$$|\dot{\Phi}| = \frac{|e_1|\,|\xi|}{1 + \xi^T \xi} = \frac{|\Phi^T \xi|\,|\xi|}{1 + \xi^T \xi} \le |\Phi| \tag{3.39}$$

we have that $\dot{\Phi} \in L_\infty$. Also, since

$$|\dot{\Phi}|^2 = \frac{e_1^2}{1 + \xi^T \xi}\cdot\frac{\xi^T \xi}{1 + \xi^T \xi} \tag{3.40}$$

we see that $\dot{\Phi} \in L_2$ (the first term in (3.40) is integrable, see

(3.38) above, and the second bounded). Finally, define

$$\gamma(t) := \frac{\Phi^T \xi(t)}{1 + \|\xi\|_t}$$

and note that

$$\gamma(t) = \frac{e_1}{(1 + \xi^T \xi)^{1/2}}\cdot\frac{(1 + \xi^T \xi)^{1/2}}{1 + \|\xi\|_t}.$$

The first term is in $L_2 \cap L_\infty$ and the second is bounded. Thus, $\gamma$ is indeed in $L_2 \cap L_\infty$. Hence, (3.37) follows.

Remarks: The conclusions of Proposition 3.2 are generic identifier properties. Other identifiers, such as the normalized least-squares identifier, also yield these properties; for details, see Sastry and Bodson [19].

We are now ready to state and prove the main theorem.

Theorem 3.3 (Basic Tracking Theorem for SISO Systems of Relative Degree Greater than One): Consider the control law of (3.22), (3.23) applied to an exponentially minimum-phase nonlinear system with parameter uncertainty as given in (3.1), (3.2). If $y_M, \dot{y}_M, \ldots, y_M^{(\gamma-1)}$ are bounded, $\widehat{L_g L_f^{\gamma-1} h}$ is bounded away from zero, $f$, $g$, $h$, $L_f^{\gamma} h$, $L_g L_f^{\gamma-1} h$ are Lipschitz continuous functions of $x$, and $W(x, \hat{\theta})$ has bounded derivatives in $x$, $\hat{\theta}$, then the parameter update law (3.35), with $\xi = M(s)\,W$ as in (3.34), yields bounded tracking (i.e., $x$ is bounded and $y \to y_M$ as $t \to \infty$).

Proof: The proof uses three technical lemmas proven in Sastry-Bodson [19] and summarized in the Appendix.

Step 1 (Bound on the Error Augmentation): By the Swapping Lemma (Lemma 3 of the Appendix) with $M$ (playing the role of $H$) $= c(sI - A)^{-1} b$,

$$\Phi^T M(s) W - M(s)\,\Phi^T W = -c(sI - A)^{-1}\bigl\{[(sI - A)^{-1} b\,W^T]\,\dot{\Phi}\bigr\}. \tag{3.41}$$

Using the fact that $\dot{\Phi} \in L_2$ and that $(sI - A)^{-1} b$ is stable (since $M$ is stable), we get

$$\bigl|[(sI - A)^{-1} b\,W^T]\,\dot{\Phi}\bigr| \le \gamma\,\|W\|_t + \gamma. \tag{3.42}$$

Now, using Lemma 2 of the Appendix and the fact that $c(sI - A)^{-1}$ is strictly proper, we get

$$\bigl|\Phi^T M(s) W - M(s)\,\Phi^T W\bigr| \le \beta\,\|W\|_t + \beta. \tag{3.43}$$

Step 2 (Regularity of $W$, $\Phi^T W$): Note that the differential equation for

$$z_1 = (y, \dot{y}, \ldots, y^{(\gamma-1)})^T$$

is, from (3.24),

$$\dot{z}_1 = A_c z_1 + b_c\,\bigl(\Phi^T W + y_M^{(\gamma)} + \alpha_1 y_M^{(\gamma-1)} + \cdots + \alpha_{\gamma} y_M\bigr) \tag{3.44}$$

with $A_c$ the (Hurwitz) companion matrix associated with $s^{\gamma} + \alpha_1 s^{\gamma-1} + \cdots + \alpha_{\gamma}$ and $b_c = (0, \ldots, 0, 1)^T$. Since $\Phi$ is bounded and $y_M, \ldots, y_M^{(\gamma-1)}$ are bounded by hypothesis, and the $s^i M(s)$ are all proper stable transfer functions, we have that

$$\|z_1\|_t \le K\,\|W\|_t + K. \tag{3.45}$$

Using (3.45) in the exponentially minimum-phase zero dynamics

$$\dot{z}_2 = \psi(z_1, z_2), \tag{3.46}$$

combining (3.45), (3.46), and noting that $x$ is a diffeomorphism of $z_1$, $z_2$, we see that

$$\|x\|_t \le K\,\|W\|_t + K \tag{3.47}$$

and

$$\|\dot{x}\|_t \le K\,\|W\|_t + K. \tag{3.48}$$

Using the facts that $\|\partial W/\partial x\|$ and $\|\partial W/\partial \hat{\theta}\|$ are bounded, together with (3.48), we get

$$\|\dot{W}\|_t \le K\,\|W\|_t + K; \tag{3.49}$$

thus $W$ is regular, and $\xi = M(s)\,W$ is regular as well (since $M$ is stable). A similar calculation shows $\Phi^T W$ to be regular as well. For, consider

$$\frac{d}{dt}(\Phi^T W) = \dot{\Phi}^T W + \Phi^T \dot{W}. \tag{3.50}$$

Using (3.49) and $\Phi, \dot{\Phi} \in L_\infty$, we get

$$\Bigl|\frac{d}{dt}(\Phi^T W)\Bigr| \le K\,\|W\|_t + K. \tag{3.51}$$

But from (3.44), (3.47),

$$\|x\|_t \le K\,\|\Phi^T W\|_t + K \tag{3.52}$$

so that

$$\|W\|_t \le K\,\|\Phi^T W\|_t + K. \tag{3.53}$$

Combining (3.53) with (3.51) yields the regularity of $\Phi^T W$. From the regularity of $\xi$, $\Phi^T W$, one can establish that $\Phi^T \xi/(1 + \|\xi\|_t)$ has bounded derivative and so is uniformly continuous. Since, by (3.37), it is also in $L_2 \cap L_\infty$, we see that it in fact goes to zero as $t \to \infty$ (a uniformly continuous $L_2$ function tends to zero as $t \to \infty$). Thus,

$$|\Phi^T \xi(t)| \le \beta\,(1 + \|\xi\|_t). \tag{3.54}$$

Step 3 (Final Estimates): Note that

$$e = e_1 - \bigl(\Phi^T M(s) W - M(s)\,\Phi^T W\bigr)$$

is the equation relating the true output error to the augmented error. Using (3.43) we get

$$|e| \le |e_1| + \beta\,\|W\|_t + \beta.$$

Using (3.53) we get

$$|e| \le |e_1| + \beta\,\|\Phi^T W\|_t + \beta. \tag{3.55}$$

Apply the BOBI lemma (Lemma 1 of the Appendix) to

$$e = M(s)\,\Phi^T W,$$

along with the established regularity of $\Phi^T W$, to get

$$\|\Phi^T W\|_t \le K\,\|e\|_t + K. \tag{3.56}$$

Using (3.56) in (3.55) we get

$$|e| \le |e_1| + \beta\,\|e\|_t + \beta. \tag{3.57}$$

Using (3.54) for $e_1 = \Phi^T \xi$, we get

$$|e| \le \beta\,\|e\|_t + \beta + \beta\,\|\xi\|_t. \tag{3.58}$$

$\xi$ is related to $W$ by stable filtering. Hence,

$$\|\xi\|_t \le K\,\|W\|_t + K. \tag{3.59}$$

Using the estimate (3.53) followed by (3.56), we see that (3.58) may be written as

$$|e| \le \beta\,\|e\|_t + \beta. \tag{3.60}$$

Since $\beta \to 0$ as $t \to \infty$, we see from (3.60) that $e$ goes to zero as $t \to \infty$. This in turn can be verified to yield bounded $W$, $x$, etc.

Remarks:
1) The parameter update law appears not to take into account prior parameter information, such as the relations between $\theta_i$, $\theta_j$, and their product $\theta_i\theta_j$, and so on. It is important, however, to note that the best estimate of $\theta_i\theta_j$ in the transient period may not be $\hat{\theta}_i\hat{\theta}_j$. If, however, the parameters are close to their correct values, such information is useful. But, since parameter convergence is not guaranteed in the proof of Theorem 3.3, it may not be a good idea to constrain the estimate of $\theta_i\theta_j$ to be close to $\hat{\theta}_i\hat{\theta}_j$. The preceding remarks are not designed, however, to completely allay one's concerns that the number of parameters increases very rapidly with $\gamma$.
2) In several problems it turns out that $L_f^{\gamma} h$ and $L_g L_f^{\gamma-1} h$ depend linearly on some unknown parameters. It is then clear that the development of the previous theorem can be carried through.
3) Thus far, we have only assumed parameter uncertainty in $f$ and $g$, but not in $h$. It is not hard to see that if $h$ depends linearly on unknown parameters, then we can mimic the aforementioned procedure quite easily.
4) Parameter convergence can be guaranteed in Theorem 3.3 if $W$ is sufficiently rich in the sense stated after Theorem 3.1.
5) If some of the $\theta_i$'s are known, they can be replaced in the algorithm by their true values and adaptation for them turned off.
6) The parameter update law is a state feedback law, as before.

C. Adaptive Control of MIMO Systems Decouplable by Static State Feedback

From the preceding discussion it is easy to see how the linearizing, decoupling static state feedback control law for square (minimum-phase) systems can be made adaptive, namely by replacing the control law of (2.12) by

$$u = -\hat{A}(x)^{-1} \begin{bmatrix} \widehat{L_f^{\gamma_1} h_1} \\ \vdots \\ \widehat{L_f^{\gamma_p} h_p} \end{bmatrix} + \hat{A}(x)^{-1}\,v.$$

Note that if $A(x)$ is invertible, then the linearizing control law is also the decoupling control law. Thus, if $A(x)$ and the $L_f^{\gamma_i} h_i$ depend linearly on certain unknown parameters, the schemes of the previous sections (those of Section III-A if $\gamma_1 = \gamma_2 = \cdots = \gamma_p = 1$, and those of Section III-B in the other cases) can be readily adapted. The details are more notationally cumbersome than insightful. We refer the interested reader to Craig, Hsu, and Sastry [5] for the application of these ideas to the adaptive control of rigid-link robot manipulators with $\gamma_i = 2$ for all $i = 1, \ldots, p$.

We end this section with two remarks.
1) Adaptive control of square multivariable nonlinear systems decouplable by static state feedback is straightforward; it is, however, important to have the linearizing control depend linearly on the parameters.
2) Adaptive control of nonlinear systems not decouplable by static state feedback is not easy or obvious; some of the reasons are also discussed in the next section.
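When $A(x)$ and the $L_f^{\gamma_i} h_i$ are linear in the unknown parameters, the adaptive decoupling law above is obtained simply by rebuilding the estimates from $\hat{\theta}$ at every state. The helper below is a hypothetical sketch of that construction (the basis functions and the singularity guard are assumptions, not the paper's notation); the update of $\hat{\theta}$ itself would follow Section III-A or III-B.

```python
# Sketch: adaptive decoupling control with A(x) and b(x) linear in theta_hat.
import numpy as np

def adaptive_decoupling_control(theta_hat, A_basis, b_basis, x, v, eps=1e-6):
    """A_basis[i](x), b_basis[i](x): known basis matrices/vectors multiplied by theta_hat[i]."""
    A_hat = sum(th * np.asarray(Ai(x), dtype=float) for th, Ai in zip(theta_hat, A_basis))
    b_hat = sum(th * np.asarray(bi(x), dtype=float) for th, bi in zip(theta_hat, b_basis))
    # during adaptation A_hat may pass near singularity even if the true A(x) does not
    if abs(np.linalg.det(A_hat)) < eps:
        raise ValueError("estimated decoupling matrix is nearly singular")
    return np.linalg.solve(A_hat, v - b_hat)
```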
IV. CONCLUDING REMARKS

We have presented some initial results on the use of parameter adaptive control for obtaining asymptotically exact cancellation in linearizing control laws for a class of continuous-time systems decouplable by static state feedback. The extension to continuous-time systems not decouplable by static state feedback is not as obvious, for two reasons.
i) The different matrices involved in the development of the control laws in this case depend in an extremely complicated fashion on the unknown parameters.
ii) While the true $A(x)$ may have rank $q < p$, its estimate $\hat{A}(x)$ during the course of adaptation may well be full rank, in which case the procedure of Section II-A cannot be followed.

The discrete-time and sampled-data cases are also not obvious, for similar reasons.
i) The nonadaptive theory, as discussed in Monaco, Normand-Cyrot, and Stornelli [12], is fairly complicated since

$$y_{k+1} = h \circ \bigl(f(x_k) + g(x_k)\,u_k\bigr) \tag{4.1}$$

is not linear in $u_k$ in the discrete-time case, and a formal series for (4.1) in $u_k$ needs to be obtained (and inverted!) for the linearization. Consequently, the parametric dependence of the control law is complex.
ii) The notions of zero dynamics are not as yet completely developed. Further, even in the linear case, the zeros of a sampled system can be outside the unit disk even when the continuous-time system is minimum phase and the sampling is fast enough (Astrom, Hagander, and Sternby [1]).

Thus, we feel that the present contribution is only a first step in the development of a comprehensive theory of adaptive control for linearizable systems.

APPENDIX
TECHNICAL LEMMAS

We state three important lemmas which are proven in Sastry-Bodson [19]. The notation $\gamma$, $\beta$, $K$ from the text is used throughout the Appendix.

Lemma 1 (BOBI Stability): Let $y = H(s)u$ be the output of a proper, minimum-phase linear system with input $u$. If $u, \dot{u} \in L_{\infty e}$ and $u$ is regular, i.e.,

$$|\dot{u}(t)| \le K\,\|u\|_t + K,$$

then

$$\|u\|_t \le K\,\|y\|_t + K.$$

Remark: If the input is regular and the plant is minimum phase, then bounded system output implies bounded system input.

Lemma 2: Let $y = H(s)u$ be the output of a proper stable system $H(s)$ driven by $u$. If

$$|u(t)| \le \gamma\,\|v\|_t + \gamma$$

for some signal $v$, then

$$\|y\|_t \le \gamma\,\|v\|_t + \gamma.$$

If, in addition, $H$ is strictly proper, then

$$\|y\|_t \le \beta\,\|v\|_t + \beta.$$

Remark: This is a slight generalization of several standard results; note that if $H$ is strictly proper, we get as the bound an $L_2 \cap L_\infty$ function which goes to zero.

Lemma 3 (Swapping Lemma): If $H(s) = c(sI - A)^{-1} b + d$ is a minimal realization of a proper transfer function, then

$$H(s)\,(W^T \Phi) - (H(s) W^T)\,\Phi = -c(sI - A)^{-1}\bigl\{[(sI - A)^{-1} b\,W^T]\,\dot{\Phi}\bigr\}.$$

ACKNOWLEDGMENT

The authors would like to acknowledge several insightful discussions with Dr. D. Normand-Cyrot of the Laboratoire des Signaux et Systèmes, Gif-sur-Yvette, France.

REFERENCES

[1] K. J. Astrom, P. Hagander, and J. Sternby, "Zeros of sampled systems," Automatica, vol. 20, pp. 31-38, 1985.
[2] M. Bodson and S. Sastry, "Small signal I/O stability of nonlinear control systems," unpublished memo UCB/ERL M84/70, Univ. California, Berkeley, 1984.
[3] C. Byrnes and A. Isidori, "A frequency domain philosophy for nonlinear systems with applications to stabilization and adaptive control," in Proc. IEEE Conf. Decision Contr., Las Vegas, NV, 1984, pp. 1569-1573.
[4] D. Claude, "Everything you always wanted to know about linearization," in Algebraic and Geometric Methods in Nonlinear Control Theory, Fliess and Hazewinkel, Eds. Dordrecht, The Netherlands: Reidel, 1986.
[5] J. Craig, P. Hsu, and S. Sastry, "Adaptive control of mechanical manipulators," Int. J. Robotics Res., vol. 6, Summer 1987.
[6] J. Descusse and C. H. Moog, "Decoupling with dynamic compensation for strongly invertible affine nonlinear systems," Int. J. Contr., vol. 42, pp. 1387-1398, 1985.
[7] W. Hahn, Stability of Motion. New York: Springer-Verlag, 1967.
[8] A. Isidori, "Control of nonlinear systems via dynamic state feedback," in Algebraic and Geometric Methods in Nonlinear Control Theory. Dordrecht, The Netherlands: Reidel, 1986.
[9] A. Isidori and C. H. Moog, "On the nonlinear equivalent of the notion of transmission zeros," in Modeling and Adaptive Control, C. Byrnes and A. Kurzhanski, Eds. (Lecture Notes in Control and Information Sciences). New York: Springer-Verlag, to be published.
[10] G. Meyer and L. Cicolani, "Applications of nonlinear system inverses to automatic flight control design: System concepts and flight evaluations," in AGARDograph 251 on Theory and Applications of Optimal Control in Aerospace Systems, P. Kent, Ed. NATO, 1980.
[11] S. Monaco and D. Normand-Cyrot, "Nonlinear systems in discrete time," in Algebraic and Geometric Methods in Nonlinear Control Theory, Fliess and Hazewinkel, Eds. Dordrecht, The Netherlands: Reidel, 1986.
[12] S. Monaco, D. Normand-Cyrot, and S. Stornelli, "On the linearizing feedback in nonlinear sampled data control schemes," in Proc. IEEE Conf. Decision Contr., Athens, 1986, pp. 2056-2060.
[13] K. Nam and A. Arapostathis, "A model reference adaptive control scheme for pure feedback nonlinear systems," preprint, Univ. Texas, Austin, Dec. 1986.
[14] R. Marino and S. Nicosia, "Singular perturbation techniques in the adaptive control of elastic robots," in Proc. IFAC-SYROCO Workshop, Barcelona, Nov. 1985, pp. 11-16.
[15] S. Nicosia and P. Tomei, "Model reference adaptive control for rigid robots," Automatica, vol. 20, pp. 635-644, 1984.
[16] S. N. Singh and W. J. Rugh, "Decoupling in a class of nonlinear systems by state variable feedback," Trans. ASME J. Dyn. Syst. Measur. Contr., vol. 94, pp. 323-324, 1972.
[17] E. Freund, "The structure of decoupled nonlinear systems," Int. J. Contr., vol. 21, pp. 651-654, 1975.
[18] A. Isidori, A. J. Krener, C. Gori-Giorgi, and S. Monaco, "Nonlinear decoupling via feedback: A differential geometric approach," IEEE Trans. Automat. Contr., vol. AC-26, pp. 331-345, 1981.
[19] S. S. Sastry and M. Bodson, Adaptive Control: Stability, Convergence and Robustness. Englewood Cliffs, NJ: Prentice-Hall, 1989.
[20] D. Taylor, P. Kokotovic, and R. Marino, "Nonlinear adaptive control with unmodelled dynamics," preprint, Univ. Illinois, Dec. 1987.
[21] S. S. Sastry, "Model reference adaptive control: Stability, parameter convergence and robustness," IMA J. Math. Contr. Inform., vol. 1, pp. 27-66, 1984.
[22] W. A. Porter, "Diagonalization and inverses for nonlinear systems," Int. J. Contr., vol. 21, pp. 67-76, 1970.
[23] R. W. Brockett, "Feedback invariants for nonlinear systems," in Proc. IFAC World Congress, Helsinki, 1978, pp. 1115-1120.
[24] B. Jakubczyk and W. Respondek, "On the linearization of control systems," Bull. Acad. Polonaise Sci., Ser. Sci. Math., vol. 28, pp. 517-522, 1980.
[25] L. R. Hunt, R. Su, and G. Meyer, "Design for multi-input nonlinear systems," in Differential Geometric Control Theory, R. W. Brockett, R. S. Millman, and H. Sussmann, Eds. Boston: Birkhauser, 1983, pp. 268-298.
[26] A. Isidori, Nonlinear Control Systems: An Introduction (Lecture Notes in Control and Information Sciences, Vol. 72). New York: Springer-Verlag, 1985.
[27] B. Jakubczyk, "Feedback linearization of discrete time systems," Syst. Contr. Lett., vol. 9, pp. 411-416, 1987.
[28] C. Byrnes and A. Isidori, "Local stabilization of minimum phase nonlinear systems," Syst. Contr. Lett., vol. 10, 1987.
[29] C. Byrnes and A. Isidori, "Analysis and design of nonlinear feedback systems. Part I: Zero dynamics and global normal forms," preprint.

S. Shankar Sastry (S'79-M'80), for a photograph and biography, see p. 404 of the April 1989 issue of this TRANSACTIONS.

Alberto Isidori (M'80-SM'85-F'87) was born in Rapallo, Italy, on January 24, 1942. He received the doctor degree in electrical engineering from the University of Rome, Rome, Italy, in 1965.
From 1967 to 1975 he held various teaching and research positions at the University of Rome, where since 1975 he has been Professor of Automatic Control. He has held visiting positions at the University of Florida, Washington University, the University of California, Davis, Arizona State University, the University of Illinois, and the University of California, Berkeley. His research interests include systems and control theory, with emphasis on nonlinear feedback systems, geometric control theory, and stabilization.
Dr. Isidori is an Associate Editor of Applied Stochastic Models and Data Analysis and of Mathematics of Control, Signals and Systems. He is the author of the Lecture Notes Nonlinear Control Systems: An Introduction. He is also Vice Chairman of IFAC's Technical Committee on Mathematics of Control. In 1981 he and his coauthors received the IEEE Control Systems Society's Outstanding Paper Award.
