IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. AC-24, NO. 1, FEBRUARY 1979
I. INTRODUCTION

x(t+1) = A_0 x(t) + B_0 u(t) + w(t)
y(t) = C_0 x(t) + e(t)                                                      (2.1)

where u(t), y(t), and x(t) are vectors of dimensions n_u, n_y, and n_x, respectively. The sequences {w(t)} and {e(t)} consist of independent random vectors with zero means and covariances

III. THE ALGORITHM FOR A GENERAL LINEAR STATE MODEL

A. The Model

In order to determine the system that has generated the observed input-output data, we assume the following
where

h(z(t)) = C(θ(t)) x(t).                                                     (3.6)

We are consequently faced with a nonlinear filtering problem, and if this is attacked by the EKF (1.2)-(1.5) we obtain

Introduce also the natural block structure
Note that (3.19), with the aid of (3.17), can also be written as

P_3(t+1) = [{P_3(t) - L(t) S_2 L^T(t)}^{-1} + δI]^{-1}                      (3.20)

for some small positive δ. Notice that for small δ, (3.20) is approximately given by

P_3(t+1) = P_3(t) - L(t) S_2 L^T(t) - δ P_3(t) P_3(t)                       (3.20')

where

η(t;θ) = [C(θ) W(t;θ) + D(θ, x̂(t;θ))]^T.                                    (4.5)

Again, by comparing (4.5) with (3.17), having in mind that P_2 ≈ W(t;θ) P_3, we see that the L-process that (3.17) would produce for constant θ and P_3 would be P_3 η(t;θ) S^{-1}(θ). Equations (4.1)-(4.5) now define the processes x̂(t;θ) and W(t;θ) uniquely, for the given θ, from y(s), u(s); s ≤ t.
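The small-δ approximation relating (3.20) to (3.20') can be checked numerically: for a positive definite P and small gain L, the exact regularized update agrees with the first-order expansion up to terms of order δ². The matrices below are random stand-ins, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
P = M @ M.T + 3.0 * np.eye(3)       # positive definite stand-in for P3(t)
L = 0.05 * rng.standard_normal((3, 1))
S = np.array([[2.0]])                # stand-in for S2
delta = 1e-5

Q = P - L @ S @ L.T                                           # bracketed quantity in (3.20)
exact = np.linalg.inv(np.linalg.inv(Q) + delta * np.eye(3))   # right-hand side of (3.20)
approx = Q - delta * (Q @ Q)                                  # first-order expansion in delta;
                                                              # matches (3.20') since Q ~ P3 when L(t) is small
err = np.max(np.abs(exact - approx))
print(err)
```

The residual err is of order δ²‖Q‖³, several orders of magnitude below the retained terms.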
(Take the initial conditions x̂(0;θ), W(0;θ) such that the processes become stationary.) Since, according to (3.15), it is

L(t)[y(t) - C_t x̂(t)]

that updates θ̂, and since L(t) ≈ P_3 η(t;θ) S^{-1}(θ), it seems reasonable that this update

The convergence analysis is thereby effectively reduced to stability analysis of the differential equation (4.8). The next section is devoted to such analysis.
The local convergence properties of the algorithm are, according to Lemma 4.1:2), associated with stability of the linearized differential equation (4.8) around the stationary point in question. In [18] the linearization is carried out. There does not seem to be any guarantee that the linearized equation should be stable for all systems. Therefore, the possible lack of convergence of the EKF method does not necessarily have to be caused by large parameter deviations from the true values, as is sometimes claimed.

Evaluating the expected value using complex integrals,

f(a) = E ε(t;a) η(t;a)                                                      (5.8)

where the integrand involves the factors 1 - a/z and 1 - (a - K(a))/z.      (5.9)
x(t+1) = a_0 x(t) + w_0(t)
y(t) = x(t) + e_0(t)                                                        (5.4)

where x and y are scalars and
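The mechanics of the EKF on this scalar example can be sketched directly: augment the state with the unknown a and linearize. The noise variances, the true a_0, and the projection bound below are illustrative assumptions; the point of this section is precisely that the resulting estimate may be biased or fail to converge, so the sketch only shows the mechanics.

```python
import numpy as np

rng = np.random.default_rng(2)
a0, r1, r2, T = 0.5, 1.0, 1.0, 5000   # true parameter and noise variances (illustrative)

x_true = 0.0
z = np.array([0.0, 0.0])              # augmented state estimate (xhat, ahat)
P = np.eye(2)

for t in range(T):
    y = x_true + np.sqrt(r2) * rng.standard_normal()
    # measurement update, C = [1 0]
    S = P[0, 0] + r2
    L = P[:, 0] / S
    z = z + L * (y - z[0])
    P = P - np.outer(L, P[0, :])
    # time update; Jacobian of the map (x, a) -> (a*x, a)
    xh, ah = z
    ah = np.clip(ah, -0.99, 0.99)     # crude projection into the stability region
    F = np.array([[ah, xh], [0.0, 1.0]])
    z = np.array([ah * xh, ah])
    P = F @ P @ F.T + np.diag([r1, 0.0])
    x_true = a0 * x_true + np.sqrt(r1) * rng.standard_normal()

print(z)
```

Note that no term couples a change in â to the implicit Kalman gain, which is exactly the missing coupling analyzed below.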
The integral (5.8) with (5.9) is easy to evaluate using residue calculus, which gives

(d/dτ) a(τ) = R^{-1}(τ) S^{-1}(a(τ)) f(a(τ)).                               (5.11)

However, since a(τ) is scalar and R^{-1}(τ) S^{-1}(a(τ)) is a positive scalar, the stability properties of (5.11) are entirely determined by the sign changes of f(a). Sketches of this function are shown in Fig. 1 for some choices of a_0 and λ.
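The mechanism behind (5.11) can be illustrated numerically: for a scalar ODE da/dτ = f(a), trajectories converge to a zero crossing of f with negative slope, so the sign pattern of f determines the possible limits. The f below is an illustrative stand-in with one such crossing, not the f(a) of the paper (which depends on a_0 and λ).

```python
import numpy as np

def f(a):
    # Illustrative right-hand side with a single zero crossing at a = 0.4
    # (negative slope there); NOT the f(a) derived in the paper.
    return -(a - 0.4) * (1.0 + a * a)

a, dt = -0.8, 0.01
for _ in range(5000):
    a += dt * f(a)        # Euler step of da/dtau = f(a); the positive
                          # scalar gain R^{-1} S^{-1} is absorbed into dt

print(a)
```

Starting anywhere, the trajectory settles at the crossing a = 0.4; a sign pattern with no stable crossing near the true value would instead produce the divergence discussed in the text.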
Several conclusions can be drawn from this figure. Remember that the convergence properties of the EKF
VII. A MODIFIED ALGORITHM

Guided by the results of the previous section, it is natural to interpret the EKF as an attempt to minimize the expected value of the squared residuals associated with model θ. Let us pursue this idea further. Let A(θ), K(θ), x̂(t;θ), and ε(t;θ) be given by (4.1)-(4.3). A suitable criterion to seek to minimize would be

V(θ) = E |ε(t;θ)|^2.                                                        (7.1)

[Expectation is here over {e(t), u(t), w(t)}.] A reasonable adjustment scheme to achieve minimization of (7.1) should be related to the gradient of V(θ). We have, allowing ourselves to interchange differentiation and mathematical expectation,

W(t+1;θ) = [A(θ) - K(θ)C(θ)] W(t;θ) + M(θ, x̂(t;θ), u(t))
           + [(d/dθ) K(θ)] ε(t;θ) - K(θ) D(θ, x̂(t;θ));                      (7.6)

ψ(t;θ) = [C(θ) W(t;θ) + D(θ, x̂(t;θ))]^T.                                    (7.7)

Compare with (4.4), (4.5)! We see that ψ(t;θ) is "almost" equal to η(t;θ). It is just a term [(d/dθ) K(θ)] ε(t;θ) that should be included in M(θ, x̂(t;θ), u(t)) in order to make the EKF algorithm a (stochastic) descent algorithm. Furthermore, the Π^{-1}(θ)-term in (4.6) should be deleted. As a result of this heuristic discussion we might expect improved asymptotic convergence properties of the EKF if (an approximation of) [(d/dθ) K(θ)]|_{θ=θ̂(t)} ε(t), where ε(t) is the current residual, is added to the M_t matrix in the scheme (3.14)-(3.20).
There are several ways to achieve this. One that is quite well-suited for practical computations is treated in the next section. Another would be to calculate the derivative (d/dθ) K(θ) from (4.1) and evaluate it at θ̂(t). Since the gain K(t) is calculated by means of the Riccati equation (3.16), (3.18), it is natural to find an expression for the derivative from the sensitivity equations. Let K_t^(i) be defined by the sensitivity relation (7.8a) associated with (3.16), (3.18).

Denote the n_θ × n_y matrix

-(d/dθ) ε^T(t;θ) = ψ(t;θ).                                                  (7.2)

Then the negative gradient of V(θ) (a column vector) can be written

-(1/2)(d/dθ) V(θ) = E ψ(t;θ) ε(t;θ).

One would prefer that, asymptotically, the parameter values θ̂ be corrected in this direction, or in a modified (e.g., "Newton") negative gradient direction. Compare with (4.6)! The asymptotic direction of modification for
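The derivative of the gain with respect to a parameter can be approximated numerically from the Riccati equation, in the spirit of the sensitivity-equation route mentioned above. A scalar sketch, with illustrative system values (a central finite difference stands in for the analytic sensitivity relation):

```python
import numpy as np

def steady_gain(a, c=1.0, q=1.0, r=1.0, iters=500):
    """Iterate the scalar Riccati recursion for x(t+1) = a x + w, y = c x + e
    to its fixed point and return the stationary Kalman gain K(a)."""
    p = 1.0
    for _ in range(iters):
        k = a * p * c / (c * p * c + r)
        p = a * p * a + q - k * (c * p * c + r) * k
    return a * p * c / (c * p * c + r)

a, h = 0.5, 1e-6
dK_da = (steady_gain(a + h) - steady_gain(a - h)) / (2 * h)   # central difference
print(dK_da)
```

This is the quantity whose omission from the coupling term M is identified as the source of the EKF's convergence problems.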
(1/2)(d^2/dθ^2) V(θ) ≈ E ψ(t;θ) ψ^T(t;θ)

7.1. Then the conclusion of this theorem holds with the function V(θ) replaced by V_1(θ) defined by (7.14), with

ε(t;θ) = [C_0(qI - A_0 + K_0 C_0)^{-1} B_0 - C(θ)(qI - A(θ) + K(θ)C(θ))^{-1} B(θ)] u(t)
         + [C_0(qI - A_0 + K_0 C_0)^{-1} K_0 - C(θ)(qI - A(θ) + K(θ)C(θ))^{-1} K(θ)] y(t)    (8.13)

[q is the forward shift operator].

K_t = K(θ̂(t)). From (3.14)-(3.20') we now obtain the algorithm with

E e(t) e^T(s) = Λ δ_{t,s},   x(0) = 0                                        (8.2)

instead of (3.1), (3.2). [In fact (8.1), (8.2) is just a special case of (3.1), (3.2) with

Q_1(θ) = K(θ) Λ K^T(θ);   Q_12(θ) = K(θ) Λ;   Q_2(θ) = Λ
and Π(θ) = 0.]

If we are going to apply the EKF-idea to the model (8.1), where K(θ) is explicitly parameterized, it is easy to include

(∂/∂θ_i) K(θ)

in the cross-coupling term M as discussed in Section VII. Hence, define

M(θ, x̂, u, ε) = (∂/∂θ) [A(θ) x̂ + B(θ) u + K(θ) ε] |_{θ=θ̂}.                  (8.3)

Introduce for short

M_t = M(θ̂(t), x̂(t), u(t), ε(t))                                             (8.4)

Furthermore, among isolated stationary points of V(θ) only local minima are possible convergence points. □

Proof: The proof is entirely analogous to that of Theorem 6.1.

If we would derive a Newton-type stochastic gradient algorithm for the minimization of V_3(θ) without relying upon the EKF, we would get an algorithm of the following type, cf. [19]-[22]:

x̂(t+1) = A_t x̂(t) + B_t u(t) + K_t ε(t)
ε(t) = y(t) - C_t x̂(t)
θ̂(t+1) = θ̂(t) + R^{-1}(t) ψ(t) Λ^{-1} ε(t)
ψ(t) = W^T(t) C_t^T + D_t^T                                                  (8.14)
W(t+1) = (A_t - K_t C_t) W(t) + M_t - K_t D_t
R(t+1) = R(t) + ψ(t) Λ^{-1} ψ^T(t) + δI.
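A minimal scalar instance of the recursive scheme (8.14) can be written out for the innovations model x(t+1) = a x(t) + k ε(t), y(t) = x(t) + ε(t), with θ = (a, k). The true values, initializations, and the crude projection into the stability region are illustrative assumptions, and Λ = 1 is taken as known.

```python
import numpy as np

rng = np.random.default_rng(3)
a0, k0, T = 0.5, 0.3, 20000

# data from the true innovations model  x+ = a0 x + k0 e,  y = x + e
e = rng.standard_normal(T)
x = 0.0
y = np.empty(T)
for t in range(T):
    y[t] = x + e[t]
    x = a0 * x + k0 * e[t]

a, k = 0.0, 0.0          # theta = (a, k), initial guess
xh = 0.0                 # one-step predictor state
W = np.zeros(2)          # sensitivity d xhat / d theta
R = 10.0 * np.eye(2)
delta = 1e-8

for t in range(T):
    eps = y[t] - xh                              # eps(t) = y(t) - C xhat(t)
    psi = W.copy()                               # psi(t) = W^T C^T   (D = 0 here)
    cand = np.array([a, k]) + np.linalg.solve(R, psi * eps)
    if np.all(np.abs(cand) < 0.95) and abs(cand[0] - cand[1]) < 0.95:
        a, k = cand                              # projection into the stability region
    R = R + np.outer(psi, psi) + delta * np.eye(2)
    W = (a - k) * W + np.array([xh, eps])        # W+ = (A - K C) W + M - K D, M = [xhat, eps]
    xh = a * xh + k * eps

print(a, k)
```

With this data length the estimates settle near (a_0, k_0); the cross-coupling entry ε(t) in M is exactly the gain-derivative term whose absence cripples the plain EKF.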
This could be called a recursive prediction error algorithm. It is easy to see that the differential equation associated with (8.14) is the same as the one associated with (8.5)-(8.10). These two algorithms therefore have the same asymptotic convergence properties. They differ in fact only in the manner in which R^{-1}(t) ψ(t) is calculated, and this difference is of a transient character. A further discussion of this point can be found in [18], [21], [22]. The question of which of the two algorithms exhibits the best transient convergence behavior must be left to simulation studies.

If also the covariance matrix Λ of the innovations is to be estimated, it is natural to parameterize it independently and minimize

V_3(θ, Λ) = (1/2) E ε^T(t;θ) Λ^{-1} ε(t;θ) + (1/2) log det Λ

and introduce

f(θ, Λ) = -(1/2) E (d/dθ) [ε^T(t;θ) Λ^{-1} ε(t;θ)].

With V_3(θ, Λ) as a Lyapunov function for (8.17) we find, along solutions of (8.17),

(d/dτ) V_3 = -(1/2) E (d/dθ)[ε^T(t;θ) Λ^{-1} ε(t;θ)]^T R^{-1} f(θ, Λ)
             - (1/2) E ε^T(t;θ) Λ^{-1} (H(θ) - Λ) Λ^{-1} ε(t;θ)
             + (1/2) tr[Λ^{-1}(H(θ) - Λ)]
           = -f^T(θ, Λ) R^{-1} f(θ, Λ)
             - (1/2) tr[Λ^{-1}(H(θ) - Λ) Λ^{-1}(H(θ) - Λ)]
           ≤ 0.

But it is proved in [24] that this function has only one stationary point, namely a = a_0, k = k_0. Consequently, the modified algorithm yields estimates that converge with probability 1 to the true values. This holds irrespective of the value of λ in (5.5), and we have thus solved the bias problems as well as the convergence problems of Section V.
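When Λ is parameterized independently, the simplest choice in practice is the running sample covariance of the residuals. The recursive form below is a standard stochastic-approximation update assumed here for illustration, not a formula quoted from the paper.

```python
import numpy as np

def update_lambda(Lam, eps, t):
    """Recursive sample covariance of the residuals:
    Lam(t) = Lam(t-1) + (1/t) * (eps eps^T - Lam(t-1))."""
    return Lam + (np.outer(eps, eps) - Lam) / t

rng = np.random.default_rng(4)
true_cov = np.array([[2.0, 0.3],
                     [0.3, 1.0]])       # illustrative innovation covariance
Cfac = np.linalg.cholesky(true_cov)

Lam = np.eye(2)
for t in range(1, 50001):
    eps = Cfac @ rng.standard_normal(2)  # stand-in residual sequence
    Lam = update_lambda(Lam, eps, t)

print(np.max(np.abs(Lam - true_cov)))
```

The 1/t gain mirrors the decreasing gain of the parameter update, so both recursions fit the same differential-equation analysis.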
We may also note that the same result holds for a general ARMA process

y(t) + a_1 y(t-1) + ... + a_n y(t-n) = e(t) + c_1 e(t-1) + ... + c_n e(t-n)   (8.19)

for which we use the state model

           [ -a_1   1   0  ...  0 ]        [ k_1 ]
x(t+1) =   [ -a_2   0   1  ...  0 ] x(t) + [ k_2 ] e(t)                       (8.20)
           [  ...           ...  1 ]        [ ... ]
           [ -a_n   0   0  ...  0 ]        [ k_n ]

where k_i corresponds to c_i - a_i, or

           [ -a_1  -a_2  ...  -a_n ]        [ 1 ]
x(t+1) =   [   1     0   ...    0  ] x(t) + [ 0 ] e(t).                       (8.21)
           [   0     1   ...    0  ]        [...]
           [   0     0   ...  1  0 ]        [ 0 ]

Application of the modified EKF algorithm (8.5)-(8.10) to (8.20) or (8.21) consequently yields convergence almost surely to the true parameter values of (8.19). (The result of [24] shows that all stationary points of V_3(θ), for the model (8.20) or (8.21), give a correct description of (8.19).) This algorithm therefore is very powerful for modeling time series.

IX. CONCLUSIONS

The recursive parameter estimation problem for linear systems is inherently a nonlinear filtering problem, and as pointed out in [12] there are in principle no differences between parameter estimation and state estimation. For the nonlinear filtering problem, approximative techniques have to be used, and as remarked in [12], an essential difficulty with all approximation techniques is to establish convergence. Indeed, much of the work associated with nonlinear filtering concerns divergence problems. For the specific method under discussion here, the convergence analysis has illuminated, in an essential way, the possible causes of divergence and bias.

In short, the reason for divergence can be traced to the fact that the effect on the Kalman gain K of a change in θ is not taken care of. This lack of coupling between K(t) and θ̂ in the algorithm may, as demonstrated in Section V, lead to divergence of the estimates, even for simple cases where only system parameters are unknown. This interpretation is consistent with the observations in practical applications [25] that the behavior of the EKF often is worse when the residuals are large and/or the inputs are small. In such cases the coupling term obviously becomes more important.

Also, for cases where the steady-state Kalman gain does not depend on θ, as for deterministic models, we will have good convergence properties. This was demonstrated in Section VI.

We have also found the remedy for the general case. If we include (an approximation of) a term

[(d/dθ) K(θ)] ε(t)                                                            (9.1)

in the coupling term M, as described in Sections VII and VIII, global convergence results are obtained, and the procedure can be interpreted as a minimization of the prediction error associated with model θ, E|ε(t;θ)|^2. Inclusion of a term like (9.1) is of course particularly simple if the model is given in the innovations form (8.1) with K(θ) explicitly parameterized. However, it can be done also for the general case (3.1), (3.2), as shown in Section VII. In practical applications of the EKF, "manual" adjustments of the noise covariances are often used to make the algorithm work. This is the "tuning of the filter." The inclusion of parameters associated with the Kalman gain can thus be interpreted as automatic tuning of the filter.

Also, the analysis has shown that it is the innovations representation form rather than the original state-space form that should be the basis for the linearization procedure in the EKF.

The convergence analysis of the EKF using the differential equation approach has thus given two different contributions. Results about the asymptotic behavior of the usual EKF algorithm that appear to be new have been obtained and discussed in Sections V and VI. In addition, a modification of the algorithm has been suggested that yields considerably improved convergence properties. The convergence properties of the modified algorithm are in fact equally good as those of off-line prediction error identification methods, e.g., [26]-[28] (and maximum likelihood methods). Notice that in the convergence results of Theorems 6.1-8.1 it is not necessary to assume that the true system can be described within the model parameterization. Convergence to a local minimum of the variance of the associated residuals is still obtained, and this result coincides with the general results on prediction error methods [27]. If an exact description is possible, then it can always be checked whether the local minimum in which the algorithm stops is a global one (i.e., gives the true description of the system) by performing a residual test. For some particular parameterizations it is a priori known that all stationary points indeed are global minima. Among these structures, ARMA models of time series are probably the most important ones.

The discussion has throughout been for the case of estimating parameters that are known to be constant. This is of course inherent in convergence analysis. It is reasonable to assume that the analysis is relevant also for the case of slowly drifting parameters. Then some small process noise would be added to the extended half of the state vector, and the gain L(t) would then not tend to zero but to a "small" value. For the genuine nonlinear filtering problem this situation corresponds to the case where some modes are considerably slower than the other ones.

Finally, it is clear that the EKF and its modified versions have obvious relationships with other suggested recursive parameter estimation schemes [4]. These connections are discussed in [18] and [21]. In fact, the modified algorithm of Section VIII should be regarded as a recursive prediction error (RPE) algorithm, where the way of calculating P_3 is inspired by the EKF. For practical implementations of the algorithms discussed here, it should be noted that in order to achieve acceptable convergence rates, some mechanism that gives a somewhat slower decrease of the gain L(t) must be introduced. Some ways to do this are discussed in [4].

APPENDIX I
PROOF OF LEMMA 4.1

In this Appendix we will have to make frequent references to [13], which is assumed to be available.

In order to verify that (3.14)-(3.20') satisfies the conditions of Theorems 1 and 2 in [13], we shall first give the algorithm in an asymptotic form, where certain terms tending to zero are sorted out.

From the matrix inversion lemma, (3.20) can be rewritten as

P_3^{-1}(t+1) = P_3^{-1}(t) + P_3^{-1}(t) L(t) [S_2^{-1} - L^T(t) P_3^{-1}(t) L(t)]^{-1} L^T(t) P_3^{-1}(t) + δI.

Since the expression within brackets is positive definite, we find that

P_3^{-1}(t) ≥ δI.                                                             (A.1)

Therefore, introduce

P̄_3(t) = t · P_3(t).                                                          (A.2a)

From (3.19') it follows that P_2(t) is of the same order of magnitude as P_3(t), as long as [A_t - K(t)C_t] is stable. Therefore, introduce also

P̄_2(t) = t · P_2(t)                                                           (A.2b)
L̄(t) = t · L(t).                                                              (A.2c)

We can now rewrite (3.15) and (3.20) as

θ̂(t+1) = θ̂(t) + (1/t) L̄(t)[y(t) - C_t x̂(t)]                                  (A.3a)

P̄_3^{-1}(t+1) = P̄_3^{-1}(t) + (1/(t+1)) { P̄_3^{-1}(t) L̄(t) [S_2^{-1} - (1/t) L̄^T(t) P̄_3^{-1}(t) L̄(t)]^{-1} L̄^T(t) P̄_3^{-1}(t) + δI - P̄_3^{-1}(t) }.   (A.3b)

These two quantities correspond to the estimate vector in the general recursive algorithm of [13]. Let this estimate vector ξ(t) (in [13] called x(t)) collect θ̂(t) and Col P̄_3^{-1}(t), and let the observation vector φ(t) of [13] collect x̂(t) and Col P̄_2(t), where "Col" denotes some way to convert a matrix to a column vector. The updating of the ξ-vector is given by

ξ(t+1) = ξ(t) + (1/t) Q(ξ(t), φ(t), S_t)                                      (A.4)

with a slightly complex, but straightforward, definition of Q(·,·,·). Moreover, from (3.19') and (3.14) we have

φ(t+1) = [ A_t - K(t)C_t        0         ] φ(t) + ζ(θ̂(t), K(t), P̄_3(t), t)
         [       0         A_t - K(t)C_t  ]

where the matrix ζ(·,·,·) is obtained from (3.19'). Its stability properties obviously coincide with those of A(θ) - K(θ)C(θ) for a constant θ. This matrix is stable if θ ∈ D_s.

To verify the B-conditions of [13] we note that the quadratic structure of Q(·,·,·) and the assumed differentiability of C(θ), D(θ, x̂), etc., assure that B.3 holds. As the Lipschitz constant we may take

K_1(θ, φ, ρ, t) = C M(|θ| + ρ)(1 + |φ| + ρ)^2 (1 + |P̄_3| + ρ)

where

With K_1 as above, clearly also B.4 holds. Since all

and Step 3c of [13] is still valid. A similar problem arises from the fact that in (A.4) S_t enters, which is not a direct function of ξ(t). However, for large t, S_t - S(θ̂(t)) will be small and bounded by the same quantities as in (A.6), (A.7).

It now remains only to show that the associated differential equation in fact is (4.8). With ξ fixed to the values θ and P̄_3^{-1}, we find that the upper part of the φ-vector will be x̂(t;θ), given by (4.2), (4.3). Similarly, comparing (3.19') with (4.4), we find that the lower part of

(interchanging differentiation and expectation) where V(θ) is defined in the theorem. Along a solution θ(τ) to (4.8) we have

(d/dτ) V(θ(τ)) = -(1/2) f^T(θ(τ)) R^{-1}(τ) f(θ(τ)).

Since R^{-1}(τ) is a positive definite matrix, the function V
50 EEE TFWNSACTKONS ON AUTOMATIC CONTROL, VOL AC-24,NO. 1, FEBRUARY 1979
is a Lyapunov function for (4.8), i.e., it is decreasing H. Cox: “On the estimation of state Variables and parameters for
noisy dynamic systems,” ZEEE Trans. Automat. Contr., vol. AG9,
outside the set pp. 5-12, Feb. 1964.
[71 k P. Sage and C. D. Wakefield, “Maximum likelihood identifica-
tion of timevarying and random systemparameters,” Znt. J.
Contr., vol. 16, no. 1, pp. 81-100, 1972.
V.P. Leung and L. Padmanabhan, “Improvedestimation a l g e
which coincides with the set of stationary points of (5.1). rithms using smoothingand relinearization,” ChemEng. J., vol. 5,
As long as 6(<r)remains in D,, we have consequently pp. 197-208,1973.
L. W.Nelson and E. Stear, “The simultaneous on-line estimation
shown that the estimate 8(t) convergeswith probability of parameters and states in linear systems,” ZEEE Trans. Automat.
one to the set 0,.In fact, using result 2) of Lemma 4.1, it Conrr., vol. AC-21, pp. 94-98, Feb. 1976.
J. B. Farison, R. E. Graham, and R C.Shelton, “Identification
follows that among the isolated points in D, only the local and control of lineardiscretesystems,” ZEEE Trans. Automat.
minima of V ( 8 ) are possible convergence points of { O ( t ) } . Contr., vol AC-12, pp. 4 3 8 4 2 , Aug. 1967.
D. G. %hac, M. Athaas, J. Speyer, and P. K. Houpt, “Dynamic
We now turn to the question of the effect of projecting stochashc control of freeway corridor systems,Volume IV-
the estimates &t) into the set D,= {BIA(B) stable}. Since Estimation of traffic variables
via
extended Kalman filter
methods,” MIT, Cambridge, MA, Rep. ESL-R-611, Sept. 1975.
the function V ( 6 ) tends to infinity as 0 approaches the K. J. Astrom and P Eykhoff,“System Identification-A survey,’’
boundary of D,, the trajectories of(5.1)which point Automatica, vol. 7, pp. 123-162, 1971.
L.Ljung, “Analysis of recursive, stochastic algorithms,” ZEEE
“downhill” cannot cross the boundary. Therefore, Lemma Tram. Automat. Contr., vol. AC-22, pp. 551-575, Aug. 1977.
4.1:3) implies convergence of the estimates to 0,. Finally, A. H. Jazwinski, “Adaptivefiltering,” Automatica, vol.5, pp.
475-485,1969.
we shallsomewhatcommentupon the set 0,. Suppose R. K. Mehra, “On theidentification of variances and adaptive
that there exists a @,E0,such that (6.3) holds that is, 8, Kalman filtering,” IEEE Tram. Automat. Contr., vol.AC-15, pp.
175-184,Apr.1970.
givesthe true input-output transfer function for the R. F. Ohap and A. R Stubberud, “Adaptive minimum variance
model. From the representation (2.10) of the true system estimation in discrete-time linear systems,”in Control and Dynamic
System, Aaimces in Theory and Applications, vol. 12. New York:
we find that Academic, 1976, pp. 583-624.
P. R. Bilanger, “Estimation of noisecovariancematrices for a
V(t)=C,(ql-A,)-’B,u(t) linear time-varying stochastic process,” Automatica, vol. 10, pp.
267-277,1974.
L. Ljung, “The extended Kalman filter as a parameter estimator
+ C,(ql-A,)-lKOEO(t)+Eo(t) for linear systems,” Dep. Elec. Eng. Linkoping Univ., Linkopiug,
Sweden, LiTH-ISY-1-0154, May 1977.
= c(e,)a(t; e,) T. Siiderstrom, “An on-line algorithm for approximate maximum
likelihood identification of linear dynamic systems,”Div.Aut&
mat. Contr., Lund Inst. of Tech., Lund, Sweden, Rep. 7308, 1973.
+(Co(qz-A,)-’K,+I)EO(t) L. Ljung, “Some basic ideas in recursive identification,” Jmrnees
d’Auromatique, IRISA, Rennes, France, Sept 1977; IRISA Publica-
where the last equality follows from (6.1). We then have tion Interne, No. 92, pp. 36-49.
-, “On recursive prediction error identification algorithms,”
Dep. Hec. h g . , Linkoping Univ., Sweden, Rep. LiTH-ISY-1-0226,
q t , q = c(e,)Z(t;
e,) - c(e)i(t;e ) Aug.1978.
J. B. Moore and H. Weiss, “Recursive prediction error methods for
adaptive estimation,” Preprint, Dep. Elec. Eng.,Univ. Newcastle,
+ ( C ~ ( ~ l - A o ) - ’ ~ o + Z ) ~ , (B.1)
(t). N.S.W., Australia, Feb. 1978.
H. Akaike, “Maximum likelihoodidentification of Gaussian auto-
The estimate a ( t ; e )is formed entirely from u(s), s<t, regressive moving average models,” Biometrika, vol. 60, pp. 255-
and if the system operates in openloop (u(t) independent 265, 19.73.
K. J. Astrom and T. Soderstrom, “Uniqueness of the maximum
of c0(s), e,(s), all s), then the first two terms in (B.l) are likelihood estimatesof the parameters of an ARMA model,” ZEEE
independent of the last one. This means that So gives the Tram. Automat. Contr., vol. AC-19, pp. 768-774, Dec. 1974.
G. Stein, private communication.
globalminimum of the function EE~((t,6)(Qe)E1-(t,e), and L. Ljung,“Prediction error..identification methods,”Dep. Elec.
in particular, that f(0,) = 0. Eng., Linkiiping Univ., Linkoping, Sweden, Rep. LiTH-1SY-0139,
March 1977.
If the system operates in closed loop, where the input is -, “Convergence analysis of parametric identification
partly determinedfrom output feedback ( u ( t ) indepen- methods,” ZEEE Trans. Automcll. Contr., Vol. AC-23, No. 5, 1978.
P. E. Caines, “Prediction error identification methods for
dent of co(s), e,(s), s>t) then we have to require that stationary stochastic processes,” IEEE Trans. Automat. Contr., vol.
E ,= 0 in order to draw the same conclusions. AC-21, pp. 500-506, Aug. 1976.
We have thus concluded that Bo€ 0,. Whether this set
contains more points is a question of the parameteriza-
tion, and has to be studied separately. Lennart Ljung (S74-M75) was born in M h o ,
Sweden, on September 13, 1946. He received the
REFERENCES B.A., MS.,and Ph.D. degrees in 1967, 1970, and
1974,respectively, all fromLundUniversity,
[I] A.H. Jazwinski, Stochastic Processes and Filrering Theory. New Sweden.
York: Academic, 1970. From 1970to1976heheld various teaching
[2] G. N.Saridis,“Comparison of six on-lineidentificationalgo- and researchpositions at the Department of
rithms,” Automatica, vol. 10, no. 1, pp. 69-80, 1974. Automatic Control, Lund Institute of Technol-
[3] R. Isermann, U.Baur,W.Bamberger, P. Kneppo and H. Siebert, ogy, Sweden. In 1972 he spent six months with
“Comparison of six on-line identification and parameter estima- H A the
Laboratory for Adaptive Systems at the In-
tion algorithms,” Automatica, vol. 10. no. 1, pp. 81-104, 1974. stitute for Control Problems (IPU) inMoscow,
[4] T. Soderstrom, L. Ljung, and I. Gustavsson, “A theoretical analy- USSR. In 1974-1975hewas a Research Associate at the Information
sis of recursive identification algorithms,” Automatica, vol. 14, pp. Systems Laboratory at Stanford University.Since 1976he has been a
23 1-244, 1978.
[5] R. E. Kopp and R. J. Orford, “Linear regression applied to system Professor of Automatic Control in the Department of Electrical En-
identification for adaptive control systems,”AZAA J. vol. I, no. IO, gineering, Linkoping University, Linkoping, Sweden. His scientific inter-
pp. 2300-2306,1963. ests include aspects on identification, estimation, and adaptive control.