
Extended Kalman Filter Tutorial

Gabriel A. Terejanu
Department of Computer Science and Engineering
University at Buffalo, Buffalo, NY 14260
terejanu@buffalo.edu

Dynamic process

Consider the following nonlinear system, described by the difference equation and the observation model with additive noise:

x_k = f(x_{k-1}) + w_{k-1}    (1)
z_k = h(x_k) + v_k            (2)
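To make the setup concrete, here is a minimal sketch of such a system in Python, assuming an Euler-discretized pendulum with state x = (angle, angular rate) and a scalar nonlinear sensor; all names and constants are illustrative choices, not part of the tutorial:

```python
import numpy as np

dt = 0.01          # time step (illustrative)
g, L = 9.81, 1.0   # gravity, pendulum length (illustrative)

def f(x):
    """Euler-discretized pendulum dynamics; state x = [theta, omega]."""
    theta, omega = x
    return np.array([theta + dt * omega,
                     omega - dt * (g / L) * np.sin(theta)])

def h(x):
    """Nonlinear observation: measure the sine of the angle (m = 1)."""
    return np.array([np.sin(x[0])])
```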

The initial state x_0 is a random vector with known mean μ_0 = E[x_0] and covariance P_0 = E[(x_0 − μ_0)(x_0 − μ_0)^T]. In the following we assume that the random vector w_k captures uncertainties in the model and v_k denotes the measurement noise. Both are temporally uncorrelated (white noise), zero-mean random sequences with known covariances, and both are uncorrelated with the initial state x_0:

E[w_k] = 0,  E[w_k w_k^T] = Q_k,  E[w_k w_j^T] = 0 for k ≠ j,  E[w_k x_0^T] = 0 for all k    (3)
E[v_k] = 0,  E[v_k v_k^T] = R_k,  E[v_k v_j^T] = 0 for k ≠ j,  E[v_k x_0^T] = 0 for all k    (4)

Also the two random vectors w_k and v_k are uncorrelated:

E[w_k v_j^T] = 0 for all k and j    (5)

The vector-valued functions f(·) and h(·) are assumed to be C^1 functions (the function and its first derivative are continuous on the given domain). Dimensions and descriptions of the variables:

x_k    n × 1    State vector
w_k    n × 1    Process noise vector
z_k    m × 1    Observation vector
v_k    m × 1    Measurement noise vector
f(·)   n × 1    Process nonlinear vector function
h(·)   m × 1    Observation nonlinear vector function
Q_k    n × n    Process noise covariance matrix
R_k    m × m    Measurement noise covariance matrix

EKF derivation

Assuming the nonlinearities in the dynamic and the observation model are smooth, we can expand f(x_k) and h(x_k) in Taylor series and in this way approximate the forecast and the next estimate of x_k.

Model Forecast Step

Initially, since the only available information is the mean μ_0 and the covariance P_0 of the initial state, the initial optimal estimate x^a_0 and error covariance are:

x^a_0 = μ_0 = E[x_0]                      (6)
P_0 = E[(x_0 − x^a_0)(x_0 − x^a_0)^T]     (7)

Assume now that we have an optimal estimate x^a_{k-1} ≜ E[x_{k-1} | Z_{k-1}] with covariance P_{k-1} at time k − 1. The predictable part of x_k is given by:

x^f_k ≜ E[x_k | Z_{k-1}]
      = E[f(x_{k-1}) + w_{k-1} | Z_{k-1}]
      = E[f(x_{k-1}) | Z_{k-1}]    (8)

Expanding f(·) in a Taylor series about x^a_{k-1} we get:

f(x_{k-1}) ≈ f(x^a_{k-1}) + J_f(x^a_{k-1})(x_{k-1} − x^a_{k-1}) + H.O.T.    (9)

where J_f is the Jacobian of f(·) and the higher order terms (H.O.T.) are considered negligible. Hence the Extended Kalman Filter is also called the First-Order Filter. The Jacobian is defined as:

J_f ≜ [ ∂f_1/∂x_1  ∂f_1/∂x_2  ...  ∂f_1/∂x_n
           ...         ...    ...     ...
        ∂f_n/∂x_1  ∂f_n/∂x_2  ...  ∂f_n/∂x_n ]    (10)

where f(x) = (f_1(x), f_2(x), ..., f_n(x))^T and x = (x_1, x_2, ..., x_n)^T. Eq. (9) becomes:

f(x_{k-1}) ≈ f(x^a_{k-1}) + J_f(x^a_{k-1}) e_{k-1}    (11)

where e_{k-1} ≜ x_{k-1} − x^a_{k-1}. The expected value of f(x_{k-1}) conditioned on Z_{k-1} is:

E[f(x_{k-1}) | Z_{k-1}] ≈ f(x^a_{k-1}) + J_f(x^a_{k-1}) E[e_{k-1} | Z_{k-1}]    (12)

where E[e_{k-1} | Z_{k-1}] = 0. Thus the forecast value of x_k is:

x^f_k ≈ f(x^a_{k-1})    (13)
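In practice the Jacobian of eq. (10) is either derived by hand or approximated numerically; the sketch below uses central finite differences (a hypothetical helper, under the assumption that `func` maps a 1-D numpy array to a 1-D numpy array):

```python
import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    """Central-difference approximation of J[i, j] = d func_i / d x_j."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(func(x))
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.asarray(func(x + dx)) - np.asarray(func(x - dx))) / (2 * eps)
    return J
```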

Substituting (11) in the forecast error equation results in:

e^f_k ≜ x_k − x^f_k
      = f(x_{k-1}) + w_{k-1} − f(x^a_{k-1})
      ≈ J_f(x^a_{k-1}) e_{k-1} + w_{k-1}    (14)

The forecast error covariance is given by:

P^f_k ≜ E[e^f_k (e^f_k)^T]
      = J_f(x^a_{k-1}) E[e_{k-1} e_{k-1}^T] J_f^T(x^a_{k-1}) + E[w_{k-1} w_{k-1}^T]
      = J_f(x^a_{k-1}) P_{k-1} J_f^T(x^a_{k-1}) + Q_{k-1}    (15)
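Equations (13) and (15) map directly onto a predict step in code; a sketch, reusing the illustrative `f` and `numerical_jacobian` from above:

```python
def forecast_step(x_a, P, Q):
    """Model forecast: x_f = f(x_a), P_f = Jf P Jf^T + Q (eqs. 13, 15)."""
    Jf = numerical_jacobian(f, x_a)
    x_f = f(x_a)
    P_f = Jf @ P @ Jf.T + Q
    return x_f, P_f
```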

Data Assimilation Step

At time k we have two pieces of information: the forecast value x^f_k with covariance P^f_k, and the measurement z_k with covariance R_k. Our goal is to approximate the best unbiased estimate, in the least squares sense, x^a_k of x_k. One way is to assume the estimate is a linear combination of both x^f_k and z_k [4]. Let:

x^a_k = a + K_k z_k    (16)

From the unbiasedness condition:

0 = E[x_k − x^a_k | Z_k]
  = E[(x^f_k + e^f_k) − (a + K_k h(x_k) + K_k v_k) | Z_k]    (17)
  = x^f_k − a − K_k E[h(x_k) | Z_k]

a = x^f_k − K_k E[h(x_k) | Z_k]    (18)

Substituting (18) in (16):

x^a_k = x^f_k + K_k (z_k − E[h(x_k) | Z_k])    (19)

Following the same steps as in the model forecast step, expanding h(·) in a Taylor series about x^f_k we have:

h(x_k) ≈ h(x^f_k) + J_h(x^f_k)(x_k − x^f_k) + H.O.T.    (20)

where J_h is the Jacobian of h(·) and the higher order terms (H.O.T.) are considered negligible. The Jacobian of h(·) is defined as:

J_h ≜ [ ∂h_1/∂x_1  ∂h_1/∂x_2  ...  ∂h_1/∂x_n
           ...         ...    ...     ...
        ∂h_m/∂x_1  ∂h_m/∂x_2  ...  ∂h_m/∂x_n ]    (21)

where h(x) = (h_1(x), h_2(x), ..., h_m(x))^T and x = (x_1, x_2, ..., x_n)^T. Taking the expectation on both sides of (20) conditioned on Z_k:

E[h(x_k) | Z_k] ≈ h(x^f_k) + J_h(x^f_k) E[e^f_k | Z_k]    (22)

where E[e^f_k | Z_k] = 0. Substituting in (19), the state estimate is:

x^a_k ≈ x^f_k + K_k (z_k − h(x^f_k))    (23)

The error in the estimate x^a_k is:

e_k ≜ x_k − x^a_k
    = f(x_{k-1}) + w_{k-1} − x^f_k − K_k (z_k − h(x^f_k))
    = f(x_{k-1}) − f(x^a_{k-1}) + w_{k-1} − K_k (h(x_k) − h(x^f_k) + v_k)    (24)
    ≈ J_f(x^a_{k-1}) e_{k-1} + w_{k-1} − K_k (J_h(x^f_k) e^f_k + v_k)
    ≈ J_f(x^a_{k-1}) e_{k-1} + w_{k-1} − K_k J_h(x^f_k)(J_f(x^a_{k-1}) e_{k-1} + w_{k-1}) − K_k v_k
    = (I − K_k J_h(x^f_k)) J_f(x^a_{k-1}) e_{k-1} + (I − K_k J_h(x^f_k)) w_{k-1} − K_k v_k

Then the posterior covariance of the new estimate is:

P_k ≜ E[e_k e_k^T]
    = (I − K_k J_h(x^f_k)) J_f(x^a_{k-1}) P_{k-1} J_f^T(x^a_{k-1}) (I − K_k J_h(x^f_k))^T
      + (I − K_k J_h(x^f_k)) Q_{k-1} (I − K_k J_h(x^f_k))^T + K_k R_k K_k^T
    = (I − K_k J_h(x^f_k)) P^f_k (I − K_k J_h(x^f_k))^T + K_k R_k K_k^T    (25)
    = P^f_k − K_k J_h(x^f_k) P^f_k − P^f_k J_h^T(x^f_k) K_k^T + K_k J_h(x^f_k) P^f_k J_h^T(x^f_k) K_k^T + K_k R_k K_k^T

The posterior covariance formula holds for any K_k. As in the standard Kalman Filter, we find K_k by minimizing tr(P_k) with respect to K_k:

0 = ∂ tr(P_k) / ∂ K_k    (26)

  = −(J_h(x^f_k) P^f_k)^T − P^f_k J_h^T(x^f_k) + 2 K_k J_h(x^f_k) P^f_k J_h^T(x^f_k) + 2 K_k R_k

Hence the Kalman gain is:

K_k = P^f_k J_h^T(x^f_k) [ J_h(x^f_k) P^f_k J_h^T(x^f_k) + R_k ]^{-1}    (27)

Substituting this back in (25) results in:

P_k = (I − K_k J_h(x^f_k)) P^f_k − (I − K_k J_h(x^f_k)) P^f_k J_h^T(x^f_k) K_k^T + K_k R_k K_k^T
    = (I − K_k J_h(x^f_k)) P^f_k − [ P^f_k J_h^T(x^f_k) − K_k J_h(x^f_k) P^f_k J_h^T(x^f_k) − K_k R_k ] K_k^T
    = (I − K_k J_h(x^f_k)) P^f_k − [ P^f_k J_h^T(x^f_k) − K_k ( J_h(x^f_k) P^f_k J_h^T(x^f_k) + R_k ) ] K_k^T
    = (I − K_k J_h(x^f_k)) P^f_k − [ P^f_k J_h^T(x^f_k) − P^f_k J_h^T(x^f_k) ] K_k^T
    = (I − K_k J_h(x^f_k)) P^f_k    (28)
Summary of Extended Kalman Filter

Model and Observation:

x_k = f(x_{k-1}) + w_{k-1}
z_k = h(x_k) + v_k

Initialization:

x^a_0 = μ_0 with error covariance P_0

Model Forecast Step / Predictor:

x^f_k ≈ f(x^a_{k-1})
P^f_k = J_f(x^a_{k-1}) P_{k-1} J_f^T(x^a_{k-1}) + Q_{k-1}

Data Assimilation Step / Corrector:

x^a_k ≈ x^f_k + K_k (z_k − h(x^f_k))
K_k = P^f_k J_h^T(x^f_k) [ J_h(x^f_k) P^f_k J_h^T(x^f_k) + R_k ]^{-1}
P_k = (I − K_k J_h(x^f_k)) P^f_k
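A full filtering pass then simply alternates the two steps over a measurement sequence; a minimal sketch, with `zs`, `mu0`, `P0`, `Q`, and `R` as illustrative placeholders:

```python
def ekf(zs, mu0, P0, Q, R):
    """Run the EKF of the summary box over a sequence of measurements."""
    x_a, P = mu0, P0
    estimates = []
    for z in zs:
        x_f, P_f = forecast_step(x_a, P, Q)         # predictor
        x_a, P = assimilation_step(x_f, P_f, z, R)  # corrector
        estimates.append(x_a)
    return np.array(estimates)

# Example call (illustrative initial values):
# x_est = ekf(zs, np.array([0.1, 0.0]), np.eye(2), Q, R)
```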

Iterated Extended Kalman Filter

In the EKF, h(·) is linearized about the predicted state estimate x^f_k. The IEKF tries to linearize it about the most recent estimate, improving the accuracy this way [3, 1]. This is achieved by recalculating x^a_k, K_k, and P_k at each iteration.

Denote by x^a_{k,i} the estimate at time k and iteration i. The iteration process is initialized with x^a_{k,0} = x^f_k. Then the measurement update step becomes, for each i:

x^a_{k,i} ≈ x^f_k + K_{k,i} (z_k − h(x^a_{k,i−1}))
K_{k,i} = P^f_k J_h^T(x^a_{k,i−1}) [ J_h(x^a_{k,i−1}) P^f_k J_h^T(x^a_{k,i−1}) + R_k ]^{-1}
P_{k,i} = (I − K_{k,i} J_h(x^a_{k,i−1})) P^f_k

If there is little improvement between two consecutive iterations, the iterative process is stopped. The added accuracy comes at the cost of higher computational time.
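A sketch of the iterated update as a small loop, following the equations above; the stopping tolerance and iteration cap are illustrative choices:

```python
def iekf_update(x_f, P_f, z, R, max_iter=10, tol=1e-8):
    """Iterated corrector: relinearize h about the latest iterate (IEKF)."""
    x_i = np.array(x_f, dtype=float)
    for _ in range(max_iter):
        Jh = numerical_jacobian(h, x_i)              # J_h(x^a_{k,i-1})
        S = Jh @ P_f @ Jh.T + R
        K = np.linalg.solve(S.T, (P_f @ Jh.T).T).T   # K_{k,i}
        x_next = x_f + K @ (z - h(x_i))              # x^a_{k,i}
        converged = np.linalg.norm(x_next - x_i) < tol
        x_i = x_next
        if converged:                                # little improvement: stop
            break
    P = (np.eye(len(x_f)) - K @ Jh) @ P_f            # P_{k,i}
    return x_i, P
```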

Stability
Since Q_k and R_k are symmetric positive definite matrices, we can write:

Q_k = G_k G_k^T    (29)
R_k = D_k D_k^T    (30)
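In code, such factors can be obtained with a Cholesky decomposition; a sketch with illustrative covariance values:

```python
import numpy as np

Q = np.diag([1e-4, 1e-4])   # illustrative SPD process noise covariance
R = np.array([[1e-2]])      # illustrative SPD measurement noise covariance

G = np.linalg.cholesky(Q)   # lower-triangular factor, Q = G @ G.T
D = np.linalg.cholesky(R)   # lower-triangular factor, R = D @ D.T
```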

Denote by ϕ and χ the higher order terms that result from the following subtractions:

f(x_k) − f(x^a_k) = J_f(x^a_k) e_k + ϕ(x_k, x^a_k)    (31)
h(x_k) − h(x^a_k) = J_h(x^a_k) e_k + χ(x_k, x^a_k)    (32)

Konrad Reif showed in [2] that the estimation error remains bounded if the following conditions hold:

1. There are positive real numbers α, β, p_1, p_2 > 0 such that for every k:

   ‖J_f(x^a_k)‖ ≤ α    (33)
   ‖J_h(x^a_k)‖ ≤ β    (34)
   p_1 I ≤ P_k ≤ p_2 I    (35)

2. J_f is nonsingular for every k.

3. There are positive real numbers ε_ϕ, ε_χ, κ_ϕ, κ_χ > 0 such that the nonlinear functions ϕ and χ are bounded via:

   ‖ϕ(x_k, x^a_k)‖ ≤ ε_ϕ ‖x_k − x^a_k‖^2   with   ‖x_k − x^a_k‖ ≤ κ_ϕ    (36)
   ‖χ(x_k, x^a_k)‖ ≤ ε_χ ‖x_k − x^a_k‖^2   with   ‖x_k − x^a_k‖ ≤ κ_χ    (37)

Then the estimation error e_k is exponentially bounded in mean square and bounded with probability one, provided that the initial estimation error satisfies:

‖e_0‖ ≤ ε    (38)

and the covariance matrices of the noise terms are bounded via:

G_k G_k^T ≤ δ I    (39)
D_k D_k^T ≤ δ I    (40)

for some δ, ε > 0.
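Conditions (33)-(35) can be spot-checked numerically along a filter trajectory; a sketch, where the bound constants are illustrative placeholders rather than values from [2]:

```python
def check_bounds(Jf, Jh, P, alpha=1e3, beta=1e3, p1=1e-9, p2=1e9):
    """Spot-check the boundedness conditions (33)-(35) for one step."""
    eigs = np.linalg.eigvalsh(P)                     # eigenvalues of symmetric P_k
    return (np.linalg.norm(Jf, 2) <= alpha           # (33): ||J_f|| bounded
            and np.linalg.norm(Jh, 2) <= beta        # (34): ||J_h|| bounded
            and p1 <= eigs.min() and eigs.max() <= p2)  # (35): p1 I <= P_k <= p2 I
```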

Conclusion

In the EKF the state distribution is propagated analytically through a first-order linearization of the nonlinear system. The filter does not account for the fact that x_k is a random variable with inherent uncertainty, and it requires that the first two terms of the Taylor series dominate the remaining terms. A second-order version exists [4, 5], but its computational complexity makes it infeasible for practical use in real-time applications or high-dimensional systems.

References
[1] Arthur Gelb. Applied Optimal Estimation. M.I.T. Press, 1974.

[2] K. Reif, S. Gunther, E. Yaz, and R. Unbehauen. Stochastic Stability of the Discrete-Time Extended Kalman Filter. IEEE Transactions on Automatic Control, 1999.

[3] Tine Lefebvre and Herman Bruyninckx. Kalman Filters for Nonlinear Systems: A Comparison of Performance.

[4] John M. Lewis and S. Lakshmivarahan. Dynamic Data Assimilation: A Least Squares Approach. 2006.

[5] R. van der Merwe. Sigma-Point Kalman Filters for Probabilistic Inference in Dynamic State-Space Models. Technical report, 2003.

[Figure 1: The block diagram for the Extended Kalman Filter]
