## The Extended Kalman Filter: An Interactive Tutorial for Non-Experts

### Part 19: The Jacobian

To answer our second question – how to generalize our single-valued nonlinear state/observation model to a multi-valued system – it will be helpful to recall the equation for the sensor component of our linear model:

\[ z_k = C x_k \]

For a system with two state values and three sensors, we can rewrite this as:

\[
z_k =
\begin{bmatrix}
c_{11} & c_{12} \\
c_{21} & c_{22} \\
c_{31} & c_{32}
\end{bmatrix}
\begin{bmatrix}
x_{k1} \\
x_{k2}
\end{bmatrix}
=
\begin{bmatrix}
c_{11} x_{k1} + c_{12} x_{k2} \\
c_{21} x_{k1} + c_{22} x_{k2} \\
c_{31} x_{k1} + c_{32} x_{k2}
\end{bmatrix}
\]

Although you may not have seen notation like this before, it is pretty straightforward: the double numerical index on each element of the $C$ matrix indicates its row and column position, and, more importantly, the relationship it expresses. For example, $c_{12}$ is the coefficient (multiplier) relating the current value $z_{k1}$ of the first sensor to the second component $x_{k2}$ of the current state.
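As a quick sanity check on this notation, the matrix/vector product above can be sketched in a few lines of Python; the coefficient and state values here are arbitrary illustrative numbers, not anything from this tutorial:

```python
# Sensor model z_k = C x_k for two states and three sensors.
# C has one row per sensor and one column per state;
# the values below are made up for illustration.
C = [[1.00, 0.5],
     [0.25, 2.0],
     [0.00, 1.5]]

x = [4.0, 2.0]  # current state x_k = [x_k1, x_k2]

# Each sensor reading is a weighted sum of the state components,
# exactly as in the expanded matrix on the right-hand side above.
z = [sum(c * xi for c, xi in zip(row, x)) for row in C]

print(z)  # → [5.0, 5.0, 3.0]
```

Note how $c_{12} = 0.5$ only ever multiplies $x_{k2}$ in the first sensor's reading, which is exactly the "row = sensor, column = state" relationship described above.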

For a nonlinear model, there will likewise be a matrix whose number of rows equals the number of sensors and whose number of columns equals the number of states; each element of this matrix, however, will contain *the current value of the first derivative of that sensor value with respect to that state value*. Mathematicians call such a derivative a partial derivative, and they call the matrix of such derivatives the Jacobian. Computing the Jacobian is beyond the scope of the present tutorial [17], but Matlab-based EKF tutorials and implementations with GPS examples show that it involves relatively little code.
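To give a concrete sense of what such a matrix of partial derivatives looks like, here is a minimal finite-difference sketch in Python; the two-state, three-sensor function `h` below is a made-up example, not a model from this tutorial:

```python
# Numerically approximate the Jacobian of a sensor function
# h: (states) -> (sensor values), one row per sensor, one
# column per state. h here is an arbitrary illustrative example.
def h(x):
    x1, x2 = x
    return [x1 * x2,         # sensor 1
            x1 + 2.0 * x2,   # sensor 2
            x2 * x2]         # sensor 3

def jacobian(func, x, eps=1e-6):
    fx = func(x)
    J = []
    for i in range(len(fx)):          # row: which sensor
        row = []
        for j in range(len(x)):       # column: which state
            xp = list(x)
            xp[j] += eps              # nudge one state value
            row.append((func(xp)[i] - fx[i]) / eps)
        J.append(row)
    return J

J = jacobian(h, [2.0, 3.0])
# Analytically the Jacobian at (2, 3) is
# [[x2, x1], [1, 2], [0, 2*x2]] = [[3, 2], [1, 2], [0, 6]]
```

In a real EKF you would usually write out these partial derivatives by hand (or with a symbolic-math tool) rather than approximate them numerically, but either way the resulting matrix is small and cheap to evaluate.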

If these concepts seem confusing, think about a survey in which a group of people is asked to rate a couple of different products on a scale (say, 1 to 5). The overall score given to each product will be the average of all the people’s ratings on that product. To see how one person influenced the overall rating for a single product, we would look at that person’s rating on that product. Each such person/product rating is like a partial derivative, and the table of such person/product ratings is like the Jacobian. Replace people with sensors and products with states, and you understand the sensor model of the Extended Kalman Filter.

All that remains at this point is to apply the same nonlinear generalization to the state-transition model. In other words, our linear model

\[x_k = A x_{k-1} + w_k \]

becomes

\[x_k = f(x_{k-1}) + w_k \]

where the linear term $A x_{k-1}$ is replaced by a (possibly nonlinear) state-transition function $f$, and, in the prediction of the covariance, $A$ is replaced by the Jacobian of $f$. In fact, the convention is to use $F_k$ for this Jacobian (since it corresponds to the function $f$ and changes over time), and to use $H_k$ for the Jacobian of the sensor function $h$. Incorporating the control signal $u_k$ into the state-transition function, we get the “full Monty” for the Extended Kalman Filter that you are likely to encounter in the literature:

**Model:**

$x_k = f(x_{k-1}, u_k) + w_k$

$z_k = h(x_{k}) + v_k$

**Predict:**

$\hat{x}_k = f(\hat{x}_{k-1}, u_k)$

$P_k = F_{k-1} P_{k-1} F^T_{k-1} + Q_{k-1}$

**Update:**

$G_k = P_k H^T_k (H_k P_k H^T_k + R)^{-1}$

$\hat{x}_k \leftarrow \hat{x}_{k} + G_k(z_k - h(\hat{x}_{k}))$

$P_k \leftarrow (I - G_k H_k) P_k$
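The predict/update cycle above can be sketched for the simplest possible case – a single state value and a single sensor – so that every matrix in the equations becomes an ordinary number. The functions `f` and `h`, the noise values, and the sensor/control data below are illustrative assumptions, not part of this tutorial:

```python
import math

# One-dimensional EKF sketch: state x, sensor reading z = h(x) + v.
# f, h, Q, R and the data below are made-up illustrative choices.
def f(x, u): return x + u               # state-transition function
def F(x, u): return 1.0                 # its derivative (1x1 Jacobian) wrt x
def h(x):    return math.sqrt(x)        # nonlinear sensor function
def H(x):    return 0.5 / math.sqrt(x)  # its derivative (1x1 Jacobian) wrt x

Q, R = 0.01, 0.1     # process and measurement noise (co)variances

xhat, P = 4.0, 1.0   # initial state estimate and covariance

for z, u in [(2.1, 0.0), (2.05, 0.1), (2.2, 0.0)]:  # fake sensor/control data
    # Predict
    xhat = f(xhat, u)
    P = F(xhat, u) * P * F(xhat, u) + Q
    # Update (linearize h around the predicted state)
    Hk = H(xhat)
    G = P * Hk / (Hk * P * Hk + R)      # gain
    xhat = xhat + G * (z - h(xhat))
    P = (1.0 - G * Hk) * P

print(xhat, P)  # final state estimate and covariance
```

With sensor readings near 2.1 and $h(x) = \sqrt{x}$, the estimate settles near $x \approx 2.1^2 \approx 4.4$, and the covariance $P$ shrinks as each measurement is folded in – the same behavior as the linear filter, just with $f$, $h$, and their Jacobians standing in for $A$ and $C$.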

**Previous**: Computing the Derivative

**Next**: TinyEKF

[17] In most EKF examples I’ve seen, the state transition function is simply the identity function $f(x) = x$. So its Jacobian is just the identity matrix described in Section 12. Likewise for the observation function $h$ and its Jacobian $H$.