# The Extended Kalman Filter: An Interactive Tutorial for Non-Experts – Part 5

### Part 5: Computing the Gain

So now we have a formula we can actually use for computing the current state estimate $\hat{x}_k$ based on the previous estimate $\hat{x}_{k-1}$, the current observation $z_k$, and the current gain $g_k$:

$\hat{x}_k = \hat{x}_{k-1} + g_k(z_k - \hat{x}_{k-1})$
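As a quick sanity check, this update rule takes only a couple of lines of code. This is a sketch of my own; the function and variable names are not from the tutorial:

```python
def update_estimate(x_prev, z, g):
    """Scalar state update: blend the previous estimate with the observation.

    x_prev -- previous state estimate (x-hat_{k-1})
    z      -- current observation z_k
    g      -- current gain g_k (0 = trust the estimate, 1 = trust the observation)
    """
    return x_prev + g * (z - x_prev)

# With gain 0 the observation is ignored; with gain 1 it replaces the estimate.
print(update_estimate(10.0, 12.0, 0.0))  # 10.0
print(update_estimate(10.0, 12.0, 1.0))  # 12.0
print(update_estimate(10.0, 12.0, 0.5))  # 11.0
```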

So how do we compute the gain? The answer is: indirectly, from the noise. Recall that each observation is associated with a particular noise value:

$z_k = x_k + v_k$

We don't know the individual noise value for an observation, but we typically do know the average noise: for example, the published accuracy of a sensor tells us approximately how noisy its output is. Call this value $r$; there is no subscript on it because it does not depend on time, but is instead a property of the sensor.

Then we can compute the current gain $g_k$ in terms of $r$:

$g_k = p_{k-1}/(p_{k-1} + r)$

where $p_k$ is a prediction error that is computed recursively:

$p_k = (1 - g_k)p_{k-1}$
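These two formulas make up one step of the recursion: the previous prediction error determines the gain, and the gain in turn shrinks the prediction error. A minimal sketch (the names here are my own, not the tutorial's):

```python
def gain_and_error(p_prev, r):
    """Compute the current gain from the previous prediction error p_prev
    and the sensor-noise value r, then update the prediction error."""
    g = p_prev / (p_prev + r)   # g_k = p_{k-1} / (p_{k-1} + r)
    p = (1 - g) * p_prev        # p_k = (1 - g_k) * p_{k-1}
    return g, p

g, p = gain_and_error(1.0, 1.0)
print(g, p)  # 0.5 0.5
```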

As with the state-estimation formula, let’s think about what these two formulas mean before we continue.

Let’s say that the error $p_{k-1}$ on our previous prediction was zero. Then our current gain $g_k$ will be $0/(0+r)=0$, and our next state estimate will be no different from our current state estimate. Which makes sense, because we shouldn’t be adjusting our state estimate if our prediction was accurate. At the other extreme, say the prediction error is one. Then the gain will be $1 / (1 + r)$. If $r$ is zero, i.e., if there is no noise in our system, then the gain will be one, and our new state estimate $\hat{x}_k$ will be determined entirely by our observation $z_k$. But as $r$ grows large, the gain becomes arbitrarily small. In other words, when the system is noisy enough, a bad prediction will have to be ignored: noise overcomes our ability to correct bad predictions.

What about the third formula, calculating the prediction error $p_k$ recursively from its previous value $p_{k-1}$ and the current gain $g_k$? Again, it helps to think of what happens for extreme values of the gain: when $g_k = 0$, we have $p_k = p_{k-1}$. So, just as with the state estimation, a zero gain means no update to the prediction error. When, on the other hand, $g_k = 1$, we have $p_k = 0$. Hence the maximum gain corresponds to zero prediction error, with the current observation alone used to update the current state.
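Putting the three formulas together gives a complete filter loop for this scalar case. The sketch below is my own code with illustrative values (a constant true value of 10 and a noise value $r$ of 1); it shows how the prediction error $p$, and with it the gain, shrinks as observations accumulate:

```python
import random

def kalman_scalar(observations, r, x0, p0):
    """Run the scalar recursion of this tutorial over a list of observations."""
    x, p = x0, p0
    estimates = []
    for z in observations:
        g = p / (p + r)        # gain from previous prediction error and noise
        x = x + g * (z - x)    # state-estimate update
        p = (1 - g) * p        # prediction-error update
        estimates.append(x)
    return estimates

random.seed(0)
true_value = 10.0
zs = [true_value + random.gauss(0, 1) for _ in range(50)]
xs = kalman_scalar(zs, r=1.0, x0=zs[0], p0=1.0)
print(round(xs[-1], 2))  # final estimate, close to the true value of 10
```

With a constant $r$ and this error update, the gain decays toward zero, so later observations move the estimate less and less, which is exactly the behavior the formulas above describe.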


Technically $r$ is really the variance of the noise signal; i.e., the spread, or squared average distance of individual noise values from their mean. The Kalman Filter will work equally well if this noise variance is allowed to change over time, but in most applications it can be assumed constant. Likewise, $p_k$ is technically the variance of the estimation process at step $k$; it is the average of the squared error of our predictions. Indeed, as Tim Wilkin has pointed out to me, the state is a stochastic (random-like) variable/vector (an instantaneous value of a stochastic process) and it doesn’t have a “true” value at all! The estimate is merely the most likely value for the process model describing the state.