Kalman filter

Authors: Ellen L. Hamaker & Sophie W. Berkhout

Affiliation: Methodology & Statistics Department, Utrecht University

Published: 2025-07-11

This article has not been peer-reviewed yet and may be subject to change.

This article is about the Kalman filter (Kalman, 1960), an algorithm that is often used to analyze N=1 time series data. While for some N=1 time series models there are simpler ways to estimate the parameters (cf. Hamaker et al., 2002), the advantage of the Kalman filter is that it can accommodate a very wide variety of models, including multivariate time series models and models with latent variables. Moreover, since the Kalman filter can be used to obtain maximum likelihood estimates, it offers you the opportunity to compare the fit of different models via likelihood-ratio tests or information criteria.

It is not really necessary to know how the Kalman filter works exactly to be able to use it. Yet, having a basic understanding of it will help you see how for instance missing data or unequally spaced observations can be dealt with when estimating time series models (Durbin & Koopman, 2012; Harvey, 1989). Moreover, the Kalman filter also forms the basis of estimation techniques that have been developed for more advanced modeling challenges based on N>1 intensive longitudinal data (ILD), such as multilevel extensions of time series models (Asparouhov et al., 2018). This makes it a rather central tool in dynamic modeling.

To help you develop a basic appreciation of what the Kalman filter is, you can read more below about: 1) the state-space model, which forms the foundation of the Kalman filter; 2) how the Kalman filter based on the state-space model can be used for estimation of the latent states, as well as the unknown parameters of a time series model; and 3) how missing data are dealt with in this framework.

1 State-space model

The Kalman filter is based on the state-space model, which is presented only briefly in this article. While this model can be interpreted as a latent (vector) autoregressive model—where the latent processes are referred to as states—there is a wide variety of time series models that can be specified within this framework. Hence, it may be better to refer to it as the state-space framework, and say that you can specify a model in state-space format.

The state-space framework that is presented here is characterized by time-invariant model parameters; while this is the simplest form of the state-space model, there are other forms that allow for time-varying parameters as well (e.g., Kim & Nelson, 1999). In any case, the state-space framework is based on two equations: the measurement equation and the transition equation.

The measurement equation is used to relate the vector \(y_t\) with observations at occasion \(t\) to the underlying states \(a_t\), through

\[ y_t = d + S a_t + e_t\]

where \(d\) contains the intercepts, \(S\) contains the factor loadings, and \(e_t\) is a vector with residuals that can be thought of as measurement error. The latter are assumed to come from a normal distribution with means of zero and covariance matrix \(R\). One way to think about this equation is that it is used to relate observed variables to latent variables. However, in some applications the measurement errors are absent (i.e., \(R\) is a null matrix), such that the states in \(a_t\) are not really latent variables.

The states \(a_t\) evolve over time according to a dynamic process, which is captured in the transition equation, that is

\[a_t = c + H a_{t-1} + G z_t\]

where \(c\) is a vector with intercepts, \(H\) is a matrix with structural coefficients that capture the dynamics of the process, \(z_t\) is a vector with residuals, and \(G\) is an additional weight matrix (which can be very helpful in specifying particular models, as shown in the article on the state-space model). The residuals in this equation are also assumed to come from a multivariate normal distribution with mean zero and covariance matrix \(Q\).
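To make these two equations concrete, below is a minimal Python sketch that simulates data from one possible state-space model: a single latent state following a first-order autoregressive process, measured by three indicators. All variable names and parameter values are chosen here purely for illustration and are not part of the framework itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Model matrices for a univariate latent state measured by three indicators
d = np.array([3.0, 3.0, 3.0])        # measurement intercepts
S = np.array([[1.0], [0.8], [1.2]])  # factor loadings
R = np.diag([0.3, 0.4, 0.2])         # measurement error covariance matrix
c = np.array([0.0])                  # transition intercept
H = np.array([[0.5]])                # autoregressive coefficient of the state
Q = np.array([[1.0]])                # covariance matrix of the dynamic residuals

T = 200
a = np.zeros((T, 1))                 # latent states
y = np.zeros((T, 3))                 # observations

a_prev = np.zeros(1)                 # start the latent process at zero
for t in range(T):
    # Transition equation: a_t = c + H a_{t-1} + G z_t (with G an identity matrix here)
    a[t] = c + H @ a_prev + rng.multivariate_normal(np.zeros(1), Q)
    # Measurement equation: y_t = d + S a_t + e_t
    y[t] = d + S @ a[t] + rng.multivariate_normal(np.zeros(3), R)
    a_prev = a[t]
```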

Tara is interested in studying depressed mood in a particular person, and decides to use a daily diary study for this. She obtains a large number of repeated observations from the participant using the 5 items: sad, hopeless, lonely, down, and worthless.

Tara wants to use these items as indicators of an underlying factor (i.e., a state), which she interprets as depressed mood. She considered just performing a factor analysis on the data for this purpose. But Tara believes that the temporal order of the observations matters, and that the underlying depressed mood factor does not fluctuate randomly over time.

Instead, Tara wants to consider a first-order autoregressive model for the latent process. Therefore, she decides to make use of a state-space model to specify the model that she is interested in for these data.

Washington has obtained N=1 time series data using a single item that asked the participant to indicate how happy they have been during the past day. Although there is only a single item, Washington is concerned that there will be measurement error in these scores, due to momentary influences and imprecision in answering the question.

After reading the article on reliability for single item measures, Washington decides to use a model in which the measurement error can be separated from the underlying process, assuming the latter is a first-order autoregressive model. This is a special case of the state-space model, and it can therefore be estimated with a Kalman filter.
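One way to write this special case in the notation introduced above (with all matrices reduced to scalars) is

\[ y_t = a_t + e_t, \qquad a_t = c + H a_{t-1} + z_t, \]

where \(d = 0\), \(S = 1\), and \(G = 1\), \(H\) is the autoregressive parameter of the underlying happiness process, \(Q\) is the innovation variance, and \(R\) is the measurement error variance.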

The two equations that make up the state-space model strongly suggest that it is a latent first-order (vector) autoregressive model: The transition equation can be understood as a [first-order vector autoregressive model], and the measurement equation can be seen as a way to account for the noise in the observations. However, the state-space model is very flexible and allows you to specify various models within this framework. These include models with lagged factor loadings, moving-average terms, and higher-order autoregressive components. This makes the state-space model an encompassing framework for all sorts of time series models, both univariate and multivariate, with and without measurement error.

Depending on your observed data and the model that you specify, both \(y_t\) and \(a_t\) can be either univariate or multivariate. For instance, for some models, \(y_t\) will be univariate while \(a_t\) is multivariate; for other models it will be the other way around. In this article, the vectors \(y_t\) and \(a_t\) are referred to in the singular (e.g., “a state”, “the observation”), covering both the univariate and the multivariate cases.

2 Estimation with the Kalman filter

The Kalman filter can be used for two distinct estimation problems in the context of a state-space model: It can be used to estimate the latent trajectory of the states \(a_t\) over time, or it can be used to estimate unknown parameters of the state-space model in the model matrices \(d\), \(S\), \(R\), \(c\), \(H\), \(G\) and \(Q\). Both are discussed in more detail below.

2.1 Estimating the latent state trajectory

Originally, the Kalman filter algorithm was developed to estimate the latent state \(a_t\) based on the noisy observations \(y_t\) and their past values, and to make predictions of future states \(a_{t+1}\), \(a_{t+2}\), et cetera (Kalman, 1960). In this estimation problem, all the model parameters in \(d\), \(S\), \(R\), \(c\), \(H\), \(G\) and \(Q\) are (assumed to be) known.

A particular application of the Kalman filter for this purpose was in tracking the Apollo spacecraft, to determine its trajectory and decide whether adaptations in terms of speed and direction were needed (Suddath et al., 1967). Nowadays, the Kalman filter is still used in self-driving cars, to determine their current position (i.e., state) based on noisy data that come from multiple sensors, and to make predictions about future states.

The Kalman filter consists of repeating the same series of steps for every occasion, from the beginning to the end of the observed time series. These steps are visualized in Figure 1.

Figure 1: Visualization of the steps of the Kalman filter over time. First, a prediction is made for the state at occasion \(t\) (i.e., \(a_t\)) based on all information available up to time point \(t-1\); this prediction is referred to as \(a_{t|t-1}\). Using this prediction, a prediction is made for the observation at occasion \(t\) based on all information available up to occasion \(t-1\); this prediction is referred to as \(y_{t|t-1}\). This prediction is then compared to the observed \(y_t\), to obtain the one-step-ahead prediction error, referred to as \(r_{t|t-1}\). Subsequently, this information is used to update the estimate of the latent state, giving the updated state estimate \(a_{t|t}\). This updated estimate is then used to predict the subsequent state \(a_{t+1|t}\) (i.e., the prediction of \(a_{t+1}\) based on all information available up to \(t\)), and so on.

To estimate the state at occasion \(t\), first a prediction is made of \(a_t\) based on all the information available up to the previous occasion (i.e., \(t-1\)); this prior information is contained in \(a_{t-1|t-1}\), which is the state estimate for occasion \(t-1\). The predicted state is obtained using

\[ a_{t|t-1} = c + H a_{t-1|t-1}. \]

With this predicted state, a prediction is made for the observation, using

\[ y_{t|t-1} = d + S a_{t|t-1}. \]

When the observation at \(t\) becomes available, the one-step-ahead prediction error can be determined, which is the deviation between the observed score and its prediction, that is

\[r_{t|t-1} = y_t - y_{t|t-1}.\]

This one-step-ahead prediction error is partly determined by measurement error (i.e., \(e_t\)), but also by the discrepancy between the unknown true state \(a_t\) and the predicted state \(a_{t|t-1}\).

Subsequently, the update equations are run to obtain the updated state estimate \(a_{t|t}\), which is given by

\[ a_{t|t} = a_{t|t-1} + K_t r_{t|t-1}, \]

where the matrix \(K_t\) is known as the Kalman weight matrix. This matrix is based on the covariance matrix of the one-step-ahead prediction error \(r_{t|t-1}\), but also on the covariance matrix of the error in the prediction of the state, that is, the difference between the true (but unknown) state \(a_t\), and the predicted state \(a_{t|t-1}\) (cf. Kim & Nelson, 1999).
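To bring the prediction and update steps together in one place, here is a minimal Python sketch of the filter recursion for the case where all model matrices are known. The function name, the straightforward matrix inversion, and the way the initial state is passed in are choices made for this illustration only.

```python
import numpy as np

def kalman_filter(y, d, S, R, c, H, G, Q, a0, P0):
    """Run the Kalman filter and return the updated state estimates a_{t|t}."""
    T = y.shape[0]
    a_upd, P_upd = a0, P0
    filtered = np.zeros((T, a0.shape[0]))
    for t in range(T):
        # Predict the state: a_{t|t-1} = c + H a_{t-1|t-1}
        a_pred = c + H @ a_upd
        P_pred = H @ P_upd @ H.T + G @ Q @ G.T
        # Predict the observation: y_{t|t-1} = d + S a_{t|t-1}
        y_pred = d + S @ a_pred
        # One-step-ahead prediction error and its covariance matrix
        r = y[t] - y_pred
        F = S @ P_pred @ S.T + R
        # Kalman gain and update: a_{t|t} = a_{t|t-1} + K_t r_{t|t-1}
        K = P_pred @ S.T @ np.linalg.inv(F)
        a_upd = a_pred + K @ r
        P_upd = P_pred - K @ S @ P_pred
        filtered[t] = a_upd
    return filtered
```

With the simulated data from the earlier sketch, calling `kalman_filter(y, d, S, R, c, H, np.eye(1), Q, np.zeros(1), np.eye(1))` would return the filtered trajectory of the latent state.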

Tara wants to use the five indicators sad, hopeless, lonely, down and worthless to track a participant’s daily fluctuations in their underlying depressed mood; she believes the latter evolves over time as a first-order autoregressive process.

If Tara knows the factor loadings, residual variances, intercepts and the autoregression that characterizes the latent process, she can make a prediction for today’s depressed mood based on the estimate of depressed mood yesterday. Such forecasts ahead of time can be useful in monitoring the participant’s mood. For instance, when the forecast shows that the mood of her participant is likely to be particularly low, Tara may reach out to them to check how they are doing.

Washington has collected daily diary data with a single item that measures a participant’s happiness. He wants to identify when there were large shifts in the happiness scores between two consecutive occasions, as these may signal that something unusual happened. However, he does not want to use changes in the observed scores, as such change scores are known to be unreliable when the original measures contain measurement error.

Therefore, Washington runs the Kalman filter and obtains the latent state estimates. He then takes the difference of the latent scores (rather than the observed scores), and uses these to detect when the largest jumps between two consecutive days occurred. Because he is accounting for measurement error in his estimation of the underlying states, he does not have to worry about the notorious unreliability of difference scores.

The estimated latent state can then be used in monitoring a person’s fluctuations over time, and to intervene when the process seems to get off track. In that sense, the use of the Kalman filter in the context of psychological processes can be very similar to how it was used originally in monitoring the trajectory of the Apollo spacecraft.

2.2 Initiating the Kalman filter

The algorithm described above—consisting of first predicting and then updating the latent state estimate—has to be repeated for each and every occasion in the time series, from beginning (\(t=1\)) to end (\(t=T\)). This implies that, to be able to run the Kalman filter, you need to start it up at the first occasion: To be able to make the prediction \(a_{1|0}\), you need \(a_{0|0}\).

Because there is no information about the process before the first occasion, you need to choose an initial state vector estimate \(a_{0|0}\) as well as its covariance matrix (which captures the uncertainty you have about how good an estimate \(a_{0|0}\) is of the true state \(a_{0}\)). With these in place, you can estimate the entire latent series \(a_t\) using \(a_{t|t}\) obtained through running the Kalman filter equations.
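As a sketch of two common choices (the specific numbers are illustrative assumptions, not recommendations): for a stationary latent process you can use its unconditional mean and variance, and when little is known you can use a "diffuse" initialization with a very large initial variance.

```python
import numpy as np

# Stationary initialization for a univariate latent process
# a_t = c + phi * a_{t-1} + z_t with innovation variance q (illustrative values):
c, phi, q = 0.0, 0.5, 1.0
a0 = np.array([c / (1 - phi)])          # unconditional mean of the state
P0 = np.array([[q / (1 - phi**2)]])     # unconditional variance of the state

# Alternatively, a (nearly) diffuse initialization:
a0_diffuse = np.zeros(1)
P0_diffuse = np.array([[1e6]])          # very large uncertainty about a_0
```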

3 Estimating the unknown model parameters

To estimate the states \(a_t\) over time using the Kalman filter approach described above, you have to know all the values of the parameters in the model matrices \(d\), \(S\), \(R\), \(c\), \(H\), \(G\) and \(Q\). Of course, in many applications, you actually do not know these values, and you want to estimate (some of) them. In that case you can plug specific byproducts from the Kalman filter into a likelihood function, which is then maximized with respect to the unknown parameters.

You can understand this procedure as follows. First, starting values for all the unknown parameters are generated (this is usually done implicitly by the software but users can also specify them explicitly). Given these particular values, the Kalman filter described above is run for each occasion. Hence, at each occasion a prediction is made of the state, and based on this a prediction is made of the observation. When the observation is available at that occasion, the difference between the observation and the prediction is computed to get the one-step-ahead prediction error. This one-step-ahead prediction error along with its covariance matrix (which is included in the optimal Kalman gain matrix \(K_t\)), are used in the update equations to get an updated estimate of the latent state. These steps of the procedure are represented with the four rectangles (in blue and green) on the right of Figure 2.

Figure 2: Schematic overview of the Kalman filter used for estimating model parameters. The procedure starts with specifying the initial state estimate \(a_{0|0}\) and its covariance matrix (capturing the uncertainty about the estimate), as well as starting values for all the unknown model parameters. Subsequently, the algorithm cycles through the same sequence of steps (prediction and updating) from beginning (\(t=1\)) to end (\(t=T\)), after which the parameter values are changed and the whole sequence is run again, and so on.

To obtain maximum likelihood estimates of the model parameters, the one-step-ahead prediction error and its covariance matrix are included in the likelihood function, as indicated in the orange rectangle in Figure 2. Subsequently, it is determined whether the end of the series is reached yet. If the end is not yet reached (i.e., \(t<T\)), the procedure is repeated for the next occasion. When the end is reached (i.e., \(t=T\)), the likelihood for this set of parameter values can be determined. Subsequently, a different set of parameter values is picked, \(t\) is set to the beginning (i.e., \(t=1\)), and the filter is started again.

The likelihood value that is obtained when \(t=T\) is specific to the particular parameter values that were used to run the filter. The procedure is repeated to find the parameter values that maximize the likelihood; these are then referred to as the maximum likelihood estimates.
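The sketch below illustrates how this "prediction error decomposition" of the likelihood can be programmed in Python for the simplest case: Washington's latent first-order autoregressive model with measurement error. It is a simplified illustration written for this article, not the implementation of any particular software package; the parameterization (log-variances) and the starting values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, y):
    """Negative log-likelihood of a latent AR(1) plus measurement error model,
    built from the one-step-ahead prediction errors of the Kalman filter."""
    c, phi, log_q, log_r = params
    q, r = np.exp(log_q), np.exp(log_r)            # variances kept positive via logs
    # Stationary initialization of the state estimate and its variance
    a = c / (1 - phi) if abs(phi) < 1 else 0.0
    P = q / (1 - phi**2) if abs(phi) < 1 else 1e6
    ll = 0.0
    for yt in y:
        a_pred = c + phi * a                       # predicted state a_{t|t-1}
        P_pred = phi**2 * P + q
        v = yt - a_pred                            # one-step-ahead prediction error
        F = P_pred + r                             # ... and its variance
        ll += -0.5 * (np.log(2 * np.pi * F) + v**2 / F)   # likelihood contribution
        K = P_pred / F                             # Kalman gain
        a = a_pred + K * v                         # updated state a_{t|t}
        P = P_pred - K * P_pred
    return -ll

# With y a 1D array of observed scores, maximum likelihood estimates follow from:
# fit = minimize(neg_loglik, x0=[0.0, 0.3, 0.0, 0.0], args=(y,), method="Nelder-Mead")
```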

Tara wants to see how strongly the indicators sad, hopeless, lonely, down and worthless are related to the underlying factor that represents depressed mood. She therefore estimates the factor loadings freely, along with their standard errors.

Tara also considers running a second model in which she constrains the factor loadings to be equal to each other. This model is a special case of the first model in which the factor loadings were estimated freely; she can thus do a log-likelihood difference test to determine whether the factor loadings can be constrained or not. If they can be constrained, this means that a particular increase in the underlying factor is associated with the same expected change in each indicator.
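A small sketch of what such a log-likelihood difference (likelihood-ratio) test looks like in Python; the log-likelihood values and the difference in the number of freely estimated parameters below are made up for illustration.

```python
from scipy.stats import chi2

loglik_free = -512.3         # hypothetical maximized log-likelihood, loadings free
loglik_constrained = -516.8  # hypothetical maximized log-likelihood, loadings equal
df = 4                       # hypothetical difference in number of free parameters

lr = -2 * (loglik_constrained - loglik_free)  # likelihood-ratio test statistic
p_value = chi2.sf(lr, df)                     # compared to a chi-square distribution
```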

Washington estimates the latent first-order autoregressive model using daily measurements of happiness. He is particularly interested in the size of the measurement error variance: Is this large compared to the total variance of the observed series?

To determine the proportion of variance due to measurement error, Washington computes the total variance of the series based on the estimated model parameters, and divides the measurement error variance by the total variance. He obtains a proportion of 0.46, but now he wonders how precise this estimate is.
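Assuming the underlying process is a stationary first-order autoregressive process with autoregressive parameter \(\phi\) and innovation variance \(\sigma^2_z\), and the measurement error variance is \(\sigma^2_e\) (symbols chosen here for illustration), the computation Washington has in mind is

\[ \mathrm{Var}(y_t) = \frac{\sigma^2_z}{1-\phi^2} + \sigma^2_e, \qquad \text{proportion due to measurement error} = \frac{\sigma^2_e}{\sigma^2_z/(1-\phi^2) + \sigma^2_e}. \]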

To obtain more insight into this, he considers using Bayesian estimation based on the Kalman filter; that way, he can compute the proportion at every iteration of the Gibbs sampler, and thereby obtain a posterior distribution for the proportion, which he can then use to determine the 95% credible interval around the point estimate.

4 Missing data and unequally spaced observations

It is common to have missing observations in a time series (and thus also in N>1 ILD). Moreover, in many ILD studies, the observations are obtained at unequal time intervals.
Both data features can easily be dealt with within a Kalman filter approach, both when estimating the states and when estimating the model parameters (Durbin & Koopman, 2012; Harvey, 1989).

4.1 How to handle missing data in the Kalman filter

If your time series has missing observations, the algorithm that was described above will no longer work. However, with only a slight adjustment, the procedure can handle missing observations without any problem. This is visualized in Figure 3, and further explained below.

Figure 3: Schematic overview of the Kalman filter used for estimating model parameters showing how missing observations are dealt with.

After the state and the observation are predicted, it is checked whether there is an observation available for this occasion, as shown in the yellow diamond on the right of Figure 3. When the observation at occasion \(t\) is available, the procedure as described before is followed. However, if it is missing, the one-step-ahead prediction error \(r_{t|t-1}\) cannot be computed. This has two consequences.

First, it implies that the state prediction \(a_{t|t-1}\) cannot be updated using the one-step-ahead prediction error. Instead, the Kalman filter will set the updated state \(a_{t|t}\) equal to the predicted state \(a_{t|t-1}\), and use this to make a prediction for the next state at \(t+1\) (i.e., \(a_{t+1|t}\)).

Second, the by-products of the Kalman filter that are used in the likelihood function are not available when there is no observation. This means that for this occasion there is no contribution to the likelihood function.
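In code, this amounts to a single check per occasion. As an illustration, the loop body of the earlier `neg_loglik` sketch could be adjusted as follows (assuming missing scores are coded as `NaN`):

```python
import numpy as np

# Replacement for the loop body of the earlier neg_loglik() sketch:
a_pred = c + phi * a                     # predicted state a_{t|t-1}
P_pred = phi**2 * P + q
if np.isnan(yt):
    # Missing observation: no prediction error, no update, no likelihood contribution
    a, P = a_pred, P_pred                # a_{t|t} is simply set equal to a_{t|t-1}
else:
    v = yt - a_pred                      # one-step-ahead prediction error
    F = P_pred + r                       # ... and its variance
    ll += -0.5 * (np.log(2 * np.pi * F) + v**2 / F)
    K = P_pred / F                       # Kalman gain
    a = a_pred + K * v                   # updated state a_{t|t}
    P = P_pred - K * P_pred
```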

4.2 Kalman filter versus other methods for missing data

The way the Kalman filter deals with missing data is different from imputation or case-wise deletion. In imputation, data are created to fill in the missing values, thereby creating a complete data set. The imputed values are then treated in the analysis as if they are regular observations. To avoid bias in the parameter estimates and underestimation of the standard errors, this procedure needs to be repeated multiple times. While the Kalman filter does result in a predicted observation (i.e., \(y_{t|t-1}\)), this prediction is never treated as if it were an actual observation; for occasions where \(y_t\) is missing, no data are created or imputed.

Case-wise deletion is another popular way to deal with missing observations, in particular in the context of regression analysis. It implies that all the cases (here: occasions) where a predictor and/or the outcome is missing are simply dropped from the analysis. A regression analysis approach can be used to estimate the parameters of autoregressive models (without measurement error), but it has the disadvantage that every missing observation leads to multiple occasions having to be dropped: Every observation that serves as the outcome at a particular occasion, also serves as (one of) the predictor(s) at other occasion(s). Hence, dropping all these from the analysis can seriously reduce the sample size (in terms of occasions) on which the analysis is based.

The Kalman filter does not require any particular action from the user, and can easily handle large amounts of missing data. As long as the missing values are missing at random, the Kalman filter will result in maximum likelihood estimates, and the standard errors will correctly reflect the uncertainty of these point estimates. Having multiple consecutive missing observations is also not a problem for the algorithm; it simply results in state estimates that are closer to the long-run average of the state, while the parameter estimates are not affected by this.

4.3 Kalman filter and unequally spaced observations

The problem of unequally spaced observations can be transformed into a problem of missing data. Consequently, by using a Kalman filter that can handle missing data, the problem of varying time intervals between your original observations can be adequately dealt with in the analysis.

Missing observations are common in ILD studies based on [self-report] (e.g., due to missed prompts) and [passive sensing] (e.g., due to equipment failure), and the combination of sampling design and modeling assumptions may also require additional missing values. Specifically, when using experience sampling methods (ESM) or signal-contingent ecological momentary assessments (EMA), the observations are intentionally made at random time points throughout the day. The idea is that participants then cannot anticipate the next measurement moment, which allows researchers to capture them as they live their daily lives (Stone et al., 2007). However, the unequal time intervals between consecutive observations do not fit well with many of the dynamic modeling techniques that are based on the assumption that intervals are of the same length.

To handle such unequally spaced observations, you can add missing values in between observations with longer intervals, such that the measurement occasions become approximately equally spaced. Obviously, this can result in having a great number of missing observations, but if you use an algorithm like the Kalman filter which can handle missingness, this is not a problem. A comparison between this approach and a continuous time modeling approach for univariate and multivariate autoregressive models can be found in de Haan-Rietdijk et al. (2017).
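A minimal Python sketch of this approach, with made-up timestamps (in hours) and an arbitrarily chosen grid spacing; both are assumptions for the illustration only.

```python
import numpy as np

# Hypothetical unequally spaced measurements (time in hours since the start)
times = np.array([0.0, 2.7, 6.1, 8.9, 15.2, 18.0])
happy = np.array([6.0, 5.5, 6.5, 7.0, 4.5, 5.0])

spacing = 3.0                                     # chosen grid interval (hours)
grid_index = np.round(times / spacing).astype(int)

# Build an (approximately) equally spaced series; occasions without an
# observation become NaN and are treated as missing by the Kalman filter.
y_grid = np.full(grid_index.max() + 1, np.nan)
y_grid[grid_index] = happy                        # note: if two observations round to
                                                  # the same slot, the later one
                                                  # overwrites the earlier one here
```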

Washington has a second data set consisting of momentary measures of happiness that were obtained from a single person by using experience sampling at semi-random time points throughout the day for several days. Characteristic of these data is that the time interval between measurements varies, which in turn implies that the strength of the relation between an observation and the previous observation will vary.

One way to deal with this is by taking a [continuous time modeling approach]. But Washington decides to insert missing values in between the observations, to make sure that the occasions in the data file become approximately equally spaced in time. Although this implies that there are quite a few missing observations and that often multiple consecutive observations are missing, this is not a problem in the analysis, as Washington uses a Kalman filter to estimate the models of interest.

5 Extensions of the Kalman filter

In this article, you have seen a presentation of a very basic version of the Kalman filter. There are in fact many extensions available, some of which are mentioned below to give you a sense of the breadth of this modeling framework.

5.1 Exogenous variables

A typical way in which the Kalman filter can be extended is by having a state-space model that also includes exogenous variables. These may then be included in the measurement equation and/or in the transition equation.
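As a sketch (with \(x_t\), \(B\), and \(B^{*}\) introduced here only for illustration), such an extended model could look like

\[ y_t = d + S a_t + B x_t + e_t, \qquad a_t = c + H a_{t-1} + B^{*} x_t + G z_t, \]

where \(x_t\) contains the exogenous variables, and \(B\) and \(B^{*}\) contain their regression coefficients in the measurement and the transition equation, respectively.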

Washington has daily diary data of happiness from a single participant. He wonders whether this person tends to feel happier during the weekend than during weekdays, and whether there is perhaps an increasing or decreasing trend over time. Moreover, he wonders whether the hours of sunshine each day also have an effect on this person’s happiness.

To account for such patterns, Washington considers extending the model with various exogenous variables. For instance, he wants to include a dummy variable for the weekend, a linear trend for time since the beginning of the study, and a variable that captures the number of hours of sunshine each day of the study. As Washington believes these exogenous variables are likely to have an influence on the underlying happiness, he wants to include them in the transition equation rather than in the measurement equation.

While it is also possible to include exogenous variables in the simple state-space model presented above, this may require some less elegant ways of specifying the model in state-space format. Moreover, when they have to be included in the vector with outcomes (i.e., \(y_t\)), adding them to or dropping them from the model implies that the likelihood is taken over different data, which makes it impossible to compare such models based on their likelihoods (including the information criteria based on these).

5.2 Time-varying parameters

There are also extensions that allow the parameters in the model matrices to vary over time. A particular way in which this can be achieved is by reversing the location of the parameters and the variables in the model: For instance, by including an observed exogenous variable in the loading matrix \(S\) that relates the observed outcome \(y_t\) to the underlying state \(a_t\) (thereby making it \(S_t\) rather than \(S\)), the regression coefficient of this exogenous variable can be further modeled in the transition equation (Durbin & Koopman, 2012; Kim & Nelson, 1999). Examples of this in the context of psychological processes can be found in Chow et al. (2009) and Molenaar (1987).
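As a small illustration of this reversal (with notation introduced here for the example only): a regression coefficient \(\beta_t\) of an observed predictor \(x_t\) can be made time-varying by placing \(x_t\) in the (now time-varying) loading matrix and treating \(\beta_t\) itself as the state, for instance

\[ y_t = x_t \beta_t + e_t, \qquad \beta_t = \beta_{t-1} + z_t, \]

so that \(S_t = x_t\), \(a_t = \beta_t\), and \(H = 1\); the coefficient then follows a random walk and its trajectory can be estimated occasion by occasion with the Kalman filter.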

5.3 Regime-switching

Another way in which parameters may change over time is by extending the state-space model with regime switching. This is the focus of the book by Kim & Nelson (1999), who developed a Kalman filter procedure that allows for switches between different regimes over time according to a [hidden Markov model], while each regime is characterized by its own state-space model with its own model parameters. Examples of this in the context of psychological processes can be found in Hamaker et al. (2016) and Hamaker & Grasman (2012).

6 Think more about

At its core, the Kalman filter is an online estimation procedure for prediction problems: It makes predictions about the (near) future (also referred to as forecasts), and updates these estimates in light of the information that becomes available as time progresses. However, many of the analysis problems in ILD research are not of such an online nature; rather, the whole series has already been observed, and the interest is in estimating the parameters of the process for the purpose of description, or perhaps also for the study of causation. For such problems, you can also use an offline procedure known as the Kalman smoother; this also results in estimates of the states, but it uses both past and future information for this (Durbin & Koopman, 2012). The Kalman smoother can also be used to estimate the model parameters, but this is based on an expectation-maximization (EM) algorithm and may not result in maximum likelihood estimates.

A particularly challenging aspect of using the Kalman filter is how to specify the initial state and its covariance matrix. Although there is some general advice on how to specify these, and to some extent the choices made there will fade out over time (when the time series is long enough), there is a problem if you want to use information criteria (like the AIC or BIC) to compare models that have different numbers of elements in their state vector. How to handle this is well beyond the scope of MATILDA, but you can find more on this in the chapter on initialization of the Kalman filter in Durbin & Koopman (2012).

Finally, the estimation of states using the Kalman filter can be thought of as a form of Bayesian inference: It produces a prior for the state based on previous information, and then combines this with information from the current observation. A Bayesian approach can also be used in the estimation of the model parameters; you can read more on this in the context of a regime-switching state-space model in Kim & Nelson (1999).

7 Takeaway

The Kalman filter allows you to estimate a wide variety of (latent) time series models. It can be used to estimate the underlying latent trajectory based on noisy observations, but it can also be used to estimate the unknown parameters of your time series model. To be able to use the Kalman filter for these purposes, you have to formulate your model as a state-space model.

The Kalman filter handles missing observations during analysis, and does not rely on data imputation or deletion of data. When data are missing at random, it will give you maximum likelihood estimates. It has been shown to be able to handle large amounts of missing data, making it an attractive approach for ILD studies based on self-report or sampling designs with unequally spaced observations.

8 Further reading

We have collected various topics for you to read more about below.

Read more: State-space model
Read more: Check model fit
  • [White noise]
Read more: Univariate time series models
Read more: Multivariate time series models
  • [Vector autoregressive models]
  • [Vector moving-average models]
  • [Vector autoregressive moving-average models]

Acknowledgments

This work was supported by the European Research Council (ERC) Consolidator Grant awarded to E. L. Hamaker (ERC-2019-COG-865468).

References

Asparouhov, T., Hamaker, E. L., & Muthén, B. (2018). Dynamic structural equation modeling. Structural Equation Modeling: A Multidisciplinary Journal, 25, 359–388. https://doi.org/10.1080/10705511.2017.1406803
Chow, S.-M., Ho, M.-H. R., Hamaker, E. L., & Allaire, J. C. (2009). Using innovative outliers to detect discrete shifts in dynamics in group-based state-space models. Multivariate Behavioral Research, 44, 465–496. https://doi.org/10.1080/00273170903103324
Durbin, J., & Koopman, S. J. (2012). Time series analysis by state space methods (2nd ed.). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199641178.001.0001
de Haan-Rietdijk, S., Voelkle, M., Keijsers, L., & Hamaker, E. L. (2017). Discrete- vs. continuous-time modeling of unequally spaced experience sampling method data. Frontiers in Psychology, 8, 1849. https://doi.org/10.3389/fpsyg.2017.01849
Hamaker, E. L., Dolan, C. V., & Molenaar, P. C. M. (2002). On the nature of SEM estimates of ARMA parameters. Structural Equation Modeling, 9, 347–368. https://doi.org/10.1207/S15328007SEM0903_3
Hamaker, E. L., & Grasman, R. P. P. P. (2012). Regime switching state-space models applied to psychological processes: Handling missing data and making inferences. Psychometrika, 77, 400–422. https://doi.org/10.1007/S11336-012-9254-8
Hamaker, E. L., Grasman, R. P. P. P., & Kamphuis, J. H. (2016). Modeling BAS dysregulation in bipolar disorder: Illustrating the potential of time series analysis. Assessment, 23, 436–446. https://doi.org/10.1177/1073191116632339
Harvey, A. C. (1989). Forecasting, structural time series models and the Kalman filter. Cambridge University Press. https://doi.org/10.1017/CBO9781107049994
Kalman, R. E. (1960). A new approach to linear filtering and prediction problems. Journal of Basic Engineering: Transactions of the ASME Series D, 82, 35–45. https://doi.org/10.1115/1.3662552
Kim, C.-J., & Nelson, C. R. (1999). State-space models with regime switching: Classical and Gibbs-sampling approaches with applications. The MIT Press. https://doi.org/10.7551/mitpress/6444.001.0001
Molenaar, P. C. M. (1987). Dynamic assessment and adaptive optimization of the psychotherapeutic process. Behavioral Assessment, 9, 389–416. https://doi.org/10.1007/BF00959854
Stone, A. A., Shiffman, S., Atienza, A. A., & Nebeling, L. (2007). The science of real-time data capture: Self-reports in health research. Oxford Academic. https://doi.org/10.1093/oso/9780195178715.001.0001
Suddath, J. H., Kidd, R. H., & Reinhold, A. G. (1967). A linearized error analysis of onboard primary navigation systems for the Apollo lunar module (NASA TN D-4027). https://ntrs.nasa.gov/api/citations/19670025568/downloads/19670025568.pdf

Citation

BibTeX citation:
@article{hamaker2025,
  author = {Hamaker, Ellen L. and Berkhout, Sophie W.},
  title = {Kalman Filter},
  journal = {MATILDA},
  number = {2025-07-11},
  date = {2025-07-11},
  url = {https://matilda.fss.uu.nl/articles/kalman-filter.html},
  langid = {en}
}
For attribution, please cite this work as:
Hamaker, E. L., & Berkhout, S. W. (2025). Kalman filter. MATILDA, 2025-07-11. https://matilda.fss.uu.nl/articles/kalman-filter.html