State-space model

Authors: Ellen L. Hamaker (Methodology & Statistics Department, Utrecht University) and Sophie W. Berkhout (Methodology & Statistics Department, Utrecht University)

Published: 2025-07-11

This article has not been peer-reviewed yet and may be subject to change.

This article is about the state-space model, and is closely related to the article about the Kalman filter. The Kalman filter is an algorithm that was developed originally to estimate the latent (i.e., unobserved) states and to predict future states of a process based on noisy N=1 time series data. Within the context of process research in psychology, the Kalman filter is often used to obtain maximum likelihood estimates of the parameters of a [time series model].

If you want to use the Kalman filter to analyze your data, you have to formulate your model of interest in state-space format. Although this format may seem somewhat restrictive at first, because it is based on two particular equations—one that allows for measurement error, and one that allows for first-order auto- and cross-lagged regressions—a wide variety of models can be accommodated by this framework (cf. Durbin & Koopman, 2012; Harvey, 1989; Kim & Nelson, 1999). This requires some creativity at times, and this article shows some typical examples of how common time series models can be specified using these two basic equations.

Below you can read more about: 1) the two basic equations that make up the state-space model; 2) how commonly used univariate time series models can be specified in this framework; 3) how typical multivariate time series models can be specified in this framework; and 4) how latent time series models can be specified in this framework.

1 State-space model

The state-space model is a framework consisting of two equations:

  • a measurement equation that relates the observed variables at occasion \(t\) (i.e., \(y_t\)) to the latent states at that occasion (i.e., \(a_t\));

  • a transition equation that relates the latent states at occasion \(t\) (i.e., \(a_t\)) to themselves at the previous occasion (i.e., \(a_{t-1}\)).

Both are discussed in more detail below.

1.1 The measurement equation

The measurement equation of the state-space model relates the vector with \(q\) observed variables (i.e., \(y_t\)) to the vector with \(n\) latent variables (i.e., \(a_t\)) that are assumed to underlie the observations. The expression for this is

\[ y_t = d + S a_t + e_t\]

where

  • \(d\) is a vector with \(q\) intercepts

  • \(S\) is a (\(q\) by \(n\)) matrix with factor loadings by which the observed variables \(y_t\) are regressed on the state variables \(a_t\)

  • \(e_t\) is a vector with \(q\) residuals; these are the parts of the observations in \(y_t\) that cannot be accounted for by the underlying latent variables in \(a_t\), and can thus be interpreted as measurement errors (and/or unique errors); the residuals are typically assumed to come from a multivariate normal distribution with mean vector zero, and (\(q\) by \(q\)) covariance matrix \(R\).

1.2 The transition equation

The transition equation, also referred to as the state equation, is used to predict the current state \(a_t\) based on the preceding state \(a_{t-1}\). The expression for this is

\[a_t = c + H a_{t-1} + G z_t\]

where

  • \(c\) is a vector with \(n\) intercepts

  • \(H\) is a (\(n\) by \(n\)) matrix with structural coefficients, relating the current state to the previous state of the system

  • \(z_t\) is a vector with \(n\) residuals; typically these are assumed to come from a multivariate normal distribution with mean vector zero and (\(n\) by \(n\)) covariance matrix \(Q\)

  • \(G\) is an additional (\(n\) by \(n\)) weight matrix that can be very helpful in specifying particular models within this framework (as will become clear below); if there is no need for such weighting, it can take on the form of an identity matrix (with 1’s on the diagonal and 0’s elsewhere), which is the same as dropping this matrix from the equation.

1.3 Conclusion

At first, the state-space model presented above may seem rather limited: It can be interpreted as a model with a first-order (vector) autoregressive structure at the latent level (i.e., in the transition equation), and additional measurement error at the observed level (i.e., in the measurement equation).
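To make these two equations concrete, here is a minimal simulation sketch in Python (numpy). All parameter values are hypothetical choices for illustration; the matrices \(d\), \(S\), \(R\), \(c\), \(H\), \(G\), and \(Q\) correspond directly to the symbols defined above.

```python
import numpy as np

rng = np.random.default_rng(1)

q, n, T = 2, 2, 200                  # observed variables, latent states, occasions

# Hypothetical parameter values for illustration
d = np.zeros(q)                      # measurement intercepts
S = np.array([[1.0, 0.0],
              [0.8, 0.3]])           # (q by n) factor loadings
R = 0.25 * np.eye(q)                 # measurement error covariance
c = np.zeros(n)                      # transition intercepts
H = np.array([[0.5, 0.2],
              [0.0, 0.3]])           # (n by n) transition matrix
G = np.eye(n)                        # weight matrix (identity: no reweighting)
Q = np.eye(n)                        # innovation covariance

a = np.zeros(n)
y = np.empty((T, q))
for t in range(T):
    z = rng.multivariate_normal(np.zeros(n), Q)
    a = c + H @ a + G @ z            # transition equation
    e = rng.multivariate_normal(np.zeros(q), R)
    y[t] = d + S @ a + e             # measurement equation
```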

But the state-space framework actually allows for a very wide variety of time series models. It simply requires you to reformulate the model you are interested in such that it fits within the two equations presented above. Hence the state-space model can be thought of as a broad framework, rather than a narrow model (somewhat akin to the structural equation model, which also encompasses a wide variety of models). In the following, we show how to specify several typical time series models in the state-space framework.

2 Univariate time series models in state-space format

The state-space framework and the Kalman filter can be used to estimate the parameters of typical [univariate time series models], such as autoregressive moving-average models. In this section, you can see several examples of this.

2.1 Autoregressive models

In an autoregressive model the observation \(y_t\) is regressed on earlier versions of itself. The order of the model determines how many preceding versions are used. For instance, a first-order autoregressive model is based on regressing \(y_t\) on \(y_{t-1}\). This can be expressed as \(y_t = c + \phi_1 y_{t-1} + \epsilon_t\). This model fits quite easily within the state-space format: It requires a univariate version of the transition equation, and there is no measurement error in the measurement equation so that \(y_t\) equals \(a_t\).
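As a minimal illustration, the following sketch simulates a first-order autoregressive process in this degenerate state-space form, where the state is the observed variable itself; the parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
c, phi1, sigma = 0.5, 0.6, 1.0      # hypothetical AR(1) parameters

T = 500
a = c / (1 - phi1)                  # start the state at the process mean
y = np.empty(T)
for t in range(T):
    a = c + phi1 * a + rng.normal(0.0, sigma)  # transition equation
    y[t] = a                        # measurement equation: y_t = a_t, no error
```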

Higher-order autoregressive models are a bit more challenging to reformulate such that they fit within the state-space framework. Consider the second-order autoregressive model, which can be written as \(y_t = c + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \epsilon_t\). To express this in state-space format, you need to keep both \(y_{t-1}\) and \(y_{t-2}\) in the state vector \(a_{t-1}\) (i.e., \(a_{t-1} = \begin{bmatrix} y_{t-1}\\ y_{t-2}\end{bmatrix}\)) so \(y_t\) can be regressed on both of them. The transition equation then becomes

\[ \begin{bmatrix} y_t\\ y_{t-1}\end{bmatrix} = \begin{bmatrix}c\\ 0 \end{bmatrix} + \begin{bmatrix} \phi_1 & \phi_2\\ 1 & 0 \end{bmatrix} \begin{bmatrix} y_{t-1}\\ y_{t-2}\end{bmatrix} + \begin{bmatrix} \epsilon_t\\ 0\end{bmatrix} = \begin{bmatrix} c + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \epsilon_t\\ y_{t-1}\end{bmatrix}. \]

The accompanying measurement equation for this model is used to select the first element from the state vector as the observed variable, that is

\[\begin{bmatrix} y_t\end{bmatrix} = \begin{bmatrix}1 & 0\end{bmatrix} \begin{bmatrix} y_t\\ y_{t-1}\end{bmatrix}.\]

This strategy can be generalized to an autoregressive model of any order: It is based on keeping \(y_{t-1}\) up to \(y_{t-p}\) in the state vector \(a_{t-1}\), that is

\[ \begin{bmatrix} y_t\\ y_{t-1} \\ \dots \\y_{t-p+1}\end{bmatrix} = \begin{bmatrix}c\\ 0 \\ \dots \\ 0\end{bmatrix} + \begin{bmatrix} \phi_1 & \phi_2 & \dots & \phi_p\\ 1 & 0 & \dots & 0\\ \dots& \dots & \dots& \dots\\ 0 & \dots & 1 & 0 \end{bmatrix} \begin{bmatrix} y_{t-1}\\ y_{t-2}\\ \dots \\ y_{t-p}\end{bmatrix} + \begin{bmatrix} \epsilon_t\\ 0 \\ \dots \\ 0\end{bmatrix}, \]

and the measurement equation is then used to select the first element from the state vector, that is

\[\begin{bmatrix} y_t\end{bmatrix} = \begin{bmatrix}1 & 0 & \dots & 0\end{bmatrix} \begin{bmatrix} y_t\\ y_{t-1} \\ \dots \\y_{t-p+1}\end{bmatrix}.\]

This shows that an autoregressive model of any order can be specified in state-space format by rewriting it as a [first-order vector autoregressive model]. It results in having elements in the state vector \(a_t\) that are actually observations made at earlier occasions. Hence, the subscripts of the elements in \(a_t\) do not necessarily match \(t\).
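The sketch below verifies this equivalence numerically for the second-order case: simulating via the companion-form transition matrix reproduces the direct AR(2) recursion exactly when both use the same innovations (parameter values are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(3)
c, phi1, phi2 = 0.3, 0.5, -0.2     # hypothetical AR(2) parameters
T = 300
eps = rng.normal(size=T)

# State-space (companion) form: state a_t = [y_t, y_{t-1}]
cvec = np.array([c, 0.0])
H = np.array([[phi1, phi2],
              [1.0,  0.0]])

a = np.zeros(2)
y_ss = np.empty(T)
for t in range(T):
    a = cvec + H @ a + np.array([eps[t], 0.0])
    y_ss[t] = a[0]                  # measurement equation selects the first element

# Direct AR(2) recursion with the same innovations
y = np.zeros(T)
for t in range(T):
    y[t] = (c + phi1 * (y[t-1] if t >= 1 else 0.0)
              + phi2 * (y[t-2] if t >= 2 else 0.0) + eps[t])

print(np.allclose(y_ss, y))        # True: the two formulations coincide
```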

2.2 Moving-average models

A [moving-average model] is a weighted sum of current and past innovations, where the order determines how many past innovations are included. If you want to use a state-space formulation for such models, you have to make sure that the past innovations remain available for later observations.

A first-order moving-average model is expressed as \(y_t = c + \epsilon_t + \theta_1 \epsilon_{t-1}\). There are different ways to specify this within a state-space framework. One way of doing this is to specify the transition equation as

\[ \begin{bmatrix} y_t\\ \epsilon_{t} \end{bmatrix} = \begin{bmatrix}c\\ 0 \end{bmatrix} + \begin{bmatrix} 0 & \theta_1 \\0 & 0 \end{bmatrix} \begin{bmatrix} y_{t-1}\\ \epsilon_{t-1} \end{bmatrix} + \begin{bmatrix} 1&0\\1&0\end{bmatrix} \begin{bmatrix} \epsilon_{t}\\0 \end{bmatrix} = \begin{bmatrix} c + \theta_1 \epsilon_{t-1} + \epsilon_t\\ \epsilon_{t} \end{bmatrix}.\]

This example shows why the weight matrix \(G\) can be useful to have: It allows you to add the innovation \(\epsilon_t\) to both the first and the second element in the state vector. This is necessary both to construct \(y_t\) and to keep \(\epsilon_t\) available so that it can serve as \(\epsilon_{t-1}\) at the next occasion.

The measurement equation for this model is then used to select the first element from the state vector \(a_t\), that is

\[\begin{bmatrix} y_t\end{bmatrix} = \begin{bmatrix}1 & 0\end{bmatrix} \begin{bmatrix} y_t\\ \epsilon_{t}\end{bmatrix}.\]
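The following sketch checks this MA(1) specification numerically: feeding the same innovations through the matrices \(H\) and \(G\) above reproduces the direct moving-average computation (parameter values are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(4)
c, theta1 = 0.2, 0.7               # hypothetical MA(1) parameters
T = 300
eps = rng.normal(size=T)

cvec = np.array([c, 0.0])
H = np.array([[0.0, theta1],
              [0.0, 0.0]])
G = np.array([[1.0, 0.0],
              [1.0, 0.0]])          # routes eps_t into both state elements

a = np.zeros(2)                     # state: [y_t, eps_t]
y_ss = np.empty(T)
for t in range(T):
    a = cvec + H @ a + G @ np.array([eps[t], 0.0])
    y_ss[t] = a[0]                  # measurement selects the first element

# Direct MA(1): y_t = c + eps_t + theta1 * eps_{t-1}
y = c + eps + theta1 * np.concatenate(([0.0], eps[:-1]))
print(np.allclose(y_ss, y))        # True
```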

This approach can be extended to any order of a moving-average model: It is based on keeping the past innovations up to \(\epsilon_{t-q}\) available in the state vector \(a_{t-1}\), that is \[ \begin{bmatrix} y_t\\ \epsilon_{t} \\ \epsilon_{t-1} \\ \dots\\ \epsilon_{t-q+1}\end{bmatrix} = \begin{bmatrix}c\\ 0 \\ 0 \\ \dots \\ 0\end{bmatrix} + \begin{bmatrix} 0 & \theta_1 & \theta_2 & \dots & \theta_q\\ 0 & 0 & 0 & \dots& 0\\ 0 & 1 & 0 & \dots& 0\\ \dots & \dots& \dots& \dots & \dots\\ 0 & 0 & \dots & 1 & 0 \end{bmatrix} \begin{bmatrix} y_{t-1}\\ \epsilon_{t-1} \\ \epsilon_{t-2} \\ \dots \\ \epsilon_{t-q} \end{bmatrix} + \begin{bmatrix} 1&0&0&\dots&0\\ 1&0&0&\dots&0\\ 0&0&0&\dots&0\\ \dots&\dots&\dots&\dots&\dots\\ 0&0&0&\dots&0\end{bmatrix} \begin{bmatrix} \epsilon_{t}\\0 \\0 \\ \dots\\0\end{bmatrix}\] \[= \begin{bmatrix} c + \theta_q \epsilon_{t-q} + \dots + \theta_2 \epsilon_{t-2} + \theta_1 \epsilon_{t-1} + \epsilon_t\\ \epsilon_{t} \\ \epsilon_{t-1}\\ \dots\\ \epsilon_{t-q+1}\end{bmatrix}.\]

The measurement equation is then as before used to select the first element from the state vector, that is,

\[\begin{bmatrix} y_t\end{bmatrix} = \begin{bmatrix}1 & 0 & 0 & \dots & 0\end{bmatrix} \begin{bmatrix} y_t\\ \epsilon_{t} \\ \epsilon_{t-1} \\ \dots\\ \epsilon_{t-q+1}\end{bmatrix}.\]

As with the higher-order autoregressive models, the higher-order moving-average models are based on having elements in the state vector \(a_t\) that are actually innovations from the past, such that their time index does not match that of the state vector itself.

2.3 Autoregressive moving-average models

Mixed [autoregressive moving-average (ARMA) models] can also be specified using the state-space framework. This requires keeping both past versions of \(y_t\) and of \(\epsilon_t\) available in the state vector. For instance, a model that combines a second-order autoregressive part and a first-order moving-average part can be specified using the transition equation

\[ \begin{bmatrix} y_t\\ y_{t-1}\\\epsilon_t\end{bmatrix} = \begin{bmatrix}c\\ 0 \\0 \end{bmatrix} + \begin{bmatrix} \phi_1 & \phi_2 & \theta_1\\ 1 & 0 & 0 \\ 0& 0& 0 \end{bmatrix} \begin{bmatrix} y_{t-1}\\ y_{t-2} \\ \epsilon_{t-1}\end{bmatrix} + \begin{bmatrix} 1 & 0 & 0\\ 0 & 0 & 0 \\ 1& 0& 0 \end{bmatrix} \begin{bmatrix} \epsilon_t\\ 0 \\ 0 \end{bmatrix}.\]

The measurement equation is then used to select the first element from the state vector, that is

\[\begin{bmatrix} y_t\end{bmatrix} = \begin{bmatrix}1 & 0 & 0\end{bmatrix}\begin{bmatrix} y_t\\ y_{t-1}\\\epsilon_t\end{bmatrix}. \] Using this approach, an autoregressive moving-average model of any order can be specified within this framework.
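As before, a short numerical check confirms that these matrices reproduce the ARMA(2,1) recursion; the parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
c, phi1, phi2, theta1 = 0.1, 0.5, -0.3, 0.4   # hypothetical ARMA(2,1) parameters
T = 300
eps = rng.normal(size=T)

cvec = np.array([c, 0.0, 0.0])
H = np.array([[phi1, phi2, theta1],
              [1.0,  0.0,  0.0],
              [0.0,  0.0,  0.0]])
G = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])

a = np.zeros(3)                     # state: [y_t, y_{t-1}, eps_t]
y_ss = np.empty(T)
for t in range(T):
    a = cvec + H @ a + G @ np.array([eps[t], 0.0, 0.0])
    y_ss[t] = a[0]                  # measurement selects the first element

# Direct ARMA(2,1) recursion with the same innovations
y = np.zeros(T)
for t in range(T):
    y[t] = (c + phi1 * (y[t-1] if t >= 1 else 0.0)
              + phi2 * (y[t-2] if t >= 2 else 0.0)
              + eps[t] + theta1 * (eps[t-1] if t >= 1 else 0.0))

print(np.allclose(y_ss, y))        # True
```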

3 Multivariate time series models in state-space format

You can also use the state-space model and the Kalman filter to estimate the parameters of a multivariate time series model. Typical models in this respect are multivariate versions of the univariate models discussed above. In this section, you will find a few examples of this.

3.1 Vector autoregressive model

A [vector autoregressive (VAR) model] is based on having a multivariate time series \(y_t\), and regressing this vector on itself at earlier occasions. The first-order VAR model can be very easily represented in state-space format. Suppose we have three observed variables, then the transition equation for a VAR(1) model will be

\[ \begin{bmatrix} y_{1t} \\ y_{2t} \\ y_{3t} \end{bmatrix} = \begin{bmatrix} c_{1} \\ c_{2} \\ c_{3} \end{bmatrix} + \begin{bmatrix} \phi_{11} &\phi_{12} & \phi_{13}\\ \phi_{21} &\phi_{22} & \phi_{23}\\ \phi_{31} &\phi_{32} & \phi_{33}\end{bmatrix} \begin{bmatrix} y_{1t-1} \\ y_{2t-1} \\ y_{3t-1} \end{bmatrix} + \begin{bmatrix} \epsilon_{1t} \\ \epsilon_{2t} \\ \epsilon_{3t} \end{bmatrix}, \] where the first subscript of each \(\phi\) parameter indicates the outcome variable and the second indicates the predictor. For instance, \(\phi_{32}\) represents the regression coefficient when regressing \(y_{3t}\) on \(y_{2t-1}\).

The measurement equation is then based on setting \(y_t=a_t\), through

\[ \begin{bmatrix} y_{1t} \\ y_{2t} \\ y_{3t} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} y_{1t} \\ y_{2t} \\ y_{3t} \end{bmatrix}. \] To specify a second-order VAR model, you need a state vector \(a_{t-1}\) that includes not only the observations at the previous occasion, but also the ones before that. The transition equation then becomes

\[ \begin{bmatrix} y_{1t} \\ y_{2t} \\ y_{3t} \\ y_{1t-1} \\ y_{2t-1} \\ y_{3t-1} \end{bmatrix} = \begin{bmatrix} c_{1} \\ c_{2} \\ c_{3} \\ 0 \\ 0 \\ 0 \end{bmatrix} + \begin{bmatrix} \phi_{11(1)} &\phi_{12(1)} & \phi_{13(1)} & \phi_{11(2)} &\phi_{12(2)} & \phi_{13(2)}\\ \phi_{21(1)} &\phi_{22(1)} & \phi_{23(1)}& \phi_{21(2)} &\phi_{22(2)} & \phi_{23(2)}\\ \phi_{31(1)} &\phi_{32(1)} & \phi_{33(1)} & \phi_{31(2)} &\phi_{32(2)} & \phi_{33(2)}\\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} y_{1t-1} \\ y_{2t-1} \\ y_{3t-1} \\ y_{1t-2} \\ y_{2t-2} \\ y_{3t-2} \end{bmatrix} + \begin{bmatrix} \epsilon_{1t} \\ \epsilon_{2t} \\ \epsilon_{3t} \\ 0 \\ 0 \\ 0 \end{bmatrix}, \] where the subscripts of the \(\phi\) parameters are now extended with the lag in parentheses. For instance, \(\phi_{13(2)}\) represents the regression coefficient when regressing \(y_{1t}\) on \(y_{3t-2}\).

The measurement equation then is

\[ \begin{bmatrix} y_{1t} \\ y_{2t} \\ y_{3t} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} y_{1t} \\ y_{2t} \\ y_{3t} \\ y_{1t-1} \\ y_{2t-1} \\ y_{3t-1} \end{bmatrix}. \] Higher-order VAR models follow the same logic as the examples provided here.
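A convenient way to assemble these block matrices in code is numpy's block function. The sketch below constructs the VAR(2) companion form for three variables and simulates from it; the coefficient matrices are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)
k, T = 3, 200
cvec = np.zeros(k)
Phi1 = 0.4 * np.eye(k) + 0.1       # hypothetical lag-1 coefficients
Phi2 = -0.2 * np.eye(k)            # hypothetical lag-2 coefficients

# Block companion form of the transition equation
H = np.block([[Phi1,      Phi2],
              [np.eye(k), np.zeros((k, k))]])
c_full = np.concatenate([cvec, np.zeros(k)])
S = np.hstack([np.eye(k), np.zeros((k, k))])   # measurement selects y_t

a = np.zeros(2 * k)                 # state: [y_t, y_{t-1}]
y = np.empty((T, k))
for t in range(T):
    z = np.concatenate([rng.normal(size=k), np.zeros(k)])
    a = c_full + H @ a + z          # transition equation
    y[t] = S @ a                    # measurement equation
```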

3.2 Vector moving-average model

A [vector moving-average (VMA) model] is also based on having a vector \(y_t\) that contains multiple observed variables. These form weighted sums of their own current and past innovations, but may also include the past innovations of the other variables. A first-order VMA can be expressed using the transition equation

\[ \begin{bmatrix} y_{1t} \\ y_{2t} \\ y_{3t} \\ \epsilon_{1t} \\ \epsilon_{2t} \\ \epsilon_{3t} \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 & \theta_{11} & \theta_{12} & \theta_{13}\\ 0 & 0 & 0 & \theta_{21} & \theta_{22} & \theta_{23}\\ 0 & 0 & 0 & \theta_{31} & \theta_{32} & \theta_{33}\\ 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} y_{1t-1} \\ y_{2t-1} \\ y_{3t-1} \\ \epsilon_{1t-1} \\ \epsilon_{2t-1} \\ \epsilon_{3t-1} \end{bmatrix} + \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix}\epsilon_{1t} \\ \epsilon_{2t} \\ \epsilon_{3t} \\ 0 \\ 0 \\ 0\end{bmatrix} . \]

The measurement equation is then formulated as

\[ \begin{bmatrix} y_{1t} \\ y_{2t} \\ y_{3t} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} y_{1t} \\ y_{2t} \\ y_{3t} \\ \epsilon_{1t} \\ \epsilon_{2t} \\ \epsilon_{3t} \end{bmatrix}. \]

In a similar way, higher-order VMA models can be specified within the state-space framework.
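For completeness, here is how the VMA(1) matrices above could be assembled in the same block style; the coefficient values are hypothetical, and the resulting matrices can be dropped into the simulation loop of the earlier sketches.

```python
import numpy as np

k = 3
Theta1 = 0.3 * np.eye(k) + 0.1      # hypothetical lag-1 MA coefficients

# Transition matrix: zero except the Theta block acting on lagged innovations
H = np.block([[np.zeros((k, k)), Theta1],
              [np.zeros((k, k)), np.zeros((k, k))]])
# Weight matrix G: routes the current innovations into both halves of the state
G = np.block([[np.eye(k), np.zeros((k, k))],
              [np.eye(k), np.zeros((k, k))]])
S = np.hstack([np.eye(k), np.zeros((k, k))])    # measurement selects y_t
```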

4 Latent time series models in state-space format

When you believe that your measurements contain measurement error, the state-space model can help you to separate the true underlying fluctuations from the noise that is present in the observations. Typically, this is done when you have multiple indicators for an underlying process, using a factor analytic approach. However, it is also possible to have a single indicator that contains measurement error; under certain assumptions and circumstances, it is possible to separate the signal from the noise in this scenario as well.

4.1 Latent autoregressive models

As indicated above, the state-space framework lends itself most naturally to specifying a latent autoregressive model: With the measurement equation you can relate the observed variables to one or more latent variables within the same occasion, and with the transition equation you can regress these latent variables on themselves and on each other at the preceding occasion. This may be recognized as a latent first-order VAR model.

However, it is also possible to have other models at the latent level (cf. Molenaar, 1985). Basically, all the univariate models considered above can be extended with measurement error (either based on a single indicator or on multiple indicators). Moreover, you can consider multivariate latent versions of all these models.

A particular example of the latter is a bivariate first-order autoregressive model, where each variable is measured by a single indicator. In this case, you have two noisy observed time series \(y_{1t}\) and \(y_{2t}\), each associated with a separate latent process \(\tilde{y}_{1t}\) and \(\tilde{y}_{2t}\). The latter are regressed on themselves and each other at the previous occasion, such that the transition equation can be expressed as

\[\begin{bmatrix} \tilde{y}_{1t} \\ \tilde{y}_{2t} \end{bmatrix} = \begin{bmatrix}c_1\\ c_2 \end{bmatrix} + \begin{bmatrix} \phi_{11} & \phi_{12} \\ \phi_{21} & \phi_{22} \end{bmatrix} \begin{bmatrix} \tilde{y}_{1t-1} \\ \tilde{y}_{2t-1}\end{bmatrix} + \begin{bmatrix} \epsilon_{1t}\\ \epsilon_{2t} \end{bmatrix}. \] The accompanying measurement equation then is

\[\begin{bmatrix} y_{1t} \\ y_{2t} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \tilde{y}_{1t} \\ \tilde{y}_{2t}\end{bmatrix} + \begin{bmatrix} e_{1t}\\ e_{2t} \end{bmatrix} = \begin{bmatrix} \tilde{y}_{1t} + e_{1t} \\ \tilde{y}_{2t} + e_{2t}\end{bmatrix}. \]

This model (and extensions with more than two variables) can be of interest to study the reliability of single items.
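The sketch below simulates from this model with hypothetical parameter values: a latent bivariate first-order autoregressive process, observed through one noisy indicator per variable.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 500
cvec = np.zeros(2)
Phi = np.array([[0.5, 0.1],
                [0.2, 0.4]])         # hypothetical auto- and cross-lagged coefficients
Q = np.eye(2)                        # innovation covariance of the latent processes
R = 0.5 * np.eye(2)                  # measurement error covariance

a = np.zeros(2)                      # latent state
y = np.empty((T, 2))
for t in range(T):
    a = cvec + Phi @ a + rng.multivariate_normal(np.zeros(2), Q)   # transition
    y[t] = a + rng.multivariate_normal(np.zeros(2), R)             # noisy measurement
```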

4.2 Other dynamic factor models

There are various latent variable models that are characterized by dynamics. Many of these are based on having a well-known dynamic model at the latent level, such as a first- or higher-order (vector) autoregressive model. These latent processes are then measured by multiple indicators that contain measurement error as well.

An alternative dynamic factor model is what has sometimes been referred to as the white noise dynamic factor model; the name refers to the fact that the latent series in this model is a white noise process, meaning that it is characterized by no relations over time. Instead, the dynamics over time are incorporated through lagged factor loadings (cf. Molenaar, 1985).

To accommodate this model within the state-space framework, you again need to capture past elements of the process in the state vector. Suppose you just want to have lag 0 and lag 1 factor loadings. In this case, your transition equation will be

\[\begin{bmatrix} \eta_{t} \\ \eta_{t-1} \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} \eta_{t-1} \\ \eta_{t-2}\end{bmatrix} + \begin{bmatrix} \eta_{t}\\ 0 \end{bmatrix}, \]

and the measurement equation will be

\[\begin{bmatrix} y_{1t} \\ y_{2t} \\ \dots \\ y_{kt} \end{bmatrix} = \begin{bmatrix} \lambda_{11} & \lambda_{12} \\ \lambda_{21} & \lambda_{22} \\ \dots & \dots\\ \lambda_{k1} & \lambda_{k2} \\\end{bmatrix} \begin{bmatrix} \eta_{t} \\ \eta_{t-1} \end{bmatrix} + \begin{bmatrix} e_{1t}\\ e_{2t} \\ \dots \\ e_{kt} \end{bmatrix}. \]

To ensure identification of the model, the latent white noise process needs to be scaled. This can be done by setting a single factor loading in the matrix \(S\) to 1, or by fixing the variance of \(\eta_t\) in the residual vector to 1.
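The following sketch simulates from a white noise dynamic factor model with four indicators and lag-0 and lag-1 loadings; the factor variance is fixed to 1 for scaling, and all loading values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(8)
k, T = 4, 500
lam0 = np.array([1.0, 0.8, 0.6, 0.7])   # hypothetical lag-0 loadings
lam1 = np.array([0.4, 0.3, 0.5, 0.2])   # hypothetical lag-1 loadings
Lam = np.column_stack([lam0, lam1])      # (k by 2) loading matrix S
sigma_e = 0.5                            # measurement error SD

eta_prev = 0.0
y = np.empty((T, k))
for t in range(T):
    eta = rng.normal()                   # white noise factor: no serial dependence
    state = np.array([eta, eta_prev])    # state: [eta_t, eta_{t-1}]
    y[t] = Lam @ state + rng.normal(0.0, sigma_e, size=k)
    eta_prev = eta                       # transition just shifts eta_t to eta_{t-1}
```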

5 Think more about

The state-space model and Kalman filter can be used to accommodate many other models as well. For instance, it has been shown that exponential smoothing methods are special cases of state-space models (Durbin & Koopman, 2012; Harvey, 1989).

Moreover, there are various ways in which the basic state-space model presented in this article can be extended. These options are described in the article about the Kalman filter, and include having exogenous variables in the model, and allowing for time-varying parameters through smoothly changing parameters and regime-switching (Kim & Nelson, 1999).

6 Takeaway

The state-space model is the basis of the Kalman filter; the latter can be used to estimate underlying latent states based on noisy data, and/or to estimate the parameters of a time series model. To be able to use the Kalman filter for these purposes, you have to specify your model of interest in state-space format. This can be somewhat challenging at times, as it requires you to tune your model to fit within the measurement and transition equations that make up the state-space model. Yet, a wide variety of time series models can be accommodated by the state-space model (Durbin & Koopman, 2012; Kim & Nelson, 1999).

7 Further reading

We have collected various topics for you to read more about below.

Read more: Estimation of time series models
Read more: Univariate time series models
Read more: Multivariate time series models
  • [Vector autoregressive (VAR) models]
  • [Vector moving-average (VMA) models]
  • [Vector autoregressive moving-average (VARMA) model]

Acknowledgments

This work was supported by the European Research Council (ERC) Consolidator Grant awarded to E. L. Hamaker (ERC-2019-COG-865468).

References

Durbin, J., & Koopman, S. J. (2012). Time series analysis by state space methods (2nd ed.). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199641178.001.0001
Harvey, A. C. (1989). Forecasting, structural time series models and the Kalman filter. Cambridge University Press. https://doi.org/10.1017/CBO9781107049994
Kim, C.-J., & Nelson, C. R. (1999). State-space models with regime switching: Classical and Gibbs-sampling approaches with applications. The MIT Press. https://doi.org/10.7551/mitpress/6444.001.0001
Molenaar, P. C. M. (1985). A dynamic factor model for the analysis of multivariate time series. Psychometrika, 50, 181–202. https://doi.org/10.1007/bf02294246

Citation

BibTeX citation:
@article{hamaker2025,
  author = {Hamaker, Ellen L. and Berkhout, Sophie W.},
  title = {State-Space Model},
  journal = {MATILDA},
  number = {2025-07-11},
  date = {2025-07-11},
  url = {https://matilda.fss.uu.nl/articles/state-space-model.html},
  langid = {en}
}
For attribution, please cite this work as:
Hamaker, E. L., & Berkhout, S. W. (2025). State-space model. MATILDA, 2025-07-11. https://matilda.fss.uu.nl/articles/state-space-model.html