A decomposition of a panel of
economic and financial time series
into business and financial cycles

We introduce a multivariate linear Gaussian state space model to extract a business cycle and a financial cycle from a panel of economic and financial time series. The panel consists of quarterly observed variables and includes GDP, credit, credit to GDP, credit to disposable personal income, and residential property prices. The cycles vary stochastically over time and are identified by loadings and phase shifts that are estimated by maximum likelihood. We focus on four large economies: the United States, Germany, France, and the United Kingdom. The cycle lengths are not fixed ex ante but are estimated from the data. Recent contributions to the financial cycle literature are given by Claessens et al. (2011), Borio (2014), Strohsal et al. (2015), Drehmann et al. (2015), and Galati et al. (2015). We refer to Galati et al. (2015) for an elaborate introduction to, and literature review of, the financial cycle.

Multivariate structural time series model
Consider a panel of $p$ time series of length $n$ which we model in a multivariate structural time series framework. Let ${\bf y}_t = (y_{1t},\ldots,y_{pt})'$ be a vector of observations at time $t$ with elements $y_{it}$ for $i = 1,\ldots,p$ and $t = 1,\ldots,n$. To obtain our trend-cycle decomposition, we use the measurement equation \begin{equation} y_{it} = \mu_{it} + \delta_i \psi_{\mathcal{A},t} + \beta_i \psi_{\mathcal{B},t} + \varepsilon_{it}, \qquad \varepsilon_{it} \stackrel{iid}{\sim} {\rm N}(0,\sigma^2_{\varepsilon,i}),\label{eq:components} \end{equation} where $\mu_{it}$ represents a series-specific trend component for the $i$th series, and $\psi_{\kappa,t}$, $\kappa \in \{\mathcal{A},\mathcal{B}\}$, a stochastic cycle that is common to all series. The contribution of each cycle to $y_{it}$ is determined by the loadings $\delta_i \geq 0$ and $\beta_i \geq 0$. The non-negativity restrictions are imposed for identification purposes due to the incorporation of phase shifts of the cycles. The individual disturbance terms $\varepsilon_{it}$ are assumed to be normally distributed and mutually uncorrelated for all time periods. Seasonal components could also be added to the decomposition in \eqref{eq:components}, see Harvey (1989) and Durbin and Koopman (2012). Since we focus on extracting cycles, we do not consider seasonal components here and instead seasonally adjust the time series with the X-12-ARIMA filter before modeling them. Cycles can be seen as deviations from a long-term trend, and often a certain degree of smoothness of the trend is enforced. Therefore, we specify the trend as a local linear trend model with integrated random walk specification, given by \begin{equation} \begin{aligned} \mu_{i,t+1} &= \mu_{it} + \nu_{it}, & \\ \nu_{i,t+1} &= \nu_{it} + \xi_{it}, \qquad & \xi_{it} \stackrel{iid}{\sim} {\rm N}(0, \sigma^2_{\xi,i}), \end{aligned}\label{eq:trend} \end{equation} see Harvey (1989) and Durbin and Koopman (2012) and references therein.
The absence of a disturbance term in the first equation of \eqref{eq:trend} typically leads to a much smoother trend than when such a disturbance term is included. The disturbance $\xi_{it}$ is assumed to be independent across all time periods and across all $p$ series, as well as independent of all other disturbances. The trend specification of \eqref{eq:trend} is equivalent to the model $\Delta^m \mu^{(m)}_{i,t+1} = \xi_{it}$ with $m=2$ and $\Delta$ the first difference operator. Higher order trends that incorporate more smoothness can be modelled by increasing $m$, see Harvey and Trimbur (2003), which leads to the specification $\mu_{it} = \mu^{(m)}_{it}$ with \begin{equation}\label{eq:trend2} \mu^{(j)}_{i,t+1} = \mu^{(j)}_{it} +\mu^{(j-1)}_{it}, \qquad \mu^{(0)}_{it} = \xi_{it}, \end{equation} for $j = m,m-1,\ldots,1$ with the disturbance term $\xi_{it}$ as in \eqref{eq:trend}. The cycles $\psi_{\mathcal{A},t}$ and $\psi_{\mathcal{B},t}$ are modelled as stationary stochastic processes following the trigonometric specification \begin{equation} \begin{pmatrix} \psi_{\kappa,t+1} \\ \psi^*_{\kappa,t+1} \end{pmatrix} = \phi_{\kappa} \begin{bmatrix} \text{cos} \lambda_{\kappa} & \text{sin} \lambda_{\kappa} \\ -\text{sin} \lambda_{\kappa} & \text{cos} \lambda_{\kappa} \end{bmatrix} \begin{pmatrix} \psi_{\kappa,t} \\ \psi^*_{\kappa,t} \end{pmatrix} + \begin{pmatrix} \omega_{\kappa,t} \\ \omega^*_{\kappa,t} \end{pmatrix}, \\ \begin{pmatrix} \omega_{\kappa,t} \\ \omega^*_{\kappa,t} \end{pmatrix} \stackrel{iid}{\sim} {\rm N}\left(0, \begin{bmatrix} \sigma^2_{\omega,\kappa} & 0 \\ 0 & \sigma^2_{\omega,\kappa} \end{bmatrix}\right) \label{eq:cycle} \end{equation} for $\kappa \in \{\mathcal{A},\mathcal{B}\}$, where the frequency of the cycle $\lambda_{\kappa}$ is measured in radians with $0 \leq \lambda_{\kappa} \leq \pi$, leading to a period of the stochastic cycle of $2\pi / \lambda_{\kappa}$.
The disturbances $\omega_{\kappa,t}$ and $\omega^*_{\kappa,t}$ are mutually independent at all time points and independent of all other disturbances in the model. The persistence parameter, or damping factor, $\phi_{\kappa}$ is restricted to the interval $0 < \phi_{\kappa} < 1$ to ensure stationarity. The unconditional variance of the cycle is $\sigma^2_{\psi,\kappa} = \sigma^2_{\omega,\kappa}/(1-\phi_{\kappa}^2)$, which gives the initial distributions $\psi_{\kappa,1} \sim {\rm N}(0, \sigma^2_{\psi,\kappa})$ and $\psi^*_{\kappa,1} \sim {\rm N}(0, \sigma^2_{\psi,\kappa})$, $\kappa \in \{\mathcal{A},\mathcal{B}\}$.
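The trend recursion of \eqref{eq:trend} and the cycle recursion of \eqref{eq:cycle} can be illustrated with a small simulation. The sketch below is a minimal Python illustration for a single series; all parameter values (cycle length, damping factor, disturbance standard deviations) are illustrative assumptions, not estimates from our data.

```python
# Sketch: simulate the integrated random walk trend of eq. (trend) and the
# trigonometric stochastic cycle of eq. (cycle) for one series.
# All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 200                       # number of quarters
lam = 2 * np.pi / 32          # frequency: a 32-quarter (8-year) cycle period
phi = 0.95                    # damping factor, 0 < phi < 1
sigma_omega = 0.1             # cycle disturbance std dev
sigma_xi = 0.01               # trend slope disturbance std dev

# Integrated random walk trend: mu_{t+1} = mu_t + nu_t, nu_{t+1} = nu_t + xi_t
mu = np.zeros(n)
nu = np.zeros(n)
for t in range(n - 1):
    mu[t + 1] = mu[t] + nu[t]
    nu[t + 1] = nu[t] + rng.normal(0, sigma_xi)

# Trigonometric cycle, initialized from its stationary distribution
sigma_psi = sigma_omega / np.sqrt(1 - phi**2)   # unconditional std dev
psi = np.zeros(n)
psi_star = np.zeros(n)
psi[0], psi_star[0] = rng.normal(0, sigma_psi, 2)
R = phi * np.array([[np.cos(lam), np.sin(lam)],
                    [-np.sin(lam), np.cos(lam)]])
for t in range(n - 1):
    psi[t + 1], psi_star[t + 1] = (R @ np.array([psi[t], psi_star[t]])
                                   + rng.normal(0, sigma_omega, 2))

# One observed series with unit loading on the cycle (cf. eq. (components))
y = mu + psi + rng.normal(0, 0.05, n)
```

Note how the damped rotation matrix $\phi_{\kappa} R(\lambda_{\kappa})$ drives the cycle: without disturbances, the state would spiral toward zero at rate $\phi_{\kappa}$ while rotating by $\lambda_{\kappa}$ radians per quarter.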

Phase shifts
In the decomposition of \eqref{eq:components}, all time series in the panel share two common cycle components. If the $i$th time series does not exhibit cyclical behavior, the loadings $\delta_i$ and/or $\beta_i$ can be set to, or estimated as, zero, thereby eliminating the influence of the corresponding cycle. It could also be that the $i$th time series does exhibit a cyclical pattern but that its peaks and troughs are shifted in time compared to the base cycles $\psi_{\mathcal{A},t}$ and $\psi_{\mathcal{B},t}$. If this is the case, loadings can be incorrectly estimated and might even be estimated as not significantly different from zero. To avoid this situation, we incorporate the phase shift methodology of Rünstler (2004), in which the cycle of series $i$ is shifted $\gamma_i$ time periods to the right for $\gamma_i > 0$ or $\gamma_i$ time periods to the left for $\gamma_i < 0$. We apply this methodology to our model by replacing the cycles $\psi_{\mathcal{A},t}$ and $\psi_{\mathcal{B},t}$ in expression \eqref{eq:components} by \begin{equation} \text{cos}(\gamma_i \lambda_{\mathcal{A}})\psi_{\mathcal{A},t} + \text{sin}(\gamma_i \lambda_{\mathcal{A}})\psi^*_{\mathcal{A},t},\\ \text{cos}(\varphi_i \lambda_{\mathcal{B}})\psi_{\mathcal{B},t} + \text{sin}(\varphi_i \lambda_{\mathcal{B}})\psi^*_{\mathcal{B},t},\label{eq:phase} \end{equation} which gives the decomposition \begin{equation} y_{it} = \mu_{it} + \delta_i \left\{\text{cos}(\gamma_i \lambda_{\mathcal{A}})\psi_{\mathcal{A},t} + \text{sin}(\gamma_i \lambda_{\mathcal{A}})\psi^*_{\mathcal{A},t} \right\} + \\ \beta_i \left\{\text{cos}(\varphi_i \lambda_{\mathcal{B}})\psi_{\mathcal{B},t} + \text{sin}(\varphi_i \lambda_{\mathcal{B}})\psi^*_{\mathcal{B},t} \right\} + \varepsilon_{it}, \label{eq:components2} \end{equation} for $i = 1,\ldots,p$ and $t = 1,\ldots,n$, where the phase shift $\gamma_i$ of series $i$ corresponds to cycle $\mathcal{A}$ and $\varphi_i$ to cycle $\mathcal{B}$; see also Valle et al.
(2006) for the phase shift of one common cycle. Due to the periodicity of trigonometric functions, $\gamma_i$ and $\varphi_i$ are restricted to the ranges $-\tfrac12 \pi / \lambda_{\mathcal{A}} < \gamma_i < \tfrac12 \pi / \lambda_{\mathcal{A}}$ and $-\tfrac12 \pi / \lambda_{\mathcal{B}} < \varphi_i < \tfrac12 \pi / \lambda_{\mathcal{B}}$. Just as in Valle et al. (2006), we let the first equation determine the base cycle, which will be associated with the business cycle, by setting $\delta_1 = 1$ and $\gamma_1 = \beta_1 = \varphi_1 = 0$. We obtain \begin{equation}\label{eq:components3} y_{1t} = \mu_{1t} + \psi_{\mathcal{A},t} + \varepsilon_{1t}, \\ \varepsilon_{1t} \stackrel{iid}{\sim} {\rm N}(0,\sigma^2_{\varepsilon,1}), \quad t=1,\ldots,n. \end{equation} The second equation determines the other base cycle, associated with the financial cycle, by setting $\beta_2 = 1$ and $\varphi_2 = 0$. Notice that the loading and phase shift on the business cycle for the second series of the panel can be different from zero. We have,

\begin{equation}\label{eq:components4} y_{2t} = \mu_{2t} + \delta_2 \left\{\text{cos}(\gamma_2 \lambda_{\mathcal{A}})\psi_{\mathcal{A},t} + \text{sin}(\gamma_2 \lambda_{\mathcal{A}})\psi^*_{\mathcal{A},t} \right\} + \psi_{\mathcal{B},t} + \varepsilon_{2t}, \qquad t = 1,\ldots,n. \end{equation}

For convenience, the loadings and phase shifts are collected in the matrices ${\bf \Lambda}$ and ${\bf \Gamma}$ respectively and specified as, \begin{equation}\label{eq:loadings} {\bf \Lambda} = \left[ \begin{array}{cc} 1 & 0 \\ \delta_2 & 1 \\ \delta_3 & \beta_3 \\ \vdots & \vdots \\ \delta_p & \beta_p \end{array} \right], \qquad {\bf \Gamma} = \left[ \begin{array}{cc} 0 & 0 \\ \gamma_2 & 0 \\ \gamma_3 & \varphi_3 \\ \vdots & \vdots \\ \gamma_p & \varphi_p \end{array} \right], \end{equation} where the first column of each matrix is associated with the business cycle and the second column with the financial cycle.
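The effect of the phase shift construction in \eqref{eq:phase} can be checked on a noiseless cycle: setting $\phi_{\kappa} = 1$ and all disturbances to zero in \eqref{eq:cycle} gives $\psi_t = \text{cos}(\lambda t)$ and $\psi^*_t = -\text{sin}(\lambda t)$, so that $\text{cos}(\gamma\lambda)\psi_t + \text{sin}(\gamma\lambda)\psi^*_t = \text{cos}(\lambda(t+\gamma))$, i.e. the base cycle evaluated $\gamma$ periods away. The sketch below verifies this identity numerically; the values of $\lambda$ and $\gamma$ are illustrative.

```python
# Sketch: phase shift of eq. (phase) applied to a noiseless cycle
# (phi = 1, no disturbances). Parameter values are illustrative.
import numpy as np

n = 120
lam = 2 * np.pi / 32      # cycle frequency
gamma = 5.0               # phase shift in periods

# Noiseless trigonometric cycle: iterating the rotation on (1, 0) gives
# psi_t = cos(lam * t), psi*_t = -sin(lam * t)
R = np.array([[np.cos(lam), np.sin(lam)],
              [-np.sin(lam), np.cos(lam)]])
state = np.array([1.0, 0.0])
psi = np.zeros(n)
psi_star = np.zeros(n)
for t in range(n):
    psi[t], psi_star[t] = state
    state = R @ state

# Shifted combination from eq. (phase)
shifted = np.cos(gamma * lam) * psi + np.sin(gamma * lam) * psi_star

# It coincides with the base cycle evaluated gamma periods away
t = np.arange(n)
assert np.allclose(shifted, np.cos(lam * (t + gamma)))
```

With noise and damping the identity no longer holds exactly, but the linear combination of $\psi_t$ and $\psi^*_t$ still produces a cycle whose turning points are displaced relative to the base cycle, which is what the estimated $\gamma_i$ and $\varphi_i$ capture.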
[Figure: financial base cycle for the variable credit, United Kingdom and United States.]

Linear Gaussian state space model
We consider a parametric model for the observed time series ${\bf y}_t$ that is formulated conditionally on a latent $q \times 1$ time-varying parameter vector ${\bf \alpha}_t$, for time index $t=1,\ldots,n$. We are interested in the statistical behavior of the cycles, which are included in the state vector ${\bf \alpha}_t$. A flexible modelling framework for such an analysis is the linear Gaussian state space model, the general form of which is given by \begin{equation} \begin{aligned} & {\bf y}_t = {\bf Z}_t {\bf \alpha}_t + {\bf \varepsilon}_t, \quad {\bf\varepsilon}_t \sim {\rm N}({\bf 0},{\bf H}_t), \\ & {\bf \alpha}_{t+1} ={\bf T}_t {\bf \alpha}_{t} + {\bf \eta}_t, \quad {\bf\eta}_t \sim {\rm N}({\bf 0},{\bf Q}_t), \\ & {\bf \alpha}_1 \sim p({\bf\alpha}_1;{\bf \zeta}), \end{aligned}\label{eq:stsm} \end{equation} for $t=1,\ldots,n$, where the first equation is called the observation equation, with signal ${\bf \theta}_t = {\bf Z}_t {\bf \alpha}_t$, and the second equation is called the state equation, with initial density $p({\bf \alpha}_1;{\bf\zeta})$ in which ${\bf \zeta}$ is a static parameter vector; see, for example, Durbin and Koopman (2012, Part I). Minimum mean square error (MMSE) estimates of ${\bf \alpha}_t$ and MMSE forecasts of ${\bf y}_t$ can be obtained by the Kalman filter and related smoothing methods. The decomposition of ${\bf y}_t$ in \eqref{eq:components2}, with the trend and cycle specifications of \eqref{eq:trend} and \eqref{eq:cycle}, can be put in the state space form of \eqref{eq:stsm} by defining the state vector as

\begin{equation}\label{eq:state} {\bf \alpha}_t = \begin{pmatrix} \mu_{1t} & \nu_{1t} & \ldots & \mu_{pt} & \nu_{pt} & \psi_{\mathcal{A},t} & \psi^*_{\mathcal{A},t} & \psi_{\mathcal{B},t} & \psi^*_{\mathcal{B},t} \end{pmatrix}', \end{equation}

and system matrices specified as \begin{equation}\label{eq:sysmat} \begin{aligned} & {\bf Z}_t = \left({\bf Z}_{[\mu]}, {\bf Z}_{[\psi]}\right), \\ & {\bf H}_t = \text{diag}\left[\sigma^2_{\varepsilon,1}, \ldots, \sigma^2_{\varepsilon,p} \right], \\ & {\bf T}_t = \text{diag}\left[{\bf T}_{[\mu]}, {\bf T}_{[\psi_{\mathcal{A}}]}, {\bf T}_{[\psi_{\mathcal{B}}]}\right], \\ & {\bf Q}_t = \text{diag}\left[{\bf Q}_{[\mu]}, {\bf Q}_{[\psi_{\mathcal{A}}]}, {\bf Q}_{[\psi_{\mathcal{B}}]}\right], \end{aligned} \end{equation} where $\left(A, B \right)$ denotes horizontal concatenation of the matrices $A$ and $B$, and diag$\left[A, B \right]$ denotes a block diagonal matrix with $A$ and $B$ on the diagonal. Let $C_m$ denote an $m \times m$ matrix of zeros, except for the superdiagonal (the diagonal above the main diagonal, i.e. elements $(i, i+1)$ for $i=1,\ldots,m-1$), whose elements equal one, with $m$ the order of the trend in \eqref{eq:trend2}. We further specify

\begin{equation}\label{eq:sysmat2} \begin{aligned} {\bf Z}_{[\mu]} &= \left[I_p \otimes \begin{pmatrix}1 & {\bf 0}_{1 \times (m-1)} \end{pmatrix}\right], \\[1em] {\bf Z}_{[\psi]} &= \left[ \begin{array}{cccc} 1 & 0 & 0 & 0\\ \delta_2 \text{cos}(\gamma_2 \lambda_{\mathcal{A}}) & \delta_2 \text{sin}(\gamma_2 \lambda_{\mathcal{A}}) & 1 & 0\\ \delta_3 \text{cos}(\gamma_3 \lambda_{\mathcal{A}}) & \delta_3 \text{sin}(\gamma_3 \lambda_{\mathcal{A}}) & \beta_3 \text{cos}(\varphi_3 \lambda_{\mathcal{B}}) & \beta_3 \text{sin}(\varphi_3 \lambda_{\mathcal{B}})\\ \vdots & \vdots & \vdots & \vdots \\ \delta_p \text{cos}(\gamma_p \lambda_{\mathcal{A}}) & \delta_p \text{sin}(\gamma_p \lambda_{\mathcal{A}}) & \beta_p \text{cos}(\varphi_p \lambda_{\mathcal{B}}) & \beta_p \text{sin}(\varphi_p \lambda_{\mathcal{B}}) \end{array} \right], \\[1em] {\bf T}_{[\mu]} &= \left[I_p \otimes \left(I_m + C_m \right) \right], \\[1em] {\bf T}_{[\psi_{\mathcal{A}}]} &= \phi_{\mathcal{A}} \begin{bmatrix} \text{cos} \lambda_{\mathcal{A}} & \text{sin} \lambda_{\mathcal{A}} \\ -\text{sin} \lambda_{\mathcal{A}} & \text{cos} \lambda_{\mathcal{A}} \end{bmatrix}, \qquad {\bf T}_{[\psi_{\mathcal{B}}]} = \phi_{\mathcal{B}} \begin{bmatrix} \text{cos} \lambda_{\mathcal{B}} & \text{sin} \lambda_{\mathcal{B}} \\ -\text{sin} \lambda_{\mathcal{B}} & \text{cos} \lambda_{\mathcal{B}} \end{bmatrix}, \\[1em] {\bf Q}_{[\mu]} &= \text{diag} \left[ \text{diag} \begin{bmatrix} {\bf 0}_{(m-1) \times (m-1)}, \sigma^2_{\xi,1} \end{bmatrix},\ldots, \text{diag} \begin{bmatrix} {\bf 0}_{(m-1) \times (m-1)}, \sigma^2_{\xi,p} \end{bmatrix} \right], \\[1em] {\bf Q}_{[\psi_{\mathcal{A}}]} &= \left[ \begin{array}{cc} \sigma^2_{\omega,\mathcal{A}} & 0 \\ 0 & \sigma^2_{\omega,\mathcal{A}} \end{array} \right], \qquad {\bf Q}_{[\psi_{\mathcal{B}}]} = \left[ \begin{array}{cc} \sigma^2_{\omega,\mathcal{B}} & 0 \\ 0 & \sigma^2_{\omega,\mathcal{B}} \end{array} \right], \end{aligned} \end{equation}

with $I_m$ being the identity matrix of dimension $m$.
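As an illustration, the system matrices of \eqref{eq:sysmat} can be assembled with standard linear algebra routines. The sketch below does so for a small panel ($p=3$, trend order $m=2$); the helper names (`build_system`, `block_diag`, `rotation`) and all parameter values are illustrative assumptions, not part of the model definition.

```python
# Sketch: assemble Z, T, H, Q for the state space form with p = 3 series
# and trend order m = 2. Helper names and parameter values are illustrative.
import numpy as np

def block_diag(*mats):
    """Stack matrices block-diagonally (NumPy-only substitute for
    scipy.linalg.block_diag)."""
    mats = [np.atleast_2d(M) for M in mats]
    out = np.zeros((sum(M.shape[0] for M in mats),
                    sum(M.shape[1] for M in mats)))
    r = c = 0
    for M in mats:
        out[r:r + M.shape[0], c:c + M.shape[1]] = M
        r += M.shape[0]
        c += M.shape[1]
    return out

def rotation(phi, lam):
    """Damped rotation matrix T_[psi] from eq. (cycle)."""
    return phi * np.array([[np.cos(lam), np.sin(lam)],
                           [-np.sin(lam), np.cos(lam)]])

def build_system(delta, beta, gamma, varphi, lam_A, lam_B, phi_A, phi_B,
                 sig_eps2, sig_xi2, sig_om2_A, sig_om2_B, m=2):
    p = len(delta)
    # Z_[mu]: each series loads on its own trend level only
    Z_mu = np.kron(np.eye(p), np.r_[1.0, np.zeros(m - 1)][None, :])
    # Z_[psi]: loadings combined with phase shifts
    Z_psi = np.column_stack([
        delta * np.cos(gamma * lam_A), delta * np.sin(gamma * lam_A),
        beta * np.cos(varphi * lam_B), beta * np.sin(varphi * lam_B)])
    Z = np.hstack([Z_mu, Z_psi])
    # T_[mu] = I_p kron (I_m + C_m), C_m with ones on the superdiagonal
    C = np.diag(np.ones(m - 1), k=1)
    T = block_diag(np.kron(np.eye(p), np.eye(m) + C),
                   rotation(phi_A, lam_A), rotation(phi_B, lam_B))
    H = np.diag(sig_eps2)
    Q_mu = block_diag(*[np.diag(np.r_[np.zeros(m - 1), s]) for s in sig_xi2])
    Q = block_diag(Q_mu, sig_om2_A * np.eye(2), sig_om2_B * np.eye(2))
    return Z, T, H, Q

# Identification restrictions delta_1 = 1, gamma_1 = beta_1 = varphi_1 = 0
# and beta_2 = 1, varphi_2 = 0 are imposed directly in the inputs.
Z, T, H, Q = build_system(
    delta=np.array([1.0, 0.8, 0.5]), beta=np.array([0.0, 1.0, 0.7]),
    gamma=np.array([0.0, 1.5, -2.0]), varphi=np.array([0.0, 0.0, 3.0]),
    lam_A=2 * np.pi / 32, lam_B=2 * np.pi / 64, phi_A=0.95, phi_B=0.97,
    sig_eps2=np.array([0.1, 0.1, 0.1]), sig_xi2=np.array([1e-4] * 3),
    sig_om2_A=0.05, sig_om2_B=0.02)
```

With $p=3$ and $m=2$ the state dimension is $q = pm + 4 = 10$, so ${\bf Z}$ is $3 \times 10$ while ${\bf T}$ and ${\bf Q}$ are $10 \times 10$; these matrices can then be passed to any Kalman filter implementation to evaluate the likelihood.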