gEconpy.model.model.Model.linearize_model#
- Model.linearize_model(order=1, log_linearize=True, not_loglin_variables=None, steady_state=None, loglin_negative_ss=False, steady_state_kwargs=None, verbose=True, **parameter_updates)#
Linearize the model around the deterministic steady state.
- Parameters:
- order: int, default: 1
Order of the Taylor expansion to use. Currently only first-order linearization is supported.
- log_linearize: bool, default: True
If True, all variables are log-linearized. If False, all variables are left in levels.
- not_loglin_variables: list of strings, optional
List of variables not to log-linearize. These variables will be left in levels, while all others will be log-linearized. Ignored if log_linearize is False.
- steady_state: dict, optional
Dictionary of steady-state values. If provided, these values will be used to linearize the model. If not provided, the steady state will be solved for using the steady_state method.
- loglin_negative_ss: bool, default: False
If True, variables with negative steady-state values will be log-linearized. While technically possible, this is not recommended, as it can lead to incorrect results. Ignored if log_linearize is False.
- steady_state_kwargs: dict, optional
Keyword arguments passed to the steady_state method. Ignored if a steady-state solution is provided.
- verbose: bool, default: True
Flag indicating whether to print the linearization results to the terminal.
- parameter_updates: dict
New parameter values at which to linearize the model. Unspecified values will be taken from the initial values set in the GCN file.
Warning
If a steady state is provided, these values will not be used to update that solution! This can lead to an inconsistent linearization. The user is responsible for ensuring consistency in this case.
- Returns:
- A:
np.ndarray Jacobian matrix of the model with respect to \(x_{t+1}\) evaluated at the steady state, right-multiplied by the diagonal matrix \(T\).
- B:
np.ndarray Jacobian matrix of the model with respect to \(x_t\) evaluated at the steady state, right-multiplied by the diagonal matrix \(T\).
- C:
np.ndarray Jacobian matrix of the model with respect to \(x_{t-1}\) evaluated at the steady state, right-multiplied by the diagonal matrix \(T\).
- D:
np.ndarray Jacobian matrix of the model with respect to \(\varepsilon_t\) evaluated at the steady state.
Examples
Given a DSGE model of the form:
\[F(x_{t+1}, x_t, x_{t-1}, \varepsilon_t) = 0\]
The “solution” to the model would be a policy function \(g(x_t, \varepsilon_t)\), such that:
\[x_{t+1} = g(x_t, \varepsilon_t)\]
With the exception of toy models, this policy function is not available in closed form. Instead, the model is linearized around the deterministic steady state, which is a fixed point of the system of equations. The linear approximation to the model is then used to approximate the policy function. Let \(\bar{x}\) denote the deterministic steady state, such that:
\[F(\bar{x}, \bar{x}, \bar{x}, 0) = 0.\]
A first-order Taylor expansion about \((\bar{x}, \bar{x}, \bar{x}, 0)\) yields
\[A (x_{t+1} - \bar{x}) + B (x_t - \bar{x}) + C (x_{t-1} - \bar{x}) + D \varepsilon_t = 0,\]
where the Jacobian matrices evaluated at the steady state are
\[A = \left. \frac{\partial F}{\partial x_{t+1}} \right|_{(\bar{x},\bar{x},\bar{x},0)}, \quad B = \left. \frac{\partial F}{\partial x_t} \right|_{(\bar{x},\bar{x},\bar{x},0)}, \quad C = \left. \frac{\partial F}{\partial x_{t-1}} \right|_{(\bar{x},\bar{x},\bar{x},0)}, \quad D = \left. \frac{\partial F}{\partial \varepsilon_t} \right|_{(\bar{x},\bar{x},\bar{x},0)}\]
It is common to perform a change of variables to log-linearize the model. Define a log-state vector, \(\tilde{x}_t = \log(x_t)\), with steady state \(\tilde{x}_{ss} = \log(\bar{x})\). We get back to the original variables by exponentiating the log-state vector.
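As a concrete check of these definitions, the Jacobians of a toy one-equation model can be recovered by finite differences. The model below is purely hypothetical (it is not produced by gEconpy); since it is linear, central differences recover the coefficients exactly:

```python
import numpy as np

# Hypothetical one-equation "model": F is linear, so its Jacobians
# at the steady state are just the coefficients a, b, c, d.
a, b, c, d = 0.9, -1.0, 0.1, 0.5

def F(x_next, x, x_lag, eps):
    return a * x_next + b * x + c * x_lag + d * eps

x_bar = 0.0  # deterministic steady state: F(0, 0, 0, 0) = 0
h = 1e-6     # finite-difference step

# Central differences approximate each partial derivative of F
# evaluated at (x_bar, x_bar, x_bar, 0)
A = (F(x_bar + h, x_bar, x_bar, 0) - F(x_bar - h, x_bar, x_bar, 0)) / (2 * h)
B = (F(x_bar, x_bar + h, x_bar, 0) - F(x_bar, x_bar - h, x_bar, 0)) / (2 * h)
C = (F(x_bar, x_bar, x_bar + h, 0) - F(x_bar, x_bar, x_bar - h, 0)) / (2 * h)
D = (F(x_bar, x_bar, x_bar, h) - F(x_bar, x_bar, x_bar, -h)) / (2 * h)

print(A, B, C, D)  # each approximately equal to a, b, c, d
```

For a nonlinear \(F\), the same construction yields the first-order approximation discussed above, rather than an exact representation.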
\[F(\exp(\tilde{x}_{t+1}), \exp(\tilde{x}_t), \exp(\tilde{x}_{t-1}), \varepsilon_t) = 0\]
Taking derivatives with respect to \(\tilde{x}_t\), the linearized model is then:
\[ A \exp(\tilde{x}_{ss}) (\tilde{x}_{t+1} - \tilde{x}_{ss}) + B \exp(\tilde{x}_{ss}) (\tilde{x}_t - \tilde{x}_{ss}) + C \exp(\tilde{x}_{ss}) (\tilde{x}_{t-1} - \tilde{x}_{ss}) + D \varepsilon_t = 0 \]
Note that \(\tilde{x} - \tilde{x}_{ss} = \log(x) - \log(\bar{x}) = \log \left( \frac{x}{\bar{x}} \right)\) is the approximate percent deviation of the variable from its steady state.
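The approximation in this note is easy to verify numerically. The steady-state value and deviations below are illustrative, not taken from any particular model:

```python
import numpy as np

x_bar = 2.0                      # hypothetical steady-state value
x = np.array([2.02, 1.90, 2.20]) # values near the steady state

log_dev = np.log(x / x_bar)      # log deviation: log(x) - log(x_bar)
pct_dev = (x - x_bar) / x_bar    # exact percent deviation

# For small deviations the two agree to first order
print(log_dev)
print(pct_dev)
```

The agreement degrades as deviations grow, which is why log-linear approximations are only reliable in a neighborhood of the steady state.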
The above derivation holds on a variable-by-variable basis. Some variables can be logged and others left in levels; all that is required is right-multiplication by a diagonal matrix of the form:
\[T = \text{Diagonal}(\{h(x_1), h(x_2), \ldots, h(x_n)\})\]
where \(h(x_i) = 1\) if the variable is left in levels, and \(h(x_i) = \exp(\tilde{x}_{i,ss})\) if the variable is logged. This function returns the matrices \(AT\), \(BT\), \(CT\), and \(D\).
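Since \(\exp(\tilde{x}_{ss}) = \bar{x}\) for a logged variable, \(T\) can be built directly from the steady-state levels. A minimal numpy sketch, with made-up names (`x_ss`, `log_linearize`) and steady-state values chosen for illustration:

```python
import numpy as np

# Hypothetical 3-variable setup: variables 0 and 2 are logged,
# variable 1 (which has a negative steady state) stays in levels
x_ss = np.array([3.0, -0.5, 1.2])
log_linearize = np.array([True, False, True])

# h(x_i) = steady-state level if logged, 1.0 if left in levels
T = np.diag(np.where(log_linearize, x_ss, 1.0))

# Right-multiplying a Jacobian by T rescales its columns;
# np.eye stands in for a real model's Jacobian here.
A = np.eye(3)
AT = A @ T

print(np.diag(T))  # diagonal entries 3.0, 1.0, 1.2
```

This mirrors how the returned matrices \(AT\), \(BT\), and \(CT\) differ from the raw Jacobians only in the columns of the logged variables; \(D\) is never rescaled because the shocks \(\varepsilon_t\) are not logged.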