gEconpy.model.model.Model.solve_model#
- Model.solve_model(solver='cycle_reduction', log_linearize=True, not_loglin_variables=None, order=1, loglin_negative_ss=False, steady_state=None, steady_state_kwargs=None, tol=1e-08, max_iter=1000, verbose=True, on_failure='error', **parameter_updates)#
Solve for the linear approximation to the policy function via perturbation.
- Parameters:
- solver: str, default: ‘cycle_reduction’
Name of the algorithm used to compute the linear solution. Currently “cycle_reduction” and “gensys” are supported. Following Dynare, cycle_reduction is the default, but note that gEcon uses gensys.
- log_linearize: bool, default: True
Whether to log-linearize the model. If False, the model will be solved in levels.
- not_loglin_variables: list of strings, optional
Variables to exclude from log-linearization when solving the model. Variables with steady-state values close to zero (or negative) will be automatically excluded. Ignored if log_linearize is False.
- order: int, default: 1
Order of the Taylor expansion used to solve the model. Currently only first-order approximation is supported.
- steady_state: dict, optional
Dictionary of steady-state solutions. If not provided, the steady state will be solved for using the steady_state method.
- steady_state_kwargs: dict, optional
Keyword arguments passed to the steady_state method. Ignored if a steady-state solution is provided via the steady_state argument. Default is None.
- loglin_negative_ss: bool, default: False
Whether to force log-linearization of variables with a negative steady state. This is impossible in principle (how can \(\exp(x_{ss})\) be negative?), but can still be done; see the docstring for perturbation.linearize_model() for details. Use with caution, as results will not be correct. Ignored if log_linearize is False.
- tol: float, default: 1e-8
Desired level of floating point accuracy in the solution
- max_iter: int, default: 1000
Maximum number of cycle_reduction iterations. Not used if solver is ‘gensys’.
- verbose: bool, default: True
Flag indicating whether to print solver results to the terminal
- on_failure: str, one of [‘error’, ‘ignore’], default: ‘error’
Instructions on what to do if the algorithm fails to find a linearized policy matrix. “error” will raise an error, while “ignore” will return None. “ignore” is useful when repeatedly solving the model, e.g. when sampling.
- parameter_updates: dict
New parameter values at which to solve the model. Unspecified values will be taken from the initial values set in the GCN file.
- Returns:
- T: np.ndarray, optional
Transition matrix, approximated to the requested order. Represents the policy function, governing the agent’s optimal state-conditional actions. If the solver fails, None is returned instead.
- R: np.ndarray, optional
Selection matrix, approximated to the requested order. Represents the state- and agent-conditional transmission of stochastic shocks through the economy. If the solver fails, None is returned instead.
Examples
This method solves the model by linearizing it around the deterministic steady state, and then solving for the policy function using a perturbation method. We begin with a model defined as a function of the form:
\[ \mathbb{E} \left [ F(x_{t+1}, x_t, x_{t-1}, \varepsilon_t) \right ] = 0 \]
The linear approximation is then given by the matrices \(A\), \(B\), \(C\), and \(D\), as:
\[ A \hat{x}_{t+1} + B \hat{x}_t + C \hat{x}_{t-1} + D \varepsilon_t = 0 \]
where \(\hat{x}_t = x_t - \bar{x}\) is the deviation of the state vector from its steady state (again, potentially in logs). A solution to the model seeks a function:
\[ x_t = g(x_{t-1}, \varepsilon_t) \]
This implies that \(x_{t+1} = g(x_t, \varepsilon_{t+1})\), allowing us to write the model as:
\[ F_g(x_{t-1}, \varepsilon_t, \varepsilon_{t+1}) = f(g(g(x_{t-1}, \varepsilon_t), \varepsilon_{t+1}), g(x_{t-1}, \varepsilon_t), x_{t-1}, \varepsilon_t) = 0 \]
To lighten notation, define:
\[ u = \varepsilon_t, \quad u_+ = \varepsilon_{t+1}, \quad \hat{x} = x_{t-1} - \bar{x} \\ f_{x_+} = \left. \frac{\partial F_g}{\partial x_{t+1}} \right |_{\bar{x}, \bar{x}, \bar{x}, 0}, \quad f_x = \left. \frac{\partial F_g}{\partial x_t} \right |_{\bar{x}, \bar{x}, \bar{x}, 0}, \\ f_{x_-} = \left. \frac{\partial F_g}{\partial x_{t-1}} \right |_{\bar{x}, \bar{x}, \bar{x}, 0}, \quad f_u = \left. \frac{\partial F_g}{\partial u} \right |_{\bar{x}, \bar{x}, \bar{x}, 0} \\ g_x = \left. \frac{\partial g}{\partial x_{t-1}} \right |_{\bar{x}, 0}, \quad g_u = \left. \frac{\partial g}{\partial \varepsilon_t} \right |_{\bar{x}, 0} \]
Under this new notation, the system is:
\[ F_g(x_-, u, u_+) = f(g(g(x_-, u), u_+), g(x_-, u), x_-, u) = 0 \]
The function \(g\) is unknown, but is implicitly defined by this expression, and can be approximated by a first-order Taylor expansion around the steady state. The linearized system is then:
\[ 0 \approx F_g(x_-, u, u_+) = f_{x_+} (g_x (g_x \hat{x} + g_u u) + g_u u_+) + f_x (g_x \hat{x} + g_u u) + f_{x_-} \hat{x} + f_u u \]
The Jacobian matrices \(f_{x_+}\), \(f_x\), \(f_{x_-}\), and \(f_u\) are the matrices \(A\), \(B\), \(C\), and \(D\) respectively, evaluated at the steady state, and are thus known. The task is then to solve for the unknown matrices \(g_x\) and \(g_u\), which give a linear approximation to the optimal policy function.
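To make these objects concrete, consider a hypothetical scalar model (values invented for illustration). With one variable, the Jacobians are plain numbers, and the two conditions derived below reduce to an ordinary quadratic in \(g_x\) and a linear equation in \(g_u\):

```python
import numpy as np

# Hypothetical scalar example (values chosen for illustration): with one
# variable, the Jacobians f_{x+}, f_x, f_{x-}, f_u are plain numbers.
f_xp, f_x, f_xm, f_u = 1.0, -2.5, 1.0, 1.0

# Roots of f_xp * g**2 + f_x * g + f_xm = 0 are 2.0 (unstable) and 0.5 (stable)
roots = np.roots([f_xp, f_x, f_xm])
g_x = roots[np.abs(roots) < 1.0][0]  # keep the unique stable root

# g_u then follows from the linear condition: g_u = -(f_xp*g_x + f_x)^{-1} f_u
g_u = -f_u / (f_xp * g_x + f_x)
print(g_x, g_u)  # 0.5 0.5
```

Any first-order solver must, in effect, reproduce this calculation with matrices in place of scalars; selecting the stable root is exactly what the machinery below generalizes.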
Take expectations, and impose that \(\mathbb{E}_t[u_+] = 0\):
\begin{align} 0 \approx {} & f_{x_+} (g_x(g_x \hat{x} + g_u u) + g_u \mathbb{E}_t[u_+]) + f_x (g_x \hat{x} + g_u u) + f_{x_-} \hat{x} + f_u u \\ \approx {} & (f_{x_+} g_x g_x + f_x g_x + f_{x_-})\hat{x} + (f_{x_+} g_x g_u + f_x g_u + f_u) u \end{align}
For the system to equal zero for arbitrary \(\hat{x}\) and \(u\), both coefficient matrices must be zero, which gives us two equations in the unknowns \(g_x\) and \(g_u\):
\begin{align} (f_{x_+} g_x g_x + f_x g_x + f_{x_-}) \hat{x} &= 0 \\ (f_{x_+} g_x g_u + f_x g_u + f_u) u &= 0 \end{align}
Assuming \(g_x\) has been solved for, the second equation can be solved directly for \(g_u\), giving:
\[ g_u = -(f_{x_+} g_x + f_x)^{-1} f_u \]
The first equation, on the other hand, is quadratic in \(g_x\), and cannot be solved directly. Instead, we stack the system so that the quadratic becomes a linear pencil in an augmented state:
\begin{align} \begin{bmatrix} 0 & f_{x_+} \\ I & 0 \end{bmatrix} \begin{bmatrix} g_x \\ g_x g_x \end{bmatrix} \hat{x} &= \begin{bmatrix} -f_{x_-} & -f_x \\ 0 & I \end{bmatrix} \begin{bmatrix} I \\ g_x \end{bmatrix} \hat{x} \\ D \begin{bmatrix} I \\ g_x \end{bmatrix} g_x \hat{x} &= E \begin{bmatrix} I \\ g_x \end{bmatrix} \hat{x} \\ Q T Z \begin{bmatrix} I \\ g_x \end{bmatrix} g_x \hat{x} &= Q S Z \begin{bmatrix} I \\ g_x \end{bmatrix} \hat{x} \\ T Z \begin{bmatrix} I \\ g_x \end{bmatrix} g_x \hat{x} &= S Z \begin{bmatrix} I \\ g_x \end{bmatrix} \hat{x} \end{align}
The last two lines use the QZ decomposition of the pencil \(<D, E>\) into the upper triangular matrix \(T\), the quasi-upper triangular matrix \(S\), and the orthogonal matrices \(Q\) and \(Z\). \(T\) and \(S\) have structure that can be exploited: in particular, they are arranged so that the generalized eigenvalues of the pencil \(<D, E>\) are sorted in modulus from smallest (stable) to largest (unstable).
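This step can be sketched with SciPy's `ordqz`, using the same hypothetical scalar example, so the pencil matrices are \(2 \times 2\). One caveat on conventions: SciPy factors the pair as \(D = Q T Z^\top\) and \(E = Q S Z^\top\) (real output), so the \(Z\) of the derivation corresponds to SciPy's \(Z^\top\). The companion arrangement of \(D\) and \(E\) below is one standard choice, not necessarily the one gEconpy uses internally:

```python
import numpy as np
from scipy.linalg import ordqz

# Hypothetical scalar example: f_{x+} = 1, f_x = -2.5, f_{x-} = 1, so the
# generalized eigenvalues of the pencil are 0.5 (stable) and 2.0 (unstable).
f_xp, f_x, f_xm = 1.0, -2.5, 1.0
n = 1  # number of model variables

# One standard companion arrangement of the pencil <D, E>
D = np.array([[0.0, f_xp], [1.0, 0.0]])
E = np.array([[-f_xm, -f_x], [0.0, 1.0]])

# SciPy factors E = Q @ S @ Z.T and D = Q @ T @ Z.T (real output), with
# generalized eigenvalues alpha/beta solving det(E - lambda*D) = 0;
# sort='iuc' places the stable ones (inside the unit circle) first.
S, T, alpha, beta, Q, Z = ordqz(E, D, sort="iuc", output="real")

# The derivation's Z is SciPy's Z.T; stability requires Z21 + Z22 @ g_x = 0.
ZT = Z.T
g_x = -np.linalg.solve(ZT[n:, n:], ZT[n:, :n])
print(np.round(g_x, 6))  # the stable solvent, ≈ 0.5
```

With `sort="iuc"`, the unstable eigenvalues occupy the bottom-right blocks, which is what makes the partition used in the next step valid.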
Partitioning the rows of the matrices by eigenvalue stability, and the columns by the size of \(g_x\), we get:
\[ \begin{bmatrix} T_{11} & T_{12} \\ 0 & T_{22} \end{bmatrix} \begin{bmatrix} Z_{11} & Z_{12} \\ Z_{21} & Z_{22} \end{bmatrix} \begin{bmatrix} I \\ g_x \end{bmatrix} g_x \hat{x} = \begin{bmatrix} S_{11} & S_{12} \\ 0 & S_{22} \end{bmatrix} \begin{bmatrix} Z_{11} & Z_{12} \\ Z_{21} & Z_{22} \end{bmatrix} \begin{bmatrix} I \\ g_x \end{bmatrix} \hat{x} \]
For the system to be stable, the unstable block must carry no weight, so we require that:
\[ Z_{21} + Z_{22} g_x = 0 \]And thus:
\[ g_x = -Z_{22}^{-1} Z_{21} \]
This requires that \(Z_{22}\) be square and invertible, which are the rank and stability conditions of Blanchard and Kahn (1980). If these conditions are not met, the model is indeterminate, and a solution is not possible.
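As an end-to-end sketch (not gEconpy's implementation), the snippet below solves the quadratic for \(g_x\) by linear time iteration (Rendahl, 2017), a simple fixed-point alternative to cycle_reduction and gensys that converges to the same stable solvent when the Blanchard and Kahn conditions hold, and then recovers \(g_u\) from the linear condition. The matrices are invented for illustration:

```python
import numpy as np

def solve_policy(A, B, C, D, tol=1e-12, max_iter=1000):
    """Find the stable solvent g_x of A g_x^2 + B g_x + C = 0, then g_u.

    Linear time iteration: iterate g <- -(B + A g)^{-1} C; a fixed point
    satisfies A g^2 + B g + C = 0, and the iteration converges to the
    stable solvent under the Blanchard-Kahn conditions.
    """
    n = A.shape[0]
    g_x = np.zeros((n, n))
    for _ in range(max_iter):
        g_next = -np.linalg.solve(B + A @ g_x, C)
        if np.max(np.abs(g_next - g_x)) < tol:
            g_x = g_next
            break
        g_x = g_next
    # g_u from the linear condition: g_u = -(A g_x + B)^{-1} D
    g_u = -np.linalg.solve(A @ g_x + B, D)
    return g_x, g_u

# Hypothetical 2-variable example: two decoupled equations whose stable
# roots are 0.5 and 0.3 (the unstable roots, 2.0 and 3.0, are discarded)
A = np.eye(2)
B = np.diag([-2.5, -3.3])
C = np.diag([1.0, 0.9])
D = np.eye(2)

g_x, g_u = solve_policy(A, B, C, D)
print(np.round(np.diag(g_x), 6))  # stable solvent: diagonal ≈ [0.5, 0.3]
print(np.max(np.abs(A @ g_x @ g_x + B @ g_x + C)))  # quadratic residual ≈ 0
```

Here \(g_x\) and \(g_u\) play the roles of the T and R matrices returned by solve_model.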