The model file
Conventions
A model file contains a list of commands and of blocks. Each command and each element of a block is terminated by a semicolon (;). Blocks are terminated by ``end;``.
If Dynare encounters an unknown expression at the beginning of a line or after a semicolon, it will parse the rest of that line as native MATLAB code, even if there are more statements separated by semicolons present. To prevent cryptic error messages, it is strongly recommended to always only put one statement/command into each line and start a new line after each semicolon. [1]
Lines of code can be commented out line by line or as a block. Single-line comments begin with ``//`` and stop at the end of the line. Multiline comments are introduced by ``/*`` and terminated by ``*/``.
Examples

::

    // This is a single line comment
    var x; // This is a comment about x
    /* This is another inline comment about alpha */ alpha = 0.3;
    /* This comment
       is spanning two lines. */
Note that these comment marks should not be used in native MATLAB code regions, where ``%`` should be preferred instead to introduce a comment. In a ``verbatim`` block, see :ref:`verbatim`, using them would result in a crash, since ``//`` is not a valid MATLAB statement.
Most Dynare commands have arguments and several accept options, indicated in parentheses after the command keyword. Multiple options are separated by commas.
In the description of Dynare commands, the following conventions are observed:
- Optional arguments or options are indicated between square brackets: ‘[]’;
- Repeated arguments are indicated by ellipses: “...”;
- Mutually exclusive arguments are separated by vertical bars: ‘|’;
- INTEGER indicates an integer number;
- INTEGER_VECTOR indicates a vector of integer numbers separated by spaces, enclosed by square brackets;
- DOUBLE indicates a double precision number. The following syntaxes are valid: ``1.1e3``, ``1.1E3``, ``1.1d3``, ``1.1D3``. In some places, infinite values ``Inf`` and ``-Inf`` are also allowed;
- NUMERICAL_VECTOR indicates a vector of numbers separated by spaces, enclosed by square brackets;
- EXPRESSION indicates a mathematical expression valid outside the model description (see :ref:`expr`);
- MODEL_EXPRESSION (sometimes MODEL_EXP) indicates a mathematical expression valid in the model description (see :ref:`expr` and :ref:`model-decl`);
- MACRO_EXPRESSION designates an expression of the macro processor (see :ref:`macro-exp`);
- VARIABLE_NAME (sometimes VAR_NAME) indicates a variable name starting with an alphabetical character; it cannot contain ‘()+-*/^=!;:@#.’ or accented characters;
- PARAMETER_NAME (sometimes PARAM_NAME) indicates a parameter name starting with an alphabetical character; it cannot contain ‘()+-*/^=!;:@#.’ or accented characters;
- LATEX_NAME (sometimes TEX_NAME) indicates a valid LaTeX expression in math mode (not including the dollar signs);
- FUNCTION_NAME indicates a valid MATLAB function name;
- FILENAME indicates a filename valid in the underlying operating system; it is necessary to put it between quotes when specifying the extension or if the filename contains a non-alphanumeric character;
- QUOTED_STRING indicates an arbitrary string enclosed between (single) quotes.
Variable declarations
While Dynare allows the user to choose their own variable names, there are some restrictions to be kept in mind. First, variables and parameters must not have the same name as Dynare commands or built-in functions. In this respect, Dynare is not case-sensitive. For example, do not use ``Ln`` or ``Sigma_e`` to name your variable. Not conforming to this rule might yield hard-to-debug error messages or crashes. Second, to minimize interference with MATLAB or Octave functions that may be called by Dynare or user-defined steady state files, it is recommended to avoid using the names of MATLAB functions. In particular when working with steady state files, do not use correctly-spelled Greek names like ``alpha``, because there are MATLAB functions of the same name. Rather go for ``alppha`` or ``alph``. Lastly, please do not name a variable or parameter ``i``. This may interfere with the imaginary number ``i`` and the index in many loops. Rather, name investment ``invest``. Using ``inv`` is also not recommended, as it already denotes the inverse operator. Commands for declaring variables and parameters are described below.
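For example, the following declarations respect these naming conventions (the names are of course only illustrative):

::

    var c y invest;             // not 'i' or 'inv'
    varexo e_a;
    parameters alph bet delta;  // 'alpha' and 'beta' would clash with MATLAB functions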
On-the-fly Model Variable Declaration
Endogenous variables, exogenous variables, and parameters can also be declared inside the model block. You can do this in two different ways: either via the equation tag or directly in an equation.
To declare a variable on-the-fly in an equation tag, simply state the type of variable to be declared (``endogenous``, ``exogenous``, or ``parameter``), followed by an equal sign and the variable name in single quotes. Hence, to declare a variable ``c`` as endogenous in an equation tag, you can type ``[endogenous='c']``.
To perform on-the-fly variable declaration in an equation, simply follow the symbol name with a vertical line (``|``, pipe character) and either an ``e``, an ``x``, or a ``p``. For example, to declare a parameter named ``alphaa`` in the model block, you could write ``alphaa|p`` directly in an equation where it appears. Similarly, to declare an endogenous variable ``c`` in the model block, you could write ``c|e``. Note that in-equation on-the-fly variable declarations must be made on contemporaneous variables.
On-the-fly variable declarations do not have to appear in the first place where this variable is encountered.
Example
The following two snippets are equivalent:
::

    model;
    [endogenous='k',name='law of motion of capital']
    k(+1) = i|e + (1-delta|p)*k;
    y|e = k^alpha|p;
    ...
    end;
    delta = 0.025;
    alpha = 0.36;

::

    var k, i, y;
    parameters delta, alpha;
    delta = 0.025;
    alpha = 0.36;
    ...
    model;
    [name='law of motion of capital']
    k(+1) = i|e + (1-delta|p)*k;
    y|e = k|e^alpha|p;
    ...
    end;
Expressions
Dynare distinguishes between two types of mathematical expressions: those that are used to describe the model, and those that are used outside the model block (e.g. for initializing parameters or variables, or as command options). In this manual, those two types of expressions are respectively denoted by MODEL_EXPRESSION and EXPRESSION.
Unlike MATLAB or Octave expressions, Dynare expressions are necessarily scalar ones: they cannot contain matrices or evaluate to matrices. [2]
Expressions can be constructed using integers (INTEGER), floating point numbers (DOUBLE), parameter names (PARAMETER_NAME), variable names (VARIABLE_NAME), operators and functions.
The following special constants are also accepted in some contexts:
Parameters and variables
Parameters and variables can be introduced in expressions by simply typing their names. The semantics of parameters and variables differs depending on whether they are used inside or outside the model block.
Inside the model
Parameters used inside the model refer to the value given through parameter initialization (see :ref:`param-init`) or ``homotopy_setup`` when doing a simulation, or are the estimated variables when doing an estimation.
Variables used in a MODEL_EXPRESSION denote current period values when neither a lead nor a lag is given. A lead or a lag can be given by enclosing an integer between parentheses just after the variable name: a positive integer means a lead, a negative one means a lag. Leads or lags of more than one period are allowed. For example, if ``c`` is an endogenous variable, then ``c(+1)`` is the variable one period ahead, and ``c(-2)`` is the variable two periods before.
When specifying the leads and lags of endogenous variables, it is important to respect the following convention: in Dynare, the timing of a variable reflects when that variable is decided. A control variable — which by definition is decided in the current period — must have no lead. A predetermined variable — which by definition has been decided in a previous period — must have a lag. A consequence of this is that all stock variables must use the “stock at the end of the period” convention.
Leads and lags are primarily used for endogenous variables, but can also be used for exogenous variables. They have no effect on parameters and are forbidden for local model variables (see Model declaration).
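For example, under the “stock at the end of the period” convention, a law of motion for capital enters the model with the predetermined stock lagged (the names ``k``, ``invest`` and ``delta`` are illustrative):

::

    k = (1-delta)*k(-1) + invest;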
Outside the model
When used in an expression outside the model block, a parameter or a variable simply refers to the last value given to that variable. More precisely, for a parameter it refers to the value given in the corresponding parameter initialization (see :ref:`param-init`); for an endogenous or exogenous variable, it refers to the value given in the most recent ``initval`` or ``endval`` block.
Operators
The following operators are allowed in both MODEL_EXPRESSION and EXPRESSION:
- Binary arithmetic operators: ``+``, ``-``, ``*``, ``/``, ``^``
- Unary arithmetic operators: ``+``, ``-``
- Binary comparison operators (which evaluate to either 0 or 1): ``<``, ``>``, ``<=``, ``>=``, ``==``, ``!=``
Note that the binary comparison operators are differentiable everywhere except on a line of the 2-dimensional real plane. However, to facilitate the convergence of Newton-type methods, Dynare assumes that, at the points of non-differentiability, the partial derivatives of these operators with respect to both arguments are equal to 0 (since this is the value of the partial derivatives everywhere else).
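For example, a comparison operator can be used inside an equation to encode a kinked relationship (names are illustrative; but see the warning on such operators in a stochastic context below):

::

    y = x*(x>0); // equivalent to y = max(x,0)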
The following special operators are accepted in MODEL_EXPRESSION (but not in EXPRESSION):
Functions
Built-in functions
The following standard functions are supported internally for both MODEL_EXPRESSION and EXPRESSION:
External functions
Any other user-defined (or built-in) MATLAB or Octave function may be used in both a MODEL_EXPRESSION and an EXPRESSION, provided that this function takes scalar arguments and returns a scalar value.
To use an external function in a MODEL_EXPRESSION, one must declare
the function using the external_function
statement. This is not
required for external functions used in an EXPRESSION outside of a
model
block or steady_state_model
block.
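For example, assuming ``funcname`` is a user-written MATLAB function taking two scalar arguments and returning a scalar, the declaration and its use in the model could look as follows:

::

    external_function(name = funcname, nargs = 2);

    model;
    ...
    y = funcname(k, z);
    ...
    end;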
A few words of warning in stochastic context
The use of the following functions and operators is strongly discouraged in a stochastic context: ``max``, ``min``, ``abs``, ``sign``, ``<``, ``>``, ``<=``, ``>=``, ``==``, ``!=``.
The reason is that the local approximation used by ``stoch_simul`` or ``estimation`` will by nature ignore the non-linearities introduced by these functions if the steady state is away from the kink. And, if the steady state is exactly at the kink, then the approximation will be bogus because the derivative of these functions at the kink is bogus (as explained in the respective documentations of these functions and operators).

Note that ``extended_path`` is not affected by this problem, because it does not rely on a local approximation of the model.
Parameter initialization
When using Dynare for computing simulations, it is necessary to calibrate the parameters of the model. This is done through parameter initialization.
The syntax is the following:
PARAMETER_NAME = EXPRESSION;
Here is an example of calibration:

::

    parameters alpha, beta;

    beta = 0.99;
    alpha = 0.36;
    A = 1-alpha*beta;

Internally, the parameter values are stored in ``M_.params``. The parameter names are stored in ``M_.param_names``.
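For example, after Dynare has processed the ``.mod`` file, these structures can be inspected at the MATLAB/Octave prompt:

::

    disp(M_.param_names); % parameter names, in declaration order
    disp(M_.params);      % parameter values, in the same order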
Model declaration
The model is declared inside a ``model`` block.

Dynare has the ability to output the original list of model equations to a LaTeX file, using the ``write_latex_original_model`` command, the list of transformed model equations using the ``write_latex_dynamic_model`` command, and the list of static model equations using the ``write_latex_static_model`` command.
Auxiliary variables
The model which is solved internally by Dynare is not exactly the model declared by the user. In some cases, Dynare will introduce auxiliary endogenous variables—along with corresponding auxiliary equations—which will appear in the final output.
The main transformation concerns leads and lags. Dynare will perform a transformation of the model so that there is only one lead and one lag on endogenous variables and no leads/lags on exogenous variables.
This transformation is achieved by the creation of auxiliary variables and corresponding equations. For example, if ``x(+2)`` exists in the model, Dynare will create one auxiliary variable ``AUX_ENDO_LEAD = x(+1)``, and replace ``x(+2)`` by ``AUX_ENDO_LEAD(+1)``.
A similar transformation is done for lags of two or more periods on endogenous variables (auxiliary variables will have a name beginning with ``AUX_ENDO_LAG``), and for exogenous variables with leads and lags (auxiliary variables will have a name beginning with ``AUX_EXO_LEAD`` or ``AUX_EXO_LAG`` respectively).
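Schematically, the substitution for a two-period lead works as follows (a sketch; the exact auxiliary variable names generated by Dynare may carry numeric suffixes):

::

    // original equation
    y = x(+2);

    // internal representation after the transformation
    AUX_ENDO_LEAD = x(+1);
    y = AUX_ENDO_LEAD(+1);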
Another transformation is done for the ``EXPECTATION`` operator. For each occurrence of this operator, Dynare creates an auxiliary variable defined by a new equation, and replaces the expectation operator by a reference to the new auxiliary variable. For example, the expression ``EXPECTATION(-1)(x(+1))`` is replaced by ``AUX_EXPECT_LAG_1(-1)``, and the new auxiliary variable is declared as ``AUX_EXPECT_LAG_1 = x(+2)``.
Auxiliary variables are also introduced by the preprocessor for the ``ramsey_model`` and ``ramsey_policy`` commands. In this case, they are used to represent the Lagrange multipliers when first order conditions of the Ramsey problem are computed. The new variables take the form ``MULT_i``, where *i* represents the constraint with which the multiplier is associated (counted from the order of declaration in the model block).
Auxiliary variables are also introduced by the ``differentiate_forward_vars`` option of the model block. The new variables take the form ``AUX_DIFF_FWRD_i``, and are equal to ``x-x(-1)`` for some endogenous variable ``x``.

Finally, auxiliary variables will arise in the context of employing the ``diff`` operator.
Once created, all auxiliary variables are included in the set of endogenous variables. The output of decision rules (see below) is such that auxiliary variable names are replaced by the original variables they refer to.
The number of endogenous variables before the creation of auxiliary variables is stored in ``M_.orig_endo_nbr``, and the number of endogenous variables after the creation of auxiliary variables is stored in ``M_.endo_nbr``.
See https://git.dynare.org/Dynare/dynare/-/wikis/Auxiliary-variables for more technical details on auxiliary variables.
Initial and terminal conditions
For most simulation exercises, it is necessary to provide initial (and possibly terminal) conditions. It is also necessary to provide initial guess values for non-linear solvers. This section describes the statements used for those purposes.
In many contexts (deterministic or stochastic), it is necessary to compute the steady state of a non-linear model: ``initval`` then specifies numerical initial values for the non-linear solver. The command ``resid`` can be used to compute the equation residuals for the given initial values.
In perfect foresight mode, the types of forward-looking models for which Dynare was designed require both initial and terminal conditions. Most often these initial and terminal conditions are static equilibria, but not necessarily.
One typical application is to consider an economy at the equilibrium at time 0, trigger a shock in the first period, and study the trajectory of return to the initial equilibrium. To do that, one needs ``initval`` and ``shocks`` (see :ref:`shocks-exo`).
Another one is to study how an economy, starting from arbitrary initial conditions at time 0, converges towards equilibrium. In this case, the command ``histval`` allows specifying different historical initial values for variables with lags for the periods before the beginning of the simulation. Due to the design of Dynare, in this case ``initval`` is used to specify the terminal conditions.
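For example, a minimal ``initval`` block providing guess values for the non-linear solver could look as follows (names and values are illustrative):

::

    initval;
    c = 1.2;
    k = 12;
    x = 1;
    end;

    steady;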
Shocks on exogenous variables
In a deterministic context, when one wants to study the transition from one equilibrium position to another, it is equivalent to analyzing the consequences of a permanent shock; this is done in Dynare through the proper use of ``initval`` and ``endval``.
Another typical experiment is to study the effects of a temporary shock after which the system goes back to the original equilibrium (if the model is stable...). A temporary shock is a temporary change of value of one or several exogenous variables in the model. Temporary shocks are specified with the command ``shocks``.
In a stochastic framework, the exogenous variables take random values in each period. In Dynare, these random values follow a normal distribution with zero mean, but it is up to the user to specify the variability of these shocks. The non-zero elements of the variance-covariance matrix of the shocks can be entered with the ``shocks`` command. Alternatively, the entire matrix can be directly entered with ``Sigma_e`` (this use is however deprecated).
If the variance of an exogenous variable is set to zero, this variable will appear in the report on policy and transition functions, but isn’t used in the computation of moments and of Impulse Response Functions. Setting a variance to zero is an easy way of removing an exogenous shock.
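For example, a temporary shock in a deterministic setup and a standard-error specification in a stochastic setup could look as follows (the exogenous variable ``e`` is illustrative):

::

    // deterministic context: temporary shock in periods 1 and 2
    shocks;
    var e; periods 1:2; values 0.5;
    end;

::

    // stochastic context: specify the standard deviation of e
    shocks;
    var e; stderr 0.01;
    end;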
Note that, by default, if there are several ``shocks`` or ``mshocks`` blocks in the same ``.mod`` file, then they are cumulative: all the shocks declared in all the blocks are considered; however, if a ``shocks`` or ``mshocks`` block is declared with the ``overwrite`` option, then it replaces all the previous ``shocks`` and ``mshocks`` blocks.
Other general declarations
Steady state
There are two ways of computing the steady state (i.e. the static equilibrium) of a model. The first way is to let Dynare compute the steady state using a nonlinear Newton-type solver; this should work for most models, and is relatively simple to use. The second way is to give more guidance to Dynare, using your knowledge of the model, by providing it with a method to compute the steady state, either using a ``steady_state_model`` block or by writing a MATLAB routine.
Finding the steady state with Dynare nonlinear solver
After computation, the steady state is available in the following variable:
Providing the steady state to Dynare
If you know how to compute the steady state for your model, you can provide a MATLAB/Octave function doing the computation instead of using ``steady``. Again, there are two options for doing that:

- The easiest way is to write a ``steady_state_model`` block, which is described below in more detail. See also ``fs2000.mod`` in the ``examples`` directory for an example. The steady state file generated by Dynare will be called ``+FILENAME/steadystate.m``.
- You can write the corresponding MATLAB function by hand. If your MOD-file is called ``FILENAME.mod``, the steady state file must be called ``FILENAME_steadystate.m``. See ``NK_baseline_steadystate.m`` in the examples directory for an example. This option gives a bit more flexibility (loops and conditional structures can be used), at the expense of a heavier programming burden and lesser efficiency.
Note that both files allow updating parameters in each call of the function. This allows, for example, calibrating a model to a labor supply of 0.2 in steady state by setting the labor disutility parameter to a corresponding value (see ``NK_baseline_steadystate.m`` in the ``examples`` directory). They can also be used in estimation, where some parameter may be a function of an estimated parameter and needs to be updated for every parameter draw. For example, one might want to set the capital utilization cost parameter as a function of the discount rate to ensure that capacity utilization is 1 in steady state. Treating both parameters as independent or not updating one as a function of the other would lead to wrong results. But this also means that care is required. Do not accidentally overwrite your parameters with new values, as it will lead to wrong results.
Replace some equations during steady state computations
When there is no steady state file, Dynare computes the steady state by solving the static model, i.e. the model from the ``.mod`` file from which leads and lags have been removed.
In some specific cases, one may want to have more control over the way this static model is created. Dynare therefore offers the possibility to explicitly give the form of equations that should be in the static model.
More precisely, if an equation is prepended by a ``[static]`` tag, then it will appear in the static model used for steady state computation, but that equation will not be used for other computations. For every equation tagged in this way, you must tag another equation with ``[dynamic]``: that equation will not be used for steady state computation, but will be used for other computations.
This functionality can be useful for models with a unit root, where there is an infinity of steady states. An equation (tagged ``[dynamic]``) would give the law of motion of the nonstationary variable (like a random walk). To pin down one specific steady state, an equation tagged ``[static]`` would assign a constant value to the nonstationary variable. Another situation where the ``[static]`` tag can be useful is when one has only a partial closed form solution for the steady state.
Example
This is a trivial example with two endogenous variables. The second equation takes a different form in the static model:

::

    var c k;
    varexo x;
    ...
    model;
    c + k - aa*x*k(-1)^alph - (1-delt)*k(-1);
    [dynamic] c^(-gam) - (1+bet)^(-1)*(aa*alph*x(+1)*k^(alph-1) + 1 - delt)*c(+1)^(-gam);
    [static] k = ((delt+bet)/(x*aa*alph))^(1/(alph-1));
    end;
Getting information about the model
Deterministic simulation
When the framework is deterministic, Dynare can be used for models with the assumption of perfect foresight. Typically, the system is supposed to be in a state of equilibrium before a period ‘1’ when the news of a contemporaneous or of a future shock is learned by the agents in the model. The purpose of the simulation is to describe the reaction in anticipation of, then in reaction to the shock, until the system returns to the old or to a new state of equilibrium. In most models, this return to equilibrium is only an asymptotic phenomenon, which one must approximate by a simulation horizon far enough in the future. Another exercise for which Dynare is well suited is to study the transition path to a new equilibrium following a permanent shock.

For deterministic simulations, the numerical problem consists of solving a nonlinear system of simultaneous equations in n endogenous variables in T periods. Dynare offers several algorithms for solving this problem, which can be chosen via the ``stack_solve_algo`` option. By default (``stack_solve_algo=0``), Dynare uses a Newton-type method to solve the simultaneous equation system. Because the resulting Jacobian is of order n by T and hence will be very large for long simulations with many variables, Dynare makes use of the sparse matrix capabilities of MATLAB/Octave. A slower but potentially less memory consuming alternative (``stack_solve_algo=6``) is based on a Newton-type algorithm first proposed by Laffargue (1990) and Boucekkine (1995), which uses relaxation techniques. Thereby, the algorithm avoids ever storing the full Jacobian. The details of the algorithm can be found in Juillard (1996). The third type of algorithms makes use of block decomposition techniques (divide-and-conquer methods) that exploit the structure of the model. The principle is to identify recursive and simultaneous blocks in the model structure and use this information to aid the solution process. These solution algorithms can provide a significant speed-up on large models.
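For example, a perfect foresight simulation over 200 periods using the memory-saving algorithm could be requested as follows (the horizon is illustrative):

::

    perfect_foresight_setup(periods=200);
    perfect_foresight_solver(stack_solve_algo=6);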
Warning
Be careful when employing auxiliary variables in the context of perfect foresight computations. The same model may work for stochastic simulations, but fail for perfect foresight simulations. The issue arises when an equation suddenly only contains variables dated ``t+1`` (or ``t-1`` for that matter). In this case, the derivative in the last (first) period with respect to all variables will be 0, rendering the stacked Jacobian singular.
Example
Consider the following specification of an Euler equation with log utility:

::

    Lambda = beta*C(-1)/C;
    Lambda(+1)*R(+1) = 1;

Clearly, the derivative of the second equation with respect to all endogenous variables at time *t* is zero, causing ``perfect_foresight_solver`` to generally fail. This is due to the use of the Lagrange multiplier ``Lambda`` as an auxiliary variable. Instead, employing the identical ``beta*C/C(+1)*R(+1)=1;`` will work.
Stochastic solution and simulation
In a stochastic context, Dynare computes one or several simulations corresponding to a random draw of the shocks.
The main algorithm for solving stochastic models relies on a Taylor approximation, up to third order, of the expectation functions (see Judd (1996), Collard and Juillard (2001a, 2001b), and Schmitt-Grohé and Uribe (2004)). The details of the Dynare implementation of the first order solution are given in Villemot (2011). Such a solution is computed using the ``stoch_simul`` command.
As an alternative, it is possible to compute a simulation of a stochastic model using the extended path method presented by Fair and Taylor (1983). This method is especially useful when there are strong nonlinearities or binding constraints. Such a solution is computed using the ``extended_path`` command.
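For example, a second-order approximation with impulse responses over 40 periods can be requested with (option values are illustrative):

::

    stoch_simul(order=2, irf=40);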
Computing the stochastic solution
The approximated solution of a model takes the form of a set of decision rules or transition equations expressing the current value of the endogenous variables of the model as a function of the previous state of the model and shocks observed at the beginning of the period. The decision rules are stored in the structure ``oo_.dr``, which is described below.
Typology and ordering of variables
Dynare distinguishes four types of endogenous variables:
Purely backward (or purely predetermined) variables
    Those that appear only at current and past periods in the model, but not at future periods (i.e. at t and t-1 but not t+1). The number of such variables is equal to ``M_.npred``.

Purely forward variables
    Those that appear only at current and future periods in the model, but not at past periods (i.e. at t and t+1 but not t-1). The number of such variables is stored in ``M_.nfwrd``.

Mixed variables
    Those that appear at current, past and future periods in the model (i.e. at t, t+1 and t-1). The number of such variables is stored in ``M_.nboth``.

Static variables
    Those that appear only at current period, not at past or future periods in the model (i.e. only at t, not at t+1 or t-1). The number of such variables is stored in ``M_.nstatic``.
Note that all endogenous variables fall into one of these four categories, since after the creation of auxiliary variables (see :ref:`aux-variables`), all endogenous variables have at most one lead and one lag. We therefore have the following identity:

::

    M_.npred + M_.nboth + M_.nfwrd + M_.nstatic = M_.endo_nbr
Internally, Dynare uses two orderings of the endogenous variables: the order of declaration (which is reflected in ``M_.endo_names``), and an order based on the four types described above, which we will call the DR-order (“DR” stands for decision rules). Most of the time, the declaration order is used, but for elements of the decision rules, the DR-order is used.
The DR-order is the following: static variables appear first, then purely backward variables, then mixed variables, and finally purely forward variables. Inside each category, variables are arranged according to the declaration order.
In other words, the k-th variable in the DR-order corresponds to the endogenous variable numbered ``oo_.dr.order_var(k)`` in declaration order. Conversely, the k-th declared variable is numbered ``oo_.dr.inv_order_var(k)`` in DR-order.
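For example, the two mappings can be checked at the MATLAB/Octave prompt after solving a model (a fragment; it assumes the usual ``M_`` and ``oo_`` structures are in the workspace):

::

    k = 3;                                      % an arbitrary DR-order position
    M_.endo_names{oo_.dr.order_var(k)}          % name of the k-th variable in DR-order
    oo_.dr.inv_order_var(oo_.dr.order_var(k))   % returns k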
Finally, the state variables of the model are the purely backward variables and the mixed variables. They are ordered in DR-order when they appear in decision rules elements. There are ``M_.nspred = M_.npred + M_.nboth`` such variables. Similarly, one has ``M_.nsfwrd = M_.nfwrd + M_.nboth``, and ``M_.ndynamic = M_.nfwrd + M_.nboth + M_.npred``.
First-order approximation
The approximation has the stylized form:

.. math::

    y_t = y^s + A y^h_{t-1} + B u_t

where :math:`y^s` is the steady state value of :math:`y` and :math:`y^h_t = y_t - y^s`.
The coefficients of the decision rules are stored as follows:

- :math:`y^s` is stored in ``oo_.dr.ys``. The vector rows correspond to all endogenous in the declaration order.
- :math:`A` is stored in ``oo_.dr.ghx``. The matrix rows correspond to all endogenous in DR-order. The matrix columns correspond to state variables in DR-order, as given by ``oo_.dr.state_var``. (N.B.: if the ``block`` option to the ``model`` block has been specified, then rows are in declaration order, and columns are ordered according to ``oo_.dr.state_var``, which may differ from DR-order.)
- :math:`B` is stored in ``oo_.dr.ghu``. The matrix rows correspond to all endogenous in DR-order. The matrix columns correspond to exogenous variables in declaration order. (N.B.: if the ``block`` option to the ``model`` block has been specified, then rows are in declaration order.)
Of course, the shown form of the approximation is only stylized, because it neglects the required different ordering in :math:`y^s` and :math:`y^h_t`. The precise form of the approximation that shows the way Dynare deals with differences between declaration and DR-order is:

.. math::

    y_t(\mathrm{oo\_.dr.order\_var}) = y^s(\mathrm{oo\_.dr.order\_var}) + A \cdot \left( y_{t-1}(\mathrm{oo\_.dr.order\_var(k2)}) - y^s(\mathrm{oo\_.dr.order\_var(k2)}) \right) + B \cdot u_t

where :math:`\mathrm{k2}` selects the state variables, :math:`y_t` and :math:`y^s` are in declaration order and the coefficient matrices are in DR-order. Effectively, all variables on the right hand side are brought into DR order for computations and then assigned to :math:`y_t` in declaration order.
Second-order approximation
The approximation has the form:

.. math::

    y_t = y^s + 0.5 \Delta^2 + A y^h_{t-1} + B u_t + 0.5 C (y^h_{t-1}\otimes y^h_{t-1}) + 0.5 D (u_t \otimes u_t) + E (y^h_{t-1} \otimes u_t)

where :math:`y^s` is the steady state value of :math:`y`, :math:`y^h_t = y_t - y^s`, and :math:`\Delta^2` is the shift effect of the variance of future shocks. For the reordering required due to differences in declaration and DR order, see the first order approximation.
The coefficients of the decision rules are stored in the variables described for first order approximation, plus the following variables:
- :math:`\Delta^2` is stored in ``oo_.dr.ghs2``. The vector rows correspond to all endogenous in DR-order.
- :math:`C` is stored in ``oo_.dr.ghxx``. The matrix rows correspond to all endogenous in DR-order. The matrix columns correspond to the Kronecker product of the vector of state variables in DR-order.
- :math:`D` is stored in ``oo_.dr.ghuu``. The matrix rows correspond to all endogenous in DR-order. The matrix columns correspond to the Kronecker product of exogenous variables in declaration order.
- :math:`E` is stored in ``oo_.dr.ghxu``. The matrix rows correspond to all endogenous in DR-order. The matrix columns correspond to the Kronecker product of the vector of state variables (in DR-order) by the vector of exogenous variables (in declaration order).
Third-order approximation
The approximation has the form:

.. math::

    y_t = y^s + G_0 + G_1 z_t + G_2 (z_t \otimes z_t) + G_3 (z_t \otimes z_t \otimes z_t)

where :math:`y^s` is the steady state value of :math:`y`, and :math:`z_t` is a vector consisting of the deviation from the steady state of the state variables (in DR-order) at date :math:`t-1` followed by the exogenous variables at date :math:`t` (in declaration order). The vector :math:`z_t` is therefore of size :math:`n_z` = ``M_.nspred`` + ``M_.exo_nbr``.
The coefficients of the decision rules are stored as follows:

- y^s is stored in oo_.dr.ys. The vector rows correspond to all endogenous in the declaration order.
- G_0 is stored in oo_.dr.g_0. The vector rows correspond to all endogenous in DR-order.
- G_1 is stored in oo_.dr.g_1. The matrix rows correspond to all endogenous in DR-order. The matrix columns correspond to state variables in DR-order, followed by exogenous in declaration order.
- G_2 is stored in oo_.dr.g_2. The matrix rows correspond to all endogenous in DR-order. The matrix columns correspond to the Kronecker product of state variables (in DR-order), followed by exogenous (in declaration order). Note that the Kronecker product is stored in a folded way, i.e. symmetric elements are stored only once, which implies that the matrix has n_z(n_z+1)/2 columns. More precisely, each column of this matrix corresponds to a pair (i_1, i_2) where each index represents an element of z_t and is therefore between 1 and n_z. Only non-decreasing pairs are stored, i.e. those for which i_1 \leq i_2. The columns are arranged in the lexicographical order of non-decreasing pairs. Also note that for those pairs where i_1 \neq i_2, since the element is stored only once but appears twice in the unfolded G_2 matrix, it must be multiplied by 2 when computing the decision rules.
- G_3 is stored in oo_.dr.g_3. The matrix rows correspond to all endogenous in DR-order. The matrix columns correspond to the third Kronecker power of state variables (in DR-order), followed by exogenous (in declaration order). Note that the third Kronecker power is stored in a folded way, i.e. symmetric elements are stored only once, which implies that the matrix has n_z(n_z+1)(n_z+2)/6 columns. More precisely, each column of this matrix corresponds to a tuple (i_1, i_2, i_3) where each index represents an element of z_t and is therefore between 1 and n_z. Only non-decreasing tuples are stored, i.e. those for which i_1 \leq i_2 \leq i_3. The columns are arranged in the lexicographical order of non-decreasing tuples. Also note that for tuples with three distinct indices (i.e. i_1 \neq i_2, i_1 \neq i_3 and i_2 \neq i_3), since these elements are stored only once but appear six times in the unfolded G_3 matrix, they must be multiplied by 6 when computing the decision rules. Similarly, for tuples with exactly two equal indices (i.e. of the form (a,a,b), (a,b,a) or (b,a,a)), since these elements are stored only once but appear three times in the unfolded G_3 matrix, they must be multiplied by 3 when computing the decision rules.
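The folded storage scheme can be illustrated with a short Python sketch (purely illustrative; the helper names are not part of Dynare). It enumerates the non-decreasing index tuples in the lexicographic column order described above, and computes the multiplicity with which each folded element appears in the unfolded Kronecker product:

```python
from itertools import combinations_with_replacement
from math import factorial

def folded_columns(n_z, order):
    """Non-decreasing index tuples (1-based), in lexicographic order,
    matching the column layout of oo_.dr.g_2 / oo_.dr.g_3."""
    return list(combinations_with_replacement(range(1, n_z + 1), order))

def multiplicity(tup):
    """Number of times a folded element appears in the unfolded
    Kronecker product: order! / (product of repetition counts!)."""
    m = factorial(len(tup))
    for i in set(tup):
        m //= factorial(tup.count(i))
    return m

cols2 = folded_columns(3, 2)   # n_z = 3, second order
# 3*(3+1)/2 = 6 columns: (1,1) (1,2) (1,3) (2,2) (2,3) (3,3)
cols3 = folded_columns(3, 3)   # 3*4*5/6 = 10 columns
# e.g. (1,2,3) appears 6 times unfolded, (1,1,2) appears 3 times
```

The multiplicities are exactly the factors 2, 3 and 6 mentioned above that the folded coefficients must be multiplied by when evaluating the decision rules.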
Higher-order approximation
Higher-order approximations are simply a generalization of what is done at order 3.
The steady state is stored in oo_.dr.ys
and the constant correction is stored in oo_.dr.g_0
. The coefficients for orders 1, 2, 3, 4… are
respectively stored in oo_.dr.g_1
, oo_.dr.g_2
, oo_.dr.g_3
, oo_.dr.g_4
… The columns of those matrices correspond to
multidimensional indices of state variables, in such a way that symmetric
elements are never repeated (for more details, see the description of
oo_.dr.g_3
in the third-order case).
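At any order k, the number of folded columns is the number of multisets of size k drawn from the n_z indices. A quick sketch (with a hypothetical helper name) checks this against the order-2 and order-3 formulas quoted above:

```python
from math import comb

def n_folded_columns(n_z, k):
    """Columns of the order-k folded matrix: binomial(n_z + k - 1, k),
    i.e. the number of multisets of size k from n_z indices."""
    return comb(n_z + k - 1, k)

# Consistent with the special cases given earlier:
# order 2: n_z*(n_z+1)/2, order 3: n_z*(n_z+1)*(n_z+2)/6
```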
Occasionally binding constraints (OCCBIN)
Dynare allows simulating models with up to two occasionally-binding constraints by relying on a piecewise linear solution as in Guerrieri and Iacoviello (2015). It also allows estimating such models employing either the inversion filter of Cuba-Borda, Guerrieri, and Iacoviello (2019) or the piecewise Kalman filter of Giovannini, Pfeiffer, and Ratto (2021). Triggering computations involving occasionally-binding constraints requires:

- defining and naming the occasionally-binding constraints using an occbin_constraints block;
- specifying the model equations for the respective regimes in the model block using appropriate equation tags;
- potentially specifying a sequence of surprise shocks using a shocks(surprise) block;
- setting up Occbin simulations or estimation with occbin_setup;
- triggering a simulation with occbin_solver, or running estimation or calib_smoother.
All of these elements are discussed in the following.
Estimation based on likelihood
Provided that you have observations on some endogenous variables, it is possible to use Dynare to estimate some or all parameters. Both maximum likelihood (as in Ireland (2004)) and Bayesian techniques (as in Fernández-Villaverde and Rubio-Ramírez (2004), Rabanal and Rubio-Ramirez (2003), Schorfheide (2000) or Smets and Wouters (2003)) are available. Using Bayesian methods, it is possible to estimate DSGE models, VAR models, or a combination of the two techniques called DSGE-VAR.
Note that in order to avoid stochastic singularity, you must have at least as many shocks or measurement errors in your model as you have observed variables.
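A toy numerical illustration (plain Python, not Dynare syntax) of why this matters: with a single shock driving two observables, their theoretical covariance matrix is singular, so the Gaussian likelihood of the data is degenerate.

```python
# Suppose one shock e_t drives both observables: y1_t = e_t, y2_t = 2*e_t.
# Their covariance matrix is sigma^2 * [[1, 2], [2, 4]], which is singular.
sigma2 = 1.0  # hypothetical shock variance
cov = [[1.0 * sigma2, 2.0 * sigma2],
       [2.0 * sigma2, 4.0 * sigma2]]
det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
# det == 0, so the covariance matrix cannot be inverted in the likelihood;
# adding a second shock or a measurement error restores a non-singular matrix.
```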
The estimation using a first order approximation can benefit from the block decomposition of the model (see :opt:`block`).
Dynare also has the ability to estimate Bayesian VARs:
Estimation based on moments
Provided that you have observations on some endogenous variables, it is possible to use Dynare to estimate some or all parameters using a method of moments approach. Both the Simulated Method of Moments (SMM) and the Generalized Method of Moments (GMM) are available. The general idea is to minimize the distance between unconditional model moments and corresponding data moments (so-called orthogonality or moment conditions). For SMM, Dynare computes model moments via stochastic simulations based on the perturbation approximation up to any order, whereas for GMM model moments are computed in closed-form based on the pruned state-space representation of the perturbation solution up to third order. The implementation of SMM is inspired by Born and Pfeifer (2014) and Ruge-Murcia (2012), whereas the one for GMM is adapted from Andreasen, Fernández-Villaverde and Rubio-Ramírez (2018) and Mutschler (2018). Successful estimation heavily relies on the accuracy and efficiency of the perturbation approximation, so it is advised to tune this as much as possible (see :ref:`stoch-sol-simul`). The method of moments estimator is consistent and asymptotically normally distributed given certain regularity conditions (see Duffie and Singleton (1993) for SMM and Hansen (1982) for GMM). For instance, it is required to have at least as many moment conditions as estimated parameters (over-identified or just identified). Moreover, the Jacobian of the moments with respect to the estimated parameters needs to have full rank. :ref:`identification-analysis` helps to check this regularity condition.
In the over-identified case of declaring more moment conditions than estimated parameters, the choice of :opt:`weighting_matrix <weighting_matrix = ['WM1','WM2',...,'WMn']>` matters for the efficiency of the estimation, because the estimated orthogonality conditions are random variables with unequal variances and usually non-zero cross-moment covariances. A weighting matrix allows re-weighting moments to put more emphasis on moment conditions that are more informative or better measured (in the sense of having a smaller variance). To achieve asymptotic efficiency, the weighting matrix needs to be chosen such that, after appropriate scaling, it has a probability limit proportional to the inverse of the covariance matrix of the limiting distribution of the vector of orthogonality conditions. Dynare uses a Newey-West-type estimator with a Bartlett kernel to compute an estimate of this so-called optimal weighting matrix. Note that in this over-identified case, it is advised to perform the estimation in at least two stages by setting e.g. :opt:`weighting_matrix=['DIAGONAL','DIAGONAL'] <weighting_matrix = ['WM1','WM2',...,'WMn']>` so that the computation of the optimal weighting matrix benefits from the consistent estimation of the previous stages. The optimal weighting matrix is used to compute standard errors and the J-test of overidentifying restrictions, which tests whether the model and selection of moment conditions fit the data sufficiently well. If the null hypothesis of a "valid" model is rejected, then something is (most likely) wrong with either your model or selection of orthogonality conditions.
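To see why the weighting matrix matters, consider a deliberately simple two-moment sketch (all numbers hypothetical, plain Python rather than Dynare's implementation): with an identity weighting matrix the noisier moment dominates the objective, while inverse-variance diagonal weights rebalance the contributions.

```python
# Two orthogonality conditions with very different variances.
data_moments = [1.00, 5.00]
model_moments = [1.10, 4.00]      # hypothetical model moments at some draw
moment_variances = [0.01, 4.00]   # the second moment is far noisier

def distance(weights):
    """Quadratic moment distance q = g' W g with a diagonal W."""
    return sum(w * (m - d) ** 2
               for w, m, d in zip(weights, model_moments, data_moments))

q_identity = distance([1.0, 1.0])
q_diagonal = distance([1.0 / v for v in moment_variances])
# With identity weights the noisy second moment contributes ~99% of q;
# with inverse-variance weights the precisely measured first moment
# carries the larger share, which is the point of the optimal weighting.
```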
In case the (presumed) global minimum of the moment distance function is located in a region of the parameter space that is typically considered unlikely (dilemma of absurd parameters), you may opt to choose the :opt:`penalized_estimator <penalized_estimator>` option. Similar to adding priors to the likelihood, this option incorporates prior knowledge (i.e. the prior mean) as additional moment restrictions and weights them by their prior precision to guide the minimization algorithm to more plausible regions of the parameter space. Ideally, these regions are characterized by only slightly worse values of the objective function. Note that adding prior information comes at the cost of a loss in efficiency of the estimator.
Model Comparison
Shock Decomposition
Calibrated Smoother
Dynare can also run the smoother on a calibrated model:
Forecasting
On a calibrated model, forecasting is done using the forecast
command. On an estimated model, use the forecast
option of the estimation
command.
It is also possible to compute forecasts on a calibrated or estimated model for a given constrained path of the future endogenous variables. This is done, from the reduced form representation of the DSGE model, by finding the structural shocks that are needed to match the restricted paths. Use :comm:`conditional_forecast`, :bck:`conditional_forecast_paths` and :comm:`plot_conditional_forecast` for that purpose.
Finally, it is possible to do forecasting with a Bayesian VAR using the :comm:`bvar_forecast` command.
If the model contains strong non-linearities or if some perfectly
expected shocks are considered, the forecasts and the conditional
forecasts can be computed using an extended path method. A forecast
scenario describing the shocks and/or the constrained paths of some
endogenous variables must first be built. The first step is to
initialize the forecast scenario using the function init_plan
:
The forecast scenario can contain some simple shocks on the exogenous
variables. These shocks are described using the function
basic_plan
:
The forecast scenario can also contain a constrained path on an
endogenous variable. In this case, the values of the related exogenous
variable compatible with the constrained path are computed. In
other words, a conditional forecast is performed. This kind of shock
is described with the function flip_plan
:
Once the forecast scenario is fully described, the forecast is
computed with the command det_cond_forecast
:
Example
% conditional forecast using extended path method
% with perfect foresight on r path
var y r;
varexo e u;
...
smoothed = dseries('smoothed_variables.csv');
fplan = init_plan(2013Q4:2029Q4);
fplan = flip_plan(fplan, 'y', 'u', 'surprise', 2013Q4:2014Q4, [1 1.1 1.2 1.1]);
fplan = flip_plan(fplan, 'r', 'e', 'perfect_foresight', 2013Q4:2014Q4, [2 1.9 1.9 1.9]);
dset_forecast = det_cond_forecast(fplan, smoothed);
plot(dset_forecast.{'y','u'});
plot(dset_forecast.{'r','e'});
Optimal policy
Dynare has tools to compute optimal policies for various types of
objectives. You can either solve for optimal policy under
commitment with ramsey_model
, for optimal policy under discretion
with discretionary_policy
or for optimal simple rules with osr
(also implying commitment).
Optimal policy under commitment (Ramsey)
Dynare can automatically compute optimal policy choices of a Ramsey planner
who takes the specified private sector equilibrium conditions into account and commits
to future policy choices. Doing so requires specifying the private sector equilibrium
conditions in the model
-block and a planner_objective
, as well as potentially some
instruments
to facilitate computations.
Warning
Be careful when employing forward-looking auxiliary variables in the context
of timeless-perspective Ramsey computations. They may alter the problem the Ramsey
planner will solve for the first period, although they seemingly leave the private
sector equilibrium unaffected. The reason is that the planner optimizes with respect to variables
dated t
and takes the value of time 0 variables as given, because they are predetermined.
This set of initially predetermined variables will change with forward-looking definitions.
Thus, users are strongly advised to use model-local variables instead.
Example
Consider a perfect foresight example where the Euler equation for the return to capital is given by
1/C=beta*1/C(+1)*(R(+1)+(1-delta))
The job of the Ramsey planner in period 1 is to choose C_1 and R_1, taking as given C_0. Due to perfect foresight, the above equation may seemingly equivalently be written as:

1/C=beta*1/C(+1)*(R_cap);
R_cap=R(+1)+(1-delta);

However, this changes the problem of the Ramsey planner in the first period to choosing C_1 and R_1, taking as given both C_0 and R^{cap}_0. Thus, the relevant return to capital in the Euler equation of the first period is no longer a choice of the planner, due to the forward-looking nature of the definition in the second line!
A correct specification would be to instead define R_cap as a model-local variable:

1/C=beta*1/C(+1)*(R_cap);
#R_cap=R(+1)+(1-delta);
Optimal policy under discretion
Optimal Simple Rules (OSR)
Example
var y inflation r;
varexo y_ inf_;

parameters delta sigma alpha kappa gammarr gammax0 gammac0 gamma_y_ gamma_inf_;

delta = 0.44;
kappa = 0.18;
alpha = 0.48;
sigma = -0.06;
gammarr = 0;
gammax0 = 0.2;
gammac0 = 1.5;
gamma_y_ = 8;
gamma_inf_ = 3;

model(linear);
y = delta * y(-1) + (1-delta)*y(+1)+sigma *(r - inflation(+1)) + y_;
inflation = alpha * inflation(-1) + (1-alpha) * inflation(+1) + kappa*y + inf_;
r = gammax0*y(-1)+gammac0*inflation(-1)+gamma_y_*y_+gamma_inf_*inf_;
end;

shocks;
var y_; stderr 0.63;
var inf_; stderr 0.4;
end;

optim_weights;
inflation 1;
y 1;
y, inflation 0.5;
end;

osr_params gammax0 gammac0 gamma_y_ gamma_inf_;
osr y;
Sensitivity and identification analysis
Dynare provides an interface to the global sensitivity analysis (GSA) toolbox (developed by the Joint Research Center (JRC) of the European Commission), which is now part of the official Dynare distribution. The GSA toolbox can be used to answer the following questions:
- What is the domain of structural coefficients assuring the stability and determinacy of a DSGE model?
- Which parameters mostly drive the fit of, e.g., GDP and which the fit of inflation? Is there any conflict between the optimal fit of one observed series versus another?
- How to represent in a direct, albeit approximated, form the relationship between structural parameters and the reduced form of a rational expectations model?
The discussion of the methodologies and their application is described in Ratto (2008).
Unlike previous versions of the toolbox, the current GSA toolbox no longer requires the Dynare estimation environment to be set up in order to work properly.
Performing sensitivity analysis
IRF/Moment calibration
The irf_calibration
and moment_calibration
blocks allow
imposing implicit “endogenous” priors about IRFs and moments on the
model. The way it works internally is that any parameter draw that is
inconsistent with the “calibration” provided in these blocks is
discarded, i.e. assigned a prior density of 0
. In the context of
dynare_sensitivity
, these restrictions allow tracing out which
parameters are driving the model to satisfy or violate the given
restrictions.
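The discard-if-violated logic can be sketched as simple rejection sampling (a toy AR(1) example in Python; the model, prior and restriction are all made up for illustration):

```python
import random

random.seed(0)

# Toy model: the IRF of y to a unit shock in an AR(1) is irf(h) = rho**h.
# Suppose an irf_calibration-style restriction: the horizon-4 response
# must lie in [0.2, 0.6].
def irf(rho, horizon):
    return rho ** horizon

kept = []
for _ in range(10_000):
    rho = random.uniform(0.0, 1.0)      # draw from the (toy) prior
    if 0.2 <= irf(rho, 4) <= 0.6:       # restriction satisfied?
        kept.append(rho)                # otherwise the draw gets prior density 0
# 'kept' approximates the implied "endogenous" prior for rho,
# truncated to roughly [0.2**0.25, 0.6**0.25].
```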
IRF and moment calibration can be defined in irf_calibration
and
moment_calibration
blocks:
Performing identification analysis
General Options
Numerical Options
Identification Strength Options
Moments Options
Spectrum Options
Minimal State Space System Options
Misc Options
Debug Options
Types of analysis and output files
The sensitivity analysis toolbox includes several types of
analyses. Sensitivity analysis results are saved locally in
<mod_file>/gsa
, where <mod_file>.mod
is the name of the Dynare
model file.
Sampling
The following binary files are produced:
- <mod_file>_prior.mat: this file stores information about the analyses performed sampling from the prior, i.e. pprior=1 and ppost=0;
- <mod_file>_mc.mat: this file stores information about the analyses performed sampling from multivariate normal, i.e. pprior=0 and ppost=0;
- <mod_file>_post.mat: this file stores information about analyses performed using the Metropolis posterior sample, i.e. ppost=1.
Stability Mapping
Figure files produced are of the form <mod_file>_prior_*.fig
and
store results for stability mapping from prior Monte-Carlo samples:
- <mod_file>_prior_stable.fig: plots of the Smirnov test and the correlation analyses confronting the cdf of the sample fulfilling Blanchard-Kahn conditions (blue color) with the cdf of the rest of the sample (red color), i.e. either instability or indeterminacy or the solution could not be found (e.g. the steady state solution could not be found by the solver);
- <mod_file>_prior_indeterm.fig: plots of the Smirnov test and the correlation analyses confronting the cdf of the sample producing indeterminacy (red color) with the cdf of the rest of the sample (blue color);
- <mod_file>_prior_unstable.fig: plots of the Smirnov test and the correlation analyses confronting the cdf of the sample producing explosive roots (red color) with the cdf of the rest of the sample (blue color);
- <mod_file>_prior_wrong.fig: plots of the Smirnov test and the correlation analyses confronting the cdf of the sample where the solution could not be found (e.g. the steady state solution could not be found by the solver - red color) with the cdf of the rest of the sample (blue color);
- <mod_file>_prior_calib.fig: plots of the Smirnov test and the correlation analyses splitting the sample fulfilling Blanchard-Kahn conditions, by confronting the cdf of the sample where IRF/moment restrictions are matched (blue color) with the cdf where IRF/moment restrictions are NOT matched (red color).
Similar conventions apply for <mod_file>_mc_*.fig
files, obtained
when samples from multivariate normal are used.
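The Smirnov test used throughout these plots compares two empirical CDFs. A minimal sketch of the underlying two-sample statistic (illustrative Python; the toolbox's actual implementation may differ):

```python
import bisect

def smirnov_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov (Smirnov) statistic: the largest
    vertical distance between the two empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_sample, x):
        # fraction of observations <= x
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

# Identical samples give 0, fully separated samples give 1; a large value
# flags a parameter whose distribution differs between, e.g., the
# "stable" and "unstable" subsamples.
```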
IRF/Moment restrictions
The following binary files are produced:
- <mod_file>_prior_restrictions.mat: this file stores information about the IRF/moment restriction analysis performed sampling from the prior ranges, i.e. pprior=1 and ppost=0;
- <mod_file>_mc_restrictions.mat: this file stores information about the IRF/moment restriction analysis performed sampling from multivariate normal, i.e. pprior=0 and ppost=0;
- <mod_file>_post_restrictions.mat: this file stores information about the IRF/moment restriction analysis performed using the Metropolis posterior sample, i.e. ppost=1.
Figure files produced are of the form
<mod_file>_prior_irf_calib_*.fig
and
<mod_file>_prior_moment_calib_*.fig
and store results for mapping
restrictions from prior Monte-Carlo samples:
- <mod_file>_prior_irf_calib_<ENDO_NAME>_vs_<EXO_NAME>_<PERIOD>.fig: plots of the Smirnov test and the correlation analyses splitting the sample fulfilling Blanchard-Kahn conditions, by confronting the cdf of the sample where the individual IRF restriction <ENDO_NAME> vs. <EXO_NAME> at period(s) <PERIOD> is matched (blue color) with the cdf where the IRF restriction is NOT matched (red color);
- <mod_file>_prior_irf_calib_<ENDO_NAME>_vs_<EXO_NAME>_ALL.fig: plots of the Smirnov test and the correlation analyses splitting the sample fulfilling Blanchard-Kahn conditions, by confronting the cdf of the sample where ALL the individual IRF restrictions for the same couple <ENDO_NAME> vs. <EXO_NAME> are matched (blue color) with the cdf where the IRF restrictions are NOT matched (red color);
- <mod_file>_prior_irf_restrictions.fig: plots visual information on the IRF restrictions compared to the actual Monte Carlo realization from the prior sample;
- <mod_file>_prior_moment_calib_<ENDO_NAME1>_vs_<ENDO_NAME2>_<LAG>.fig: plots of the Smirnov test and the correlation analyses splitting the sample fulfilling Blanchard-Kahn conditions, by confronting the cdf of the sample where the individual acf/ccf moment restriction <ENDO_NAME1> vs. <ENDO_NAME2> at lag(s) <LAG> is matched (blue color) with the cdf where the moment restriction is NOT matched (red color);
- <mod_file>_prior_moment_calib_<ENDO_NAME1>_vs_<ENDO_NAME2>_ALL.fig: plots of the Smirnov test and the correlation analyses splitting the sample fulfilling Blanchard-Kahn conditions, by confronting the cdf of the sample where ALL the individual acf/ccf moment restrictions for the same couple <ENDO_NAME1> vs. <ENDO_NAME2> are matched (blue color) with the cdf where the moment restrictions are NOT matched (red color);
- <mod_file>_prior_moment_restrictions.fig: plots visual information on the moment restrictions compared to the actual Monte Carlo realization from the prior sample.
Similar conventions apply for <mod_file>_mc_*.fig
and
<mod_file>_post_*.fig
files, obtained when samples from
multivariate normal or from posterior are used.
Reduced Form Mapping
When the option threshold_redform
is not set, or it is empty (the
default), this analysis estimates a multivariate smoothing spline
ANOVA model (the ’mapping’) for the selected entries in the transition
matrix and in the matrix of the shocks of the reduced form first order solution of
the model. This mapping is done either with prior samples or with MC
samples with neighborhood_width
. Unless neighborhood_width
is
set with MC samples, the mapping of the reduced form solution forces
the use of samples from prior ranges or prior distributions, i.e.:
pprior=1
and ppost=0
. It uses 250 samples to optimize
smoothing parameters and 1000 samples to compute the fit. The rest of
the sample is used for out-of-sample validation. One can also load a
previously estimated mapping with a new Monte-Carlo sample, to look at
the forecast for the new Monte-Carlo sample.
The following synthetic figures are produced:
- <mod_file>_redform_<endo name>_vs_lags_*.fig: shows bar charts of the sensitivity indices for the ten most important parameters driving the reduced form coefficients of the selected endogenous variables (namendo) versus lagged endogenous variables (namlagendo); the suffix log indicates the results for log-transformed entries;
- <mod_file>_redform_<endo name>_vs_shocks_*.fig: shows bar charts of the sensitivity indices for the ten most important parameters driving the reduced form coefficients of the selected endogenous variables (namendo) versus exogenous variables (namexo); the suffix log indicates the results for log-transformed entries;
- <mod_file>_redform_gsa(_log).fig: shows bar chart of all sensitivity indices for each parameter: this allows one to notice parameters that have a minor effect for any of the reduced form coefficients.
Detailed results of the analyses are shown in the subfolder
<mod_file>/gsa/redform_prior
for prior samples and in
<mod_file>/gsa/redform_mc
for MC samples with option
neighborhood_width
, where the detailed results of the estimation
of the single functional relationships between parameters
\theta and reduced form coefficient (denoted as y
hereafter) are stored in separate directories named as:
- <namendo>_vs_<namlagendo>, for the entries of the transition matrix;
- <namendo>_vs_<namexo>, for entries of the matrix of the shocks.
The following files are stored in each directory (we stick with prior sample but similar conventions are used for MC samples):
- <mod_file>_prior_<namendo>_vs_<namexo>.fig: histogram and CDF plot of the MC sample of the individual entry of the shock matrix, in-sample and out-of-sample fit of the ANOVA model;
- <mod_file>_prior_<namendo>_vs_<namexo>_map_SE.fig: for entries of the shock matrix it shows graphs of the estimated first order ANOVA terms y = f(\theta_i) for each deep parameter \theta_i;
- <mod_file>_prior_<namendo>_vs_<namlagendo>.fig: histogram and CDF plot of the MC sample of the individual entry of the transition matrix, in-sample and out-of-sample fit of the ANOVA model;
- <mod_file>_prior_<namendo>_vs_<namlagendo>_map_SE.fig: for entries of the transition matrix it shows graphs of the estimated first order ANOVA terms y = f(\theta_i) for each deep parameter \theta_i;
- <mod_file>_prior_<namendo>_vs_<namexo>_map.mat, <mod_file>_<namendo>_vs_<namlagendo>_map.mat: these files store information on the estimation.
When option logtrans_redform
is set, the ANOVA estimation is
performed using a log-transformation of each y. The ANOVA mapping is
then transformed back onto the original scale, to allow comparability
with the baseline estimation. Graphs for this log-transformed case
are stored in the same folder in files denoted with the _log
suffix.
When the option threshold_redform
is set, the analysis is
performed via Monte Carlo filtering, by displaying parameters that
drive the individual entry y
inside the range specified in
threshold_redform
. If no entry is found (or all entries are in the
range), the MCF algorithm ignores the range specified in
threshold_redform
and performs the analysis splitting the MC
sample of y
into deciles. Setting threshold_redform=[-inf inf]
triggers this approach for all y
’s.
Results are stored in subdirectories of <mod_file>/gsa/redform_prior
named
- <mod_file>_prior_<namendo>_vs_<namlagendo>_threshold, for the entries of the transition matrix;
- <mod_file>_prior_<namendo>_vs_<namexo>_threshold, for entries of the matrix of the shocks.
The files saved are named:
- <mod_file>_prior_<namendo>_vs_<namexo>_threshold.fig, <mod_file>_<namendo>_vs_<namlagendo>_threshold.fig: graphical outputs;
- <mod_file>_prior_<namendo>_vs_<namexo>_threshold.mat, <mod_file>_<namendo>_vs_<namlagendo>_threshold.mat: info on the analysis.
RMSE
The RMSE analysis can be performed with different types of sampling options:
- When pprior=1 and ppost=0, the toolbox analyzes the RMSEs for the Monte-Carlo sample obtained by sampling parameters from their prior distributions (or prior ranges): this analysis provides some hints about which parameter drives the fit of which observed series, prior to the full estimation;
- When pprior=0 and ppost=0, the toolbox analyzes the RMSEs for a multivariate normal Monte-Carlo sample, with covariance matrix based on the inverse Hessian at the optimum: this analysis is useful when maximum likelihood estimation is done (i.e. no Bayesian estimation);
- When ppost=1, the toolbox analyzes the RMSEs for the posterior sample obtained by Dynare’s Metropolis procedure.
The use of cases 2 and 3 requires an estimation step beforehand. To
facilitate the sensitivity analysis after estimation, the
dynare_sensitivity
command also allows you to indicate some
options of the estimation command
. These are:
datafile
nobs
first_obs
prefilter
presample
nograph
nodisplay
graph_format
conf_sig
loglinear
mode_file
Binary files produced by RMSE analysis are:
- <mod_file>_prior_*.mat: these files store the filtered and smoothed variables for the prior Monte-Carlo sample, generated when doing RMSE analysis (pprior=1 and ppost=0);
- <mod_file>_mc_*.mat: these files store the filtered and smoothed variables for the multivariate normal Monte-Carlo sample, generated when doing RMSE analysis (pprior=0 and ppost=0).
Figure files <mod_file>_rmse_*.fig store results for the RMSE analysis.
- <mod_file>_rmse_prior*.fig: save results for the analysis using prior Monte-Carlo samples;
- <mod_file>_rmse_mc*.fig: save results for the analysis using multivariate normal Monte-Carlo samples;
- <mod_file>_rmse_post*.fig: save results for the analysis using Metropolis posterior samples.
The following types of figures are saved (we show prior sample to fix ideas, but the same conventions are used for multivariate normal and posterior):
- <mod_file>_rmse_prior_params_*.fig: for each parameter, plots the cdfs corresponding to the best 10% RMSEs of each observed series (only those cdfs below the significance threshold alpha_rmse);
- <mod_file>_rmse_prior_<var_obs>_*.fig: if a parameter significantly affects the fit of var_obs, all possible trade-offs with other observables for the same parameter are plotted;
- <mod_file>_rmse_prior_<var_obs>_map.fig: plots the MCF analysis of parameters significantly driving the fit of the observed series var_obs;
- <mod_file>_rmse_prior_lnlik*.fig: for each observed series, plots in BLUE the cdf of the log-likelihood corresponding to the best 10% RMSEs, in RED the cdf of the rest of the sample and in BLACK the cdf of the full sample; this allows one to see the presence of some idiosyncratic behavior;
- <mod_file>_rmse_prior_lnpost*.fig: for each observed series, plots in BLUE the cdf of the log-posterior corresponding to the best 10% RMSEs, in RED the cdf of the rest of the sample and in BLACK the cdf of the full sample; this allows one to see idiosyncratic behavior;
- <mod_file>_rmse_prior_lnprior*.fig: for each observed series, plots in BLUE the cdf of the log-prior corresponding to the best 10% RMSEs, in RED the cdf of the rest of the sample and in BLACK the cdf of the full sample; this allows one to see idiosyncratic behavior;
- <mod_file>_rmse_prior_lik.fig: when lik_only=1, this shows the MCF tests for the filtering of the best 10% log-likelihood values;
- <mod_file>_rmse_prior_post.fig: when lik_only=1, this shows the MCF tests for the filtering of the best 10% log-posterior values.
Screening Analysis
Screening analysis does not require any additional options with respect to those listed in :ref:`Sampling Options <sampl-opt>`. The toolbox performs all the analyses required and displays results.
The results of the screening analysis with Morris sampling design are
stored in the subfolder <mod_file>/gsa/screen
. The data file
<mod_file>_prior
stores all the information of the analysis
(Morris sample, reduced form coefficients, etc.).
Screening analysis merely concerns reduced form coefficients. Similar synthetic bar charts as for the reduced form analysis with Monte-Carlo samples are saved:
- <mod_file>_redform_<endo name>_vs_lags_*.fig: shows bar charts of the elementary effect tests for the ten most important parameters driving the reduced form coefficients of the selected endogenous variables (namendo) versus lagged endogenous variables (namlagendo);
- <mod_file>_redform_<endo name>_vs_shocks_*.fig: shows bar charts of the elementary effect tests for the ten most important parameters driving the reduced form coefficients of the selected endogenous variables (namendo) versus exogenous variables (namexo);
- <mod_file>_redform_screen.fig: shows bar chart of all elementary effect tests for each parameter: this allows one to identify parameters that have a minor effect for any of the reduced form coefficients.
Identification Analysis
Setting the option identification=1
, an identification analysis
based on theoretical moments is performed. Sensitivity plots are
provided that allow inferring which parameters are most likely to be
less identifiable.
A prerequisite for properly running all the identification routines is
the keyword identification
; in the Dynare model file. This keyword
triggers the computation of analytic derivatives of the model with
respect to estimated parameters and shocks. This is required for
option morris=2
, which implements Iskrev (2010) identification
analysis.
For example, placing:
identification;
dynare_sensitivity(identification=1, morris=2);
in the Dynare model file triggers identification analysis using analytic derivatives as in Iskrev (2010), jointly with the mapping of the acceptable region.
The identification analysis with derivatives can also be triggered by the single command:
identification;
This does not do the mapping of acceptable regions for the model and
uses the standard random sampler of Dynare. Additionally, using only
identification;
adds two additional identification checks: namely,
of Qu and Tkachenko (2012) based on the spectral density and of
Komunjer and Ng (2011) based on the minimal state space system.
It completely bypasses the sensitivity analysis toolbox.
Markov-switching SBVAR
Given a list of variables, observed variables and a data file, Dynare can be used to solve a Markov-switching SBVAR model according to Sims, Waggoner and Zha (2008). [11] Having done this, you can create forecasts and compute the marginal data density, regime probabilities, IRFs, and variance decomposition of the model.
The commands have been modularized, allowing for multiple calls to the
same command within a <mod_file>.mod
file. The default is to use
<mod_file>
to tag the input (output) files used (produced) by the
program. Thus, to call any command more than once within a
<mod_file>.mod
file, you must use the *_tag
options described
below.
Epilogue Variables
The epilogue block is useful for computing output variables of interest that may not be necessarily defined in the model (e.g. various kinds of real/nominal shares or relative prices, or annualized variables out of a quarterly model).
It can also provide several advantages in terms of computational efficiency and flexibility:
- You can calculate variables in the epilogue block after smoothers/simulations have already been run without adding the new definitions and equations and rerunning smoothers/simulations. Even posterior smoother subdraws can be recycled for computing epilogue variables without rerunning subdraws with the new definitions and equations.
- You can also reduce the state space dimension in data filtering/smoothing. Assume, for example, you want annualized variables as outputs. If you define an annual growth rate in a quarterly model, you need lags up to order 7 of the associated quarterly variable; in a medium/large scale model this would just blow up the state dimension and increase by a huge amount the computing time of a smoother.
The `epilogue` block is terminated by `end;` and contains lines of the form:

NAME = EXPRESSION;
- Example
-
epilogue;
// annualized level of y
ya = exp(y)+exp(y(-1))+exp(y(-2))+exp(y(-3));
// annualized growth rate of y
gya = ya/ya(-4)-1;
end;
Displaying and saving results
Dynare has commands to plot the results of a simulation and to save the results.
Macro processing language
It is possible to use “macro” commands in the `.mod` file for performing tasks such as: including modular source files, replicating blocks of equations through loops, conditionally executing some code, or writing indexed sums or products inside equations.

The Dynare macro-language provides a new set of macro-commands which can be used in `.mod` files. It features:
- File inclusion
- Loops (`for` structure)
- Conditional inclusion (`if/then/else` structures)
- Expression substitution
This macro-language is totally independent of the basic Dynare language, and is processed by a separate component of the Dynare pre-processor. The macro processor transforms a `.mod` file with macros into a `.mod` file without macros (doing expansions/inclusions), and then feeds it to the Dynare parser. The key point to understand is that the macro processor only does text substitution (like the C preprocessor or the PHP language). Note that it is possible to see the output of the macro processor by using the `savemacro` option of the `dynare` command (see :ref:`dyn-invoc`).
The macro processor is invoked by placing macro directives in the `.mod` file. Directives begin with an at-sign followed by a pound sign (`@#`). They produce no output, but give instructions to the macro processor. In most cases, directives occupy exactly one line of text. If needed, two backslashes (`\\`) at the end of the line indicate that the directive is continued on the next line. Macro directives following `//` are not interpreted by the macro processor. For historical reasons, directives in commented blocks, i.e. surrounded by `/*` and `*/`, are interpreted by the macro processor. The user should not rely on this behavior. The main directives are:
- `@#includepath`, paths to search for files that are to be included,
- `@#include`, for file inclusion,
- `@#define`, for defining a macro processor variable,
- `@#if, @#ifdef, @#ifndef, @#elseif, @#else, @#endif`, for conditional statements,
- `@#for, @#endfor`, for constructing loops.
The macro processor maintains its own list of variables (distinct from model variables and MATLAB/Octave variables). These macro-variables are assigned using the `@#define` directive and can be of the following basic types: boolean, real, string, tuple, function, and array (of any of the previous types).
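As an illustration, here is one hypothetical `@#define` assignment for each basic type except functions (the variable names are ours, chosen for this sketch):

```
@#define b = true
@#define x = 1.5
@#define s = "EA"
@#define t = (1, 2, "three")
@#define v = [ 1, 2, 3 ]
```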
Macro expressions
Macro-expressions can be used in two places:
- Inside macro directives, directly;
- In the body of the `.mod` file, between an at-sign and curly braces (like `@{expr}`): the macro processor will substitute the expression with its value.
It is possible to construct macro-expressions that can be assigned to macro-variables or used within a macro-directive. The expressions are constructed using literals of the basic types (boolean, real, string, tuple, array), comprehensions, macro-variables, macro-functions, and standard operators.
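As a minimal sketch of in-body substitution (the variable name and equation are illustrative, assuming the relevant declarations exist elsewhere in the file):

```
@#define nlags = 4
model;
y = rho*y(-@{nlags}) + e;
end;
```

After macro processing, the equation reads `y = rho*y(-4) + e;`.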
Note
Elsewhere in the manual, MACRO_EXPRESSION designates an expression constructed as explained in this section.
Boolean
The following operators can be used on booleans:
- Comparison operators: `==, !=`
- Logical operators: `&&, ||, !`
Real
The following operators can be used on reals:
- Arithmetic operators: `+, -, *, /, ^`
- Comparison operators: `<, >, <=, >=, ==, !=`
- Logical operators: `&&, ||, !`
- Ranges with an increment of `1`: `REAL1:REAL2` (for example, `1:4` is equivalent to the real array `[1, 2, 3, 4]`).
- Ranges with user-defined increment: `REAL1:REAL2:REAL3` (for example, `6:-2.1:-1` is equivalent to the real array `[6, 3.9, 1.8, -0.3]`).
- Functions: `max, min, mod, exp, log, log10, sin, cos, tan, asin, acos, atan, sqrt, cbrt, sign, floor, ceil, trunc, erf, erfc, gamma, lgamma, round, normpdf, normcdf`. NB: `ln` can be used instead of `log`.
String
String literals have to be enclosed by double quotes (like `"name"`).
The following operators can be used on strings:
- Comparison operators: `<, >, <=, >=, ==, !=`
- Concatenation of two strings: `+`
- Extraction of substrings: if `s` is a string, then `s[3]` is a string containing only the third character of `s`, and `s[4:6]` contains the characters from the 4th to the 6th
- Function: `length`
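For instance, concatenation and substring extraction can be combined (a sketch; the variable names are illustrative):

```
@#define co = "US"
@#define name = "GDP_" + co
```

Here `@{name}` expands to `GDP_US`, `name[1:3]` evaluates to `"GDP"`, and `length(name)` to `6`.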
Tuple
Tuples are enclosed by parentheses and their elements are separated by commas (like `(a,b,c)` or `(1,2,3)`).
The following operators can be used on tuples:
- Comparison operators: `==, !=`
- Functions: `empty, length`
Array
Arrays are enclosed by brackets, and their elements are separated by commas (like `[1,[2,3],4]` or `["US", "FR"]`).
The following operators can be used on arrays:
- Comparison operators: `==, !=`
- Dereferencing: if `v` is an array, then `v[2]` is its 2nd element
- Concatenation of two arrays: `+`
- Set union of two arrays: `|`
- Set intersection of two arrays: `&`
- Difference `-`: returns the first operand from which the elements of the second operand have been removed
- Cartesian product of two arrays: `*`
- Cartesian product of one array N times: `^N`
- Extraction of sub-arrays: e.g. `v[4:6]`
- Testing membership of an array: `in` operator (for example: `"b" in ["a", "b", "c"]` returns `1`)
- Functions: `empty, sum, length`
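As a sketch of the set-style operators (the arrays are illustrative), consider:

```
@#define a = [ 1, 2, 3 ]
@#define b = [ 2, 4 ]
```

Given the descriptions above, one would expect `a + b` to evaluate to `[1, 2, 3, 2, 4]`, `a | b` to `[1, 2, 3, 4]`, `a & b` to `[2]`, and `a - b` to `[1, 3]`.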
Comprehension
Comprehension syntax is a shorthand way to make arrays from other arrays. There are three different ways the comprehension syntax can be employed: filtering, mapping, and filtering and mapping.
Filtering
Filtering allows one to choose those elements from an array for which a certain condition holds.
Example
Create a new array, choosing the even numbers from the array `1:5`:

[ i in 1:5 when mod(i,2) == 0 ]
would result in:
[2, 4]
Mapping
Mapping allows you to apply a transformation to every element of an array.
Example
Create a new array, squaring all elements of the array `1:5`:

[ i^2 for i in 1:5 ]
would result in:
[1, 4, 9, 16, 25]
Filtering and Mapping
Combining the two preceding ideas would allow one to apply a transformation to every selected element of an array.
Example
Create a new array, squaring all even elements of the array `1:5`:

[ i^2 for i in 1:5 when mod(i,2) == 0 ]
would result in:
[4, 16]
Further Examples
[ (j, i+1) for (i,j) in (1:2)^2 ]
[ (j, i+1) for (i,j) in (1:2)*(1:2) when i < j ]
would result in:
[(1, 2), (2, 2), (1, 3), (2, 3)]
[(2, 2)]
Function
Functions can be defined in the macro processor using the @#define
directive (see below). A function is evaluated at the time it is invoked, not
at define time. Functions can be included in expressions and the operators that
can be combined with them depend on their return type.
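For instance, a macro-function can be defined with `@#define` and then invoked inside a directive (a sketch; the function name is ours):

```
@#define distance(x, y) = sqrt(x^2 + y^2)
@#if distance(3, 4) > 4
// this branch is included, since distance(3, 4) evaluates to 5
@#endif
```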
Checking variable type
Given a variable name or literal, you can check the type it evaluates to using the following functions: `isboolean`, `isreal`, `isstring`, `istuple`, and `isarray`.
Examples
| Code | Output |
|---|---|
| `isboolean(0)` | `false` |
| `isboolean(true)` | `true` |
| `isreal("str")` | `false` |
Casting between types
Variables and literals of one type can be cast into another type. Some type changes are straightforward (e.g. changing a real to a string) whereas others have certain requirements (e.g. to cast an array to a real it must be a one element array containing a type that can be cast to real).
Examples
| Code | Output |
|---|---|
| `(bool) -1.1` | `true` |
| `(bool) 0` | `false` |
| `(real) "2.2"` | `2.2` |
| `(tuple) [3.3]` | `(3.3)` |
| `(array) 4.4` | `[4.4]` |
| `(real) [5.5]` | `5.5` |
| `(real) [6.6, 7.7]` | `error` |
| `(real) "8.8 in a string"` | `error` |
Casts can be used in expressions:
Examples
| Code | Output |
|---|---|
| `(bool) 0 && true` | `false` |
| `(real) "1" + 2` | `3` |
| `(string) (3 + 4)` | `"7"` |
| `(array) 5 + (array) 6` | `[5, 6]` |
Macro directives
Typical usages
Modularization
The `@#include` directive can be used to split `.mod` files into several modular components.

Example setup:

`modeldesc.mod`
    Contains variable declarations, model equations, and shocks declarations.

`simul.mod`
    Includes `modeldesc.mod`, calibrates parameters, and runs stochastic simulations.

`estim.mod`
    Includes `modeldesc.mod`, declares priors on parameters, and runs Bayesian estimation.

Dynare can be called on `simul.mod` and `estim.mod` but it makes no sense to run it on `modeldesc.mod`.
The main advantage is that you don't have to copy/paste the whole model (at the beginning) or changes to the model (during development).
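A minimal sketch of what `simul.mod` could look like under this setup (the calibration lines and parameter names are illustrative, not from the manual):

```
// simul.mod
@#include "modeldesc.mod"

// calibration
alpha = 0.3;
beta  = 0.99;

stoch_simul(order=1);
```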
Indexed sums of products
The following example shows how to construct a moving average:
@#define window = 2
var x MA_x;
...
model;
...
MA_x = @{1/(2*window+1)}*(
@#for i in -window:window
+x(@{i})
@#endfor
);
...
end;
After macro processing, this is equivalent to:
var x MA_x;
...
model;
...
MA_x = 0.2*(
+x(-2)
+x(-1)
+x(0)
+x(1)
+x(2)
);
...
end;
Multi-country models
Here is a skeleton example for a multi-country model:
@#define countries = [ "US", "EA", "AS", "JP", "RC" ]
@#define nth_co = "US"
@#for co in countries
var Y_@{co} K_@{co} L_@{co} i_@{co} E_@{co} ...;
parameters a_@{co} ...;
varexo ...;
@#endfor
model;
@#for co in countries
Y_@{co} = K_@{co}^a_@{co} * L_@{co}^(1-a_@{co});
...
@#if co != nth_co
(1+i_@{co}) = (1+i_@{nth_co}) * E_@{co}(+1) / E_@{co}; // UIP relation
@#else
E_@{co} = 1;
@#endif
@#endfor
end;
Endogeneizing parameters
When calibrating the model, it may be useful to consider a parameter as an endogenous variable (and vice-versa).
For example, suppose production is defined by a CES function:
y = \left(\alpha^{1/\xi} \ell^{1-1/\xi}+(1-\alpha)^{1/\xi}k^{1-1/\xi}\right)^{\xi/(\xi-1)}
and the labor share in GDP is defined as:
\textrm{lab\_rat} = (w \ell)/(p y)
In the model, \alpha is a (share) parameter and `lab_rat` is an endogenous variable.

It is clear that calibrating \alpha is not straightforward; on the contrary, we have real world data for `lab_rat` and it is clear that these two variables are economically linked.

The solution is to use a method called variable flipping, which consists in changing the way of computing the steady state. During this computation, \alpha will be made an endogenous variable and `lab_rat` will be made a parameter. An economically relevant value will be calibrated for `lab_rat`, and the solution algorithm will deduce the implied value for \alpha.
An implementation could consist of the following files:
`modeqs.mod`
    This file contains variable declarations and model equations. The code for the declaration of \alpha and `lab_rat` would look like:

    @#if steady
      var alpha;
      parameter lab_rat;
    @#else
      parameter alpha;
      var lab_rat;
    @#endif
`steady.mod`
    This file computes the steady state. It begins with:

    @#define steady = 1
    @#include "modeqs.mod"

    Then it initializes parameters (including `lab_rat`, excluding \alpha), computes the steady state (using guess values for endogenous, including \alpha), then saves values of parameters and endogenous at steady state in a file, using the `save_params_and_steady_state` command.
`simul.mod`
    This file computes the simulation. It begins with:

    @#define steady = 0
    @#include "modeqs.mod"

    Then it loads values of parameters and endogenous at steady state from file, using the `load_params_and_steady_state` command, and computes the simulations.
MATLAB/Octave loops versus macro processor loops
Suppose you have a model with a parameter \rho and you want to run simulations for three values: \rho = 0.8, 0.9, 1. There are several ways of doing this:
With a MATLAB/Octave loop
rhos = [ 0.8, 0.9, 1];
for i = 1:length(rhos)
  rho = rhos(i);
  stoch_simul(order=1);
end
Here the loop is not unrolled; MATLAB/Octave manages the iterations. This is interesting when there are a lot of iterations.
With a macro processor loop (case 1)
rhos = [ 0.8, 0.9, 1];
@#for i in 1:3
  rho = rhos(@{i});
  stoch_simul(order=1);
@#endfor
This is very similar to the previous example, except that the loop is unrolled. The macro processor manages the loop index but not the data array (`rhos`).
With a macro processor loop (case 2)
@#for rho_val in [ 0.8, 0.9, 1]
  rho = @{rho_val};
  stoch_simul(order=1);
@#endfor
The advantage of this method is that it uses a shorter syntax, since the list of values is directly given in the loop construct. The inconvenience is that you cannot reuse the macro array in MATLAB/Octave.
Verbatim inclusion
Pass everything contained within the verbatim block to the `<mod_file>.m` file.
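A minimal sketch of a `verbatim` block (the MATLAB code inside is illustrative; note that `%` must be used for comments here, not `//`):

```
verbatim;
% this native MATLAB code is copied unmodified into <mod_file>.m
disp('simulation finished');
end;
```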
Misc commands
Footnotes
[1] | A .mod file must have lines that end with a line feed character,
which is not commonly visible in text editors. Files created on
Windows and Unix-based systems have always conformed to this
requirement, as have files created on OS X and macOS. Files created
on old, pre-OS X Macs used carriage returns as end of line
characters. If you get a Dynare parsing error of the form ERROR:
<<mod file>>: line 1, cols 341-347: syntax error,... and there's
more than one line in your .mod file, know that it uses the
carriage return as an end of line character. To get more helpful
error messages, the carriage returns should be changed to line
feeds. |
[2] | Note that arbitrary MATLAB or Octave expressions can be put
in a .mod file, but those expressions have to be on
separate lines, generally at the end of the file for
post-processing purposes. They are not interpreted by Dynare,
and are simply passed on unmodified to MATLAB or
Octave. Those constructions are not addressed in this
section. |
[3] | In particular, for big models, the compilation step can be very time-consuming, and use of this option may be counter-productive in those cases. |
[4] | See options :ref:`conf_sig <confsig>` and :opt:`mh_conf_sig <mh_conf_sig = DOUBLE>` to change the size of the HPD interval. |
[5] | See options :ref:`conf_sig <confsig>` and :opt:`mh_conf_sig <mh_conf_sig = DOUBLE>` to change the size of the HPD interval. |
[6] | When the shocks are correlated, it is the decomposition of orthogonalized shocks via Cholesky decomposition according to the order of declaration of shocks (see :ref:`var-decl`) |
[7] | See :opt:`forecast <forecast = INTEGER>` for more information. |
[8] | In case of Excel not being installed, https://mathworks.com/matlabcentral/fileexchange/38591-xlwrite--generate-xls-x--files-without-excel-on-mac-linux-win may be helpful. |
[9] | See option :ref:`conf_sig <confsig>` to change the size of the HPD interval. |
[10] | See option :ref:`conf_sig <confsig>` to change the size of the HPD interval. |
[11] | If you want to align the paper with the description herein, please note that A is A^0 and F is A^+. |
[12] | An example can be found at https://git.dynare.org/Dynare/dynare/blob/master/tests/ms-dsge/test_ms_dsge.mod. |