Compare revisions

Showing with 3061 additions and 482 deletions
......@@ -51,12 +51,23 @@ description, please refer to the comments inside the files themselves.
Small open economy RBC model with shocks to the growth trend,
presented in *Aguiar and Gopinath (2004)*.
``Gali_2015.mod``
Basic New Keynesian model of *Galí (2015)*, Chapter 3 showing how to
i) use "system prior"-type prior restrictions as in *Andrle and Plašil (2018)*
and ii) run prior/posterior-functions.
``NK_baseline.mod``
Baseline New Keynesian Model estimated in *Fernández-Villaverde
(2010)*. It demonstrates how to use an explicit steady state file
to update parameters and call a numerical solver.
``Occbin_example.mod``
RBC model with two occasionally binding constraints. Demonstrates
how to set up Occbin.
``Ramsey_Example.mod``
File demonstrating how to conduct optimal policy experiments in a
......
......@@ -6,13 +6,14 @@ Currently the development team of Dynare is composed of:
* Stéphane Adjemian (Le Mans Université, Gains)
* Houtan Bastani
* Michel Juillard (Banque de France)
* Sumudu Kankanamge (Toulouse School of Economics and CEPREMAP)
* Sumudu Kankanamge (Le Mans Université and CEPREMAP)
* Frédéric Karamé (Le Mans Université, Gains and CEPREMAP)
* Junior Maih (Norges Bank)
* Ferhat Mihoubi (Université Paris-Est Créteil, Érudite)
* Willi Mutschler (University of Tübingen)
* Johannes Pfeifer (Universität der Bundeswehr München)
* Johannes Pfeifer (University of the Bundeswehr Munich)
* Marco Ratto (European Commission, Joint Research Centre - JRC)
* Normann Rion (CY Cergy Paris Université and CEPREMAP)
* Sébastien Villemot (CEPREMAP)
The following people used to be members of the team:
......@@ -25,7 +26,7 @@ The following people used to be members of the team:
* Stéphane Lhuissier
* George Perendia
Copyright © 1996-2021, Dynare Team.
Copyright © 1996-2023, Dynare Team.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
......
......@@ -7,18 +7,18 @@ Installation and configuration
Software requirements
=====================
Packaged versions of Dynare are available for Windows (7, 8.1, 10), several GNU/Linux
distributions (Debian, Ubuntu, Linux Mint, Arch Linux) and macOS
10.11 or later. Dynare should work on other systems, but some compilation steps
are necessary in that case.
Packaged versions of Dynare are available for Windows (8.1, 10 and 11), several
GNU/Linux distributions (Debian, Ubuntu, Linux Mint, Arch Linux), macOS (14
Sonoma), and FreeBSD. Dynare should work on other systems, but some
compilation steps are necessary in that case.
In order to run Dynare, you need one of the following:
* MATLAB version 8.3 (R2014a) or above;
* GNU Octave version 4.4 or above, with the statistics package from
`Octave-Forge`_. Note however that the Dynare installers for Windows and
macOS require a more specific version of Octave, as indicated on the download
page.
* MATLAB, any version ranging from 8.3 (R2014a) to 23.2 (R2023b);
* GNU Octave, any version ranging from 5.2.0 to 8.3.0, with the statistics
package from `Octave-Forge`_. Note however that the Dynare installer for
Windows requires a more specific version of Octave, as indicated on the
download page.
The following optional extensions are also useful to benefit from
extra features, but are in no way required:
......@@ -43,9 +43,9 @@ worry about your own files.
On Windows
----------
Execute the automated installer called ``dynare-4.x.y-win.exe`` (where
``4.x.y`` is the version number), and follow the instructions. The
default installation directory is ``c:\dynare\4.x.y``.
Execute the automated installer called ``dynare-x.y-win.exe`` (where
``x.y`` is the version number), and follow the instructions. The
default installation directory is ``c:\dynare\x.y``.
After installation, this directory will contain several
sub-directories, among which are ``matlab``, ``mex`` and ``doc``.
......@@ -80,41 +80,66 @@ On Arch Linux, the Dynare package is not in the official repositories, but is
available in the `Arch User Repository`_. The needed sources can be
downloaded from the `package status in Arch Linux`_.
Dynare will be installed under ``/usr/lib/dynare``. Documentation will
be under ``/usr/share/doc/dynare-doc`` (only on Debian, Ubuntu and Linux Mint).
There is also a Dynare package for openSUSE, see the `package status in
openSUSE`_.
Dynare will be installed under ``/usr/lib/dynare`` (or ``/usr/lib64/dynare`` on
openSUSE). Documentation will be under ``/usr/share/doc/dynare`` (only on
Debian, Ubuntu and Linux Mint).
On macOS
--------
With MATLAB
^^^^^^^^^^^
To install Dynare for use with MATLAB, execute the automated installer called
``dynare-4.x.y.pkg`` (where *4.x.y* is the version number), and follow the
instructions. The default installation directory is
``/Applications/Dynare/4.x.y``. After installation, this directory will contain
several sub-directories, among which are ``matlab``, ``mex``, and ``doc``.
``dynare-x.y-arch.pkg`` (where *x.y* is the version number and *arch* is either arm64 for Apple Silicon or x86_64 for Intel architectures),
and follow the instructions.
This installation does not require administrative privileges.
If for some reason admin rights are requested, use *Change Install Location* and select *Install for me only*.
The default installation directory is ``/Applications/Dynare/x.y-arch``.
Installing into ``/Applications/dynare`` might fail if you have older versions of Dynare already installed in ``/Applications/Dynare``.
To fix this, modify the ownership by executing the following command in Terminal.app::
sudo chown -R "$USER":staff /Applications/Dynare
Alternatively, you can modify the installation path in the automated installer using *Customize* and *Location*.
After installation, the folder will contain several sub-directories, among which are ``matlab``, ``mex``, and ``doc``.
Several versions of Dynare can coexist (by default in ``/Applications/Dynare``),
as long as you correctly adjust your path settings (see :ref:`words-warning`).
It is recommended to install the Xcode Command Line Tools (this is an Apple product)
and GCC via Homebrew_ (see :ref:`prerequisites-macos`).
With Octave
^^^^^^^^^^^
Note that several versions of Dynare can coexist (by default in
``/Applications/Dynare``), as long as you correctly adjust your path
settings (see :ref:`words-warning`).
We don’t provide Dynare packages for macOS with Octave support, but there is a
Dynare package with Octave support in Homebrew_.
By default, the installer installs a version of GCC (for use with :opt:`use_dll`)
in the installation directory, under the ``.brew`` folder. To do so, it also
installs a version of `Homebrew <https://brew.sh>`__ in the same folder and
Xcode Command Line Tools (this is an Apple product) in a system folder.
Once Homebrew_ is installed, run a terminal and install Dynare (and Octave) by
typing the following::
All of this requires a bit of time and hard disk space. The amount of time it
takes will depend on your computing power and internet connection. To reduce
the time the Dynare installer takes, you can install Xcode Command Line Tools
yourself (see :ref:`prerequisites-macos`). Dynare, Homebrew, and GCC use
about 600 MB of disk space while the Xcode Command Line Tools require about 400
MB.
brew install dynare
If you do not use the :opt:`use_dll` option, you have the choice to forgo the
installation of GCC and hence Dynare will only take about 50 MB of disk space.
Then open Octave by running the following in the same terminal::
octave --gui
Finally, at the Octave prompt, install some add-ons (you only have to do it
once)::
octave:1> pkg install -forge io statistics control struct optim
On FreeBSD
----------
Dynare for Octave works with Octave installed via the package located here:
`https://octave-app.org <https://octave-app.org>`__.
A `FreeBSD port for Dynare <https://www.freshports.org/science/dynare/>`__ is
available. It can be installed with::
pkg install dynare
For other systems
-----------------
......@@ -154,12 +179,40 @@ install liboctave-dev``).
Prerequisites on macOS
----------------------
With MATLAB
^^^^^^^^^^^
Dynare now ships a compilation environment that can be used with the
:opt:`use_dll` option. To install this environment correctly, the Dynare
installer ensures that the Xcode Command Line Tools (an Apple product) have
been installed on a system folder. To install the Xcode Command Line Tools
yourself, simply type ``xcode-select --install`` into the Terminal
yourself, simply type ``xcode-select --install`` into the terminal
(``/Applications/Utilities/Terminal.app``) prompt.
Additionally, to make MATLAB aware that you agree to the terms of Xcode, run the following commands in the Terminal prompt::
CLT_VERSION=$(pkgutil --pkg-info=com.apple.pkg.CLTools_Executables | grep version | awk '{print $2}' | cut -d'.' -f1-2)
defaults write com.apple.dt.Xcode IDEXcodeVersionForAgreedToGMLicense "${CLT_VERSION}"
defaults read com.apple.dt.Xcode IDEXcodeVersionForAgreedToGMLicense
Otherwise you will see a warning that Xcode is installed, but its license has not been accepted.
You can check this e.g. by running the following command in the MATLAB command window::
mex -setup
Moreover, we recommend making use of optimized compilation flags when using :opt:`use_dll` and for this you need to install GCC via Homebrew_::
brew install gcc
If you already have installed GCC, Dynare will automatically prefer it for :opt:`use_dll`
if the binaries are either in ``/opt/homebrew/bin`` on Apple Silicon (arm64) or in ``/usr/local/bin`` on Intel (x86_64) systems.
Otherwise, it will fall back to Clang in ``/usr/bin/clang``, which works both on arm64 and x86_64 systems.
With Octave
^^^^^^^^^^^
The compiler can be installed via Homebrew_. In a terminal, run::
brew install gcc
Configuration
=============
......@@ -176,20 +229,20 @@ installation to MATLAB path. You have two options for doing that:
* Using the ``addpath`` command in the MATLAB command window:
Under Windows, assuming that you have installed Dynare in the
standard location, and replacing ``4.x.y`` with the correct version
standard location, and replacing ``x.y`` with the correct version
number, type::
>> addpath c:/dynare/4.x.y/matlab
>> addpath c:/dynare/x.y/matlab
Under GNU/Linux, type::
>> addpath /usr/lib/dynare/matlab
Under macOS, assuming that you have installed Dynare in the standard
location, and replacing ``4.x.y`` with the correct version number,
location, and replacing ``x.y`` with the correct version number,
type::
>> addpath /Applications/Dynare/4.x.y/matlab
>> addpath /Applications/Dynare/x.y/matlab
MATLAB will not remember this setting next time you run it, and you
will have to do it again.
......@@ -211,20 +264,19 @@ installation to Octave path, using the ``addpath`` at the Octave
command prompt.
Under Windows, assuming that you have installed Dynare in the standard
location, and replacing “*4.x.y*” with the correct version number,
location, and replacing “*x.y*” with the correct version number,
type::
octave:1> addpath c:/dynare/4.x.y/matlab
octave:1> addpath c:/dynare/x.y/matlab
Under Debian, Ubuntu or Linux Mint, there is no need to use the ``addpath``
command; the packaging does it for you. Under Arch Linux, you need to do::
octave:1> addpath /usr/lib/dynare/matlab
Under macOS, assuming you have installed Octave via `https://octave-app.org
<https://octave-app.org>`__, type::
Under macOS, assuming you have installed Dynare via Homebrew_::
octave:1> addpath /Applications/Dynare/4.x.y/matlab
octave:1> addpath /usr/local/lib/dynare/matlab
If you don’t want to type this command every time you run Octave, you
can put it in a file called ``.octaverc`` in your home directory
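For instance, assuming the Homebrew installation path shown above, the ``.octaverc`` could contain the single line::

    addpath /usr/local/lib/dynare/matlab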
......@@ -267,7 +319,9 @@ Dynare unusable.
.. _Package status in Ubuntu: https://launchpad.net/ubuntu/+source/dynare
.. _Package status in Linux Mint: https://community.linuxmint.com/software/view/dynare
.. _Package status in Arch Linux: https://aur.archlinux.org/packages/dynare/
.. _Package status in openSUSE: https://software.opensuse.org/package/dynare
.. _Arch User Repository: https://wiki.archlinux.org/index.php/Arch_User_Repository
.. _Dynare website: https://www.dynare.org/
.. _Dynare wiki: https://git.dynare.org/Dynare/dynare/wikis
.. _Octave-Forge: https://octave.sourceforge.io/
.. _Homebrew: https://brew.sh
......@@ -94,26 +94,26 @@ Citing Dynare in your research
You should cite Dynare if you use it in your research. The
recommended way to do this is to cite the present manual, as:
Stéphane Adjemian, Houtan Bastani, Michel Juillard, Frédéric
Karamé, Junior Maih, Ferhat Mihoubi, Willi Mutschler, George Perendia, Johannes Pfeifer,
Marco Ratto and Sébastien Villemot (2011), “Dynare: Reference Manual,
Version 4,” *Dynare Working Papers*, 1, CEPREMAP
Stéphane Adjemian, Houtan Bastani, Michel Juillard, Frédéric Karamé,
Ferhat Mihoubi, Willi Mutschler, Johannes Pfeifer, Marco Ratto,
Normann Rion and Sébastien Villemot (2022), “Dynare: Reference Manual,
Version 5,” *Dynare Working Papers*, 72, CEPREMAP
For convenience, you can copy and paste the following into your BibTeX file:
.. code-block:: bibtex
@TechReport{Adjemianetal2011,
@TechReport{Adjemianetal2022,
author = {Adjemian, St\'ephane and Bastani, Houtan and
Juillard, Michel and Karam\'e, Fr\'ederic and
Maih, Junior and Mihoubi, Ferhat and Mutschler, Willi
and Perendia, George and Pfeifer, Johannes and
Ratto, Marco and Villemot, S\'ebastien},
title = {Dynare: Reference Manual Version 4},
year = {2011},
Mihoubi, Ferhat and Mutschler, Willi
and Pfeifer, Johannes and Ratto, Marco and
Rion, Normann and Villemot, S\'ebastien},
title = {Dynare: Reference Manual Version 5},
year = {2022},
institution = {CEPREMAP},
type = {Dynare Working Papers},
number = {1},
number = {72},
}
If you want to give a URL, use the address of the Dynare website:
......
......@@ -204,7 +204,7 @@ by the ``dynare`` command.
.. option:: params_derivs_order=0|1|2
When :comm:`identification`, :comm:`dynare_sensitivity` (with
identification), or :ref:`estimation_cmd <estim-comm>` are
identification), or :ref:`estimation <estim-comm>` are
present, this option is used to limit the order of the
derivatives with respect to the parameters that are calculated
by the preprocessor. 0 means no derivatives, 1 means first
......@@ -216,7 +216,8 @@ by the ``dynare`` command.
.. option:: notime
Do not print the total computing time at the end of the driver.
Do not print the total computing time at the end of the driver, and do
not save that total computing time to ``oo_.time``.
.. option:: transform_unary_ops
......@@ -229,7 +230,7 @@ by the ``dynare`` command.
.. option:: json = parse|check|transform|compute
Causes the preprocessor to output a version of the ``.mod`` file in
JSON format to ``<<M_.dname>>/model/json/``.
JSON format to ``<<M_.fname>>/model/json/``.
When the JSON output is created depends on the value
passed. These values represent various steps of processing in the
preprocessor.
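For instance, a hypothetical invocation requesting the JSON output after the computing pass could look as follows::

    >> dynare modfile.mod json=compute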
......@@ -314,7 +315,7 @@ by the ``dynare`` command.
Prevent Dynare from printing the output of the steps leading up to the
preprocessor as well as the preprocessor output itself.
.. option:: mexext=mex|mexw32|mexw64|mexmaci64|mexa64
.. option:: mexext=mex|mexw64|mexmaci64|mexa64
The mex extension associated with your platform to be used
when compiling output associated with :opt:`use_dll`.
......@@ -362,7 +363,8 @@ by the ``dynare`` command.
For local execution under Windows operating system,
set ``parallel_use_psexec=false`` to use ``start``
instead of ``psexec``, to properly allocate affinity when there are
more than 32 cores in the local machine. [default=true]
more than 32 cores in the local machine. This option is also helpful if
``psexec`` cannot be executed due to missing administrator privileges. [default=true]
.. option:: -DMACRO_VARIABLE=MACRO_EXPRESSION
......@@ -371,7 +373,9 @@ by the ``dynare`` command.
:ref:`macro-proc-lang`). See the :ref:`note on quotes<quote-note>` for
info on passing a ``MACRO_EXPRESSION`` argument containing spaces. Note
that an expression passed on the command line can reference variables
defined before it.
Strings assigned to a macro variable need to be enclosed in double
quotes. This also allows passing single quotes within the
strings.
*Example*
......@@ -379,7 +383,7 @@ by the ``dynare`` command.
.. code-block:: matlab
>> dynare <<modfile.mod>> -DA=true '-DB="A string with space"' -DC=[1,2,3] '-DD=[ i in C when i > 1 ]'
>> dynare <<modfile.mod>> -DA=true '-DB="A string with space"' -DC=[1,2,3] '-DD=[ i in C when i > 1 ]' -Ddatafile_name="'my_data_file.mat'"
.. option:: -I<<path>>
......@@ -404,9 +408,11 @@ by the ``dynare`` command.
.. option:: fast
Only useful with model option :opt:`use_dll`. Don’t recompile the
MEX files when running again the same model file and the lists
of variables and the equations haven’t changed. We use a 32
Don’t rewrite the output files otherwise written to the disk by the preprocessor
when re-running the same model file while the lists of variables and the equations
haven’t changed. Note that the whole model still needs to be preprocessed. This option
is most useful with model option :opt:`use_dll`, because
the time-consuming compilation of the MEX files will be skipped. We use a 32
bit checksum, stored in ``<model filename>/checksum``. There
is a very small probability that the preprocessor misses a
change in the model. In case of doubt, re-run without the fast
......@@ -551,9 +557,9 @@ by the ``dynare`` command.
called ``FILENAME_results.mat`` located in the ``MODFILENAME/Output`` folder.
If they exist, ``estim_params_``,
``bayestopt_``, ``dataset_``, ``oo_recursive_`` and
``estimation_info`` are saved in the same file. Note that Matlab
by default only allows ``.mat``-files up to 2GB. You can lift this
restriction by enabling the ``save -v7.3``-option in
``estimation_info`` are saved in the same file. Note that MATLAB
by default only allows ``.mat`` files up to 2GB. You can lift this
restriction by enabling the ``save -v7.3`` option in
``Preferences -> General -> MAT-Files``.
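For instance, for a hypothetical ``mymodel.mod``, the stored results can later be reloaded at the MATLAB prompt with::

    >> load('mymodel/Output/mymodel_results.mat', 'oo_')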
.. matvar:: M_
......@@ -582,6 +588,11 @@ by the ``dynare`` command.
saved in the `i` -th field. The fields for non-estimated
endpoints are empty.
.. matvar:: oo_.time
Total computing time of the Dynare run, in seconds. This field is not
set if the :opt:`notime` option has been used.
*Example*
Call dynare from the MATLAB or Octave prompt, without or with options:
......@@ -663,7 +674,7 @@ parser would continue processing.
It is also helpful to keep in mind that any piece of code that does not violate
Dynare syntax, but at the same time is not recognized by the parser, is interpreted
as native MATLAB code. This code will be directly passed to the ``driver`` script.
Investigating ``driver.m`` file then helps with debugging. Such problems most often
as native MATLAB code. This code will be directly passed to the driver script.
Investigating the ``driver.m`` file then helps with debugging. Such problems most often
occur when defined variable or parameter names have been misspelled so that Dynare's
parser is unable to recognize them.
......@@ -54,7 +54,7 @@ conventions such as ``USER_NAME`` have been excluded for concision):
``PATH_AND_FILE``
Indicates a valid path to a file in the underlying operating
system (e.g. ``/usr/local/MATLAB/R2010b/bin/matlab``).
system (e.g. ``/usr/local/MATLAB/R2023b/bin/matlab``).
``BOOLEAN``
......@@ -161,6 +161,13 @@ For a UNIX grid:
the master to the slaves can be done without passwords, or
using an SSH agent.
.. warning:: Compatibility considerations between master and slave
It is highly recommended to use the same version of Dynare on both the
master and all slaves. Different versions regularly cause problems like
zero acceptance rates during estimation. When upgrading to a newer Dynare
version do not forget to adjust the ``DynarePath``.
We now turn to the description of the configuration directives. Note
that comments in the configuration file can be provided by separate
lines starting with a hashtag (#).
......@@ -263,17 +270,41 @@ lines starting with a hashtag (#).
.. option:: MatlabOctavePath = PATH_AND_FILE
The path to the MATLAB or Octave executable. The default value
is ``matlab``.
is ``matlab`` as MATLAB’s executable is typically in the %PATH% environment
variable. When using full paths on Windows, you may need to enclose the path
in quoted strings, e.g. ``MatlabOctavePath="C:\Program Files\MATLAB\R2023b\bin\matlab.exe"``
.. option:: NumberOfThreadsPerJob = INTEGER
For Windows nodes, sets the number of threads assigned to each
remote MATLAB/Octave run. The default value is 1.
This option controls the distribution of jobs (e.g. MCMC chains) across additional MATLAB instances that are run in parallel.
Needs to be an exact divisor of the number of cores.
The formula :opt:`CPUnbr <CPUnbr = INTEGER | [INTEGER:INTEGER]>` divided by :opt:`NumberOfThreadsPerJob <NumberOfThreadsPerJob = INTEGER>`
calculates the number of MATLAB/Octave instances that will be launched in parallel,
where each instance will then execute a certain number of jobs sequentially.
For example, if you run an MCMC estimation with 24 chains on a 12-core machine, setting ``CPUnbr = 12`` and ``NumberOfThreadsPerJob = 4``
will launch 3 MATLAB instances in parallel, each of which will compute 8 chains sequentially.
Note that this option does not dictate the number of maximum threads utilized by each MATLAB/Octave instance,
see related option :opt:`SingleCompThread <SingleCompThread = BOOLEAN>` for this.
Particularly for very large models, setting this option to 2 might distribute the workload in a
more efficient manner, depending on your hardware and task specifics.
It’s advisable to experiment with different values to achieve optimal performance.
The default value is ``1``.
.. option:: SingleCompThread = BOOLEAN
Whether or not to disable MATLAB’s native multithreading. The
default value is ``false``. Option meaningless under Octave.
This option allows you to enable or disable MATLAB’s native multithreading capability. When set to ``true``,
the additional MATLAB instances are initiated in single thread mode utilizing the ``-singleCompThread`` startup option,
thereby disabling MATLAB’s native multithreading. When set to ``false``, MATLAB’s native multithreading
is enabled, e.g. the actual number of threads utilized by each MATLAB instance is usually determined by the number of CPU cores
(you can check this by running ``maxNumCompThreads`` in MATLAB’s command window).
Note: While MATLAB aims to accelerate calculations by distributing them across your computer’s threads,
certain tasks, like MCMC estimations, may exhibit slowdowns with MATLAB’s multithreading, especially when Dynare’s parallel computing is turned on,
as we do not use MATLAB’s parallel toolbox.
So in many cases it is advisable to set this option to ``true``.
If you want to have more control, you can manually add the MATLAB command ``maxNumCompThreads(N)`` at the beginning of ``fParallel.m``.
The default value is ``false``. This option is ineffective under Octave.
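To illustrate the two preceding options, the relevant lines of a node definition for the 12-core example above might look as follows (a sketch only; the node name ``n1`` is hypothetical and the remaining node and cluster directives are omitted)::

    [node]
    Name = n1
    ComputerName = localhost
    CPUnbr = 12
    NumberOfThreadsPerJob = 4
    SingleCompThread = true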
.. option:: OperatingSystem = OPERATING_SYSTEM
......@@ -316,7 +347,9 @@ Windows Step-by-Step Guide
==========================
This section outlines the steps necessary on most Windows systems to
set up Dynare for parallel execution.
set up Dynare for parallel execution. Note that steps 3 to 6 are
required unless parallel execution is confined to a local pool
with the ``parallel_use_psexec=false`` option.
1. Write a configuration file containing the options you want. A
minimum working example setting up a cluster consisting of two
......@@ -384,6 +417,6 @@ set up Dynare for parallel execution.
#so we only need to provide the name of the exe file
MatlabOctavePath=matlab
#Dynare path you are using
DynarePath=C:/dynare/2016-05-10/matlab
DynarePath=C:/dynare/4.7.0/matlab
.. _PsTools: https://technet.microsoft.com/sysinternals/pstools.aspx
......@@ -112,11 +112,10 @@ parameters must not have the same name as Dynare commands or built-in
functions. In this respect, Dynare is not case-sensitive. For example,
do not use ``Ln`` or ``Sigma_e`` to name your variable. Not conforming
to this rule might yield hard-to-debug error messages or
crashes. Second, to minimize interference with MATLAB or Octave
functions that may be called by Dynare or user-defined steady state
files, it is recommended to avoid using the name of MATLAB
functions. In particular when working with steady state files, do not
use correctly-spelled greek names like `alpha`, because there are
crashes. Second, when employing user-defined steady state files it is
recommended to avoid using the name of MATLAB functions as this may cause
conflicts. In particular, when working with user-defined steady state files, do not
use correctly-spelled Greek names like ``alpha``, because there are
MATLAB functions of the same name. Rather go for ``alppha`` or
``alph``. Lastly, please do not name a variable or parameter
``i``. This may interfere with the imaginary number i and the index in
......@@ -186,8 +185,8 @@ for declaring variables and parameters are described below.
 
::
 
var c gnp cva (country=`US', state=`VA')
cca (country=`US', state=`CA', long_name=`Consumption CA');
var c gnp cva (country='US', state='VA')
cca (country='US', state='CA', long_name='Consumption CA');
var(deflator=A) i b;
var c $C$ (long_name=`Consumption');
 
......@@ -399,7 +398,7 @@ for declaring variables and parameters are described below.
.. command:: trend_var (growth_factor = MODEL_EXPR) VAR_NAME [$LATEX_NAME$]...;
 
|br| This optional command declares the trend variables in the
model. See ref:`conv` for the syntax of MODEL_EXPR and
model. See :ref:`conv` for the syntax of MODEL_EXPR and
VAR_NAME. Optionally it is possible to give a
LaTeX name to the variable.
 
......@@ -458,21 +457,22 @@ On-the-fly Model Variable Declaration
 
Endogenous variables, exogenous variables, and parameters can also be declared
inside the model block. You can do this in two different ways: either via the
equation tag or directly in an equation.
equation tag (only for endogenous variables) or directly in an equation (for
endogenous, exogenous or parameters).
 
To declare a variable on-the-fly in an equation tag, simply state the type of
variable to be declared (``endogenous``, ``exogenous``, or
``parameter`` followed by an equal sign and the variable name in single
To declare an endogenous variable on-the-fly in an equation tag, simply write
``endogenous`` followed by an equal sign and the variable name in single
quotes. Hence, to declare a variable ``c`` as endogenous in an equation tag,
you can type ``[endogenous='c']``.
 
To perform on-the-fly variable declaration in an equation, simply follow the
symbol name with a vertical line (``|``, pipe character) and either an ``e``, an
``x``, or a ``p``. For example, to declare a parameter named
``alphaa`` in the model block, you could write ``alphaa|p`` directly in
an equation where it appears. Similarly, to declare an endogenous variable
``c`` in the model block you could write ``c|e``. Note that in-equation
on-the-fly variable declarations must be made on contemporaneous variables.
symbol name with a vertical line (``|``, pipe character) and either an ``e``
(for endogenous), an ``x`` (for exogenous), or a ``p`` (for parameter). For
example, to declare a parameter named ``alphaa`` in the model block, you could
write ``alphaa|p`` directly in an equation where it appears. Similarly, to
declare an endogenous variable ``c`` in the model block you could write
``c|e``. Note that in-equation on-the-fly variable declarations must be made on
contemporaneous variables.
 
On-the-fly variable declarations do not have to appear in the first place where
this variable is encountered.
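As an illustration, the following fragment (not a complete model; the names are purely hypothetical) declares ``c`` as endogenous via an equation tag, ``y`` and ``k`` as endogenous and ``delta`` and ``rho`` as parameters directly in the equations, and ``eps_y`` as exogenous::

    model;
    [endogenous='c']
    c = y|e - delta|p*k|e;
    log(y) = rho|p*log(y(-1)) + eps_y|x;
    end;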
......@@ -617,6 +617,19 @@ not in EXPRESSION):
Exogenous and exogenous deterministic variables may not appear in
MODEL_EXPRESSION.
 
.. warning::
The concept of a steady state is ambiguous in a perfect foresight
context with permanent and potentially anticipated shocks occurring.
Dynare will use the contents of ``oo_.steady_state`` as its reference
for calls to the ``STEADY_STATE()`` operator. In the presence of
``endval``, this implies that the terminal state provided by the
user is used. This may be a steady state computed by Dynare (if ``endval``
is followed by ``steady``) or simply the terminal state provided by the
user (if ``endval`` is not followed by ``steady``). Put differently,
Dynare will not automatically compute the steady state conditional on
the specified value of the exogenous variables in the respective periods.
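As an illustration (a sketch with a hypothetical exogenous variable ``e``), appending ``steady`` to the ``endval`` block makes ``STEADY_STATE()`` refer to a terminal steady state computed by Dynare rather than to the raw terminal values entered by the user::

    endval;
    e = 0.5;
    end;

    steady; // oo_.steady_state, and hence STEADY_STATE(), now holds the terminal steady state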
.. operator:: EXPECTATION (INTEGER) (MODEL_EXPRESSION)
 
This operator is used to take the expectation of some expression
......@@ -920,7 +933,7 @@ The model is declared inside a ``model`` block:
can serve different purposes by allowing the user to attach
arbitrary information to each equation and to recover it at
runtime. For instance, it is possible to name the equations with a
``name``-tag, using a syntax like::
``name`` tag, using a syntax like::
 
model;
 
......@@ -943,7 +956,7 @@ The model is declared inside a ``model`` block:
 
end;
 
More information on tags is available at `<https://archives.dynare.org/DynareWiki/EquationsTags>`__.
More information on tags is available at `<https://git.dynare.org/Dynare/dynare/-/wikis/Equations-Tags>`__.
 
*Options*
 
......@@ -1286,11 +1299,14 @@ the form ``MULT_i``, where *i* represents the constraint with which
the multiplier is associated (counted from the order of declaration in
the model block).
 
The last type of auxiliary variables is introduced by the
Auxiliary variables are also introduced by the
``differentiate_forward_vars`` option of the model block. The new
variables take the form ``AUX_DIFF_FWRD_i``, and are equal to
``x-x(-1)`` for some endogenous variable ``x``.
 
Finally, auxiliary variables will arise in the context of employing the
``diff`` operator.
Once created, all auxiliary variables are included in the set of
endogenous variables. The output of decision rules (see below) is such
that auxiliary variable names are replaced by the original variables
......@@ -1301,7 +1317,7 @@ variables is stored in ``M_.orig_endo_nbr``, and the number of
endogenous variables after the creation of auxiliary variables is
stored in ``M_.endo_nbr``.
 
See `<https://archives.dynare.org/DynareWiki/AuxiliaryVariables>`__ for more technical details on auxiliary variables.
See `<https://git.dynare.org/Dynare/dynare/-/wikis/Auxiliary-variables>`__ for more technical details on auxiliary variables.
 
 
.. _init-term-cond:
......@@ -1415,6 +1431,10 @@ in this case ``initval`` is used to specify the terminal conditions.
steady state computation will still be triggered by subsequent
commands (``stoch_simul``, ``estimation``...).
 
As such, ``initval`` allows specifying the initial instrument value for
steady state finding when providing an analytical conditional steady state
file for ``ramsey_model`` computations.
It is not necessary to declare 0 as initial value for exogenous
stochastic variables, since it is the only possible value.
 
......@@ -1677,7 +1697,7 @@ in this case ``initval`` is used to specify the terminal conditions.
jumps. In the example above, consumption will display a large
jump from :math:`t=0` to :math:`t=1` and capital will jump
from :math:`t=200` to :math:`t=201` when using :comm:`rplot`
or manually plotting ``oo_.endo_val``.
or manually plotting ``oo_.endo_simul``.
 
 
.. block:: histval ;
......@@ -1757,11 +1777,12 @@ in this case ``initval`` is used to specify the terminal conditions.
* In :comm:`conditional_forecast` for a calibrated model as
the initial point at which the conditional forecasts are
computed. When using the :ref:`loglinear <logl>` option, the
histval-block nevertheless takes the unlogged starting
``histval`` block nevertheless takes the unlogged starting
values.
* In :comm:`Ramsey policy <ramsey_model>`, where it also
specifies the values of the endogenous states at which the
objective function of the planner is computed. Note that the
specifies the values of the endogenous states (including
lagged exogenous) at which the objective function of the
planner is computed. Note that the
initial values of the Lagrange multipliers associated with
the planner’s problem cannot be set (see
:comm:`evaluate_planner_objective`).
......@@ -2422,6 +2443,16 @@ blocks.
arbitrary expressions are also allowed, but you have to enclose
them inside parentheses.
 
The feasible range of ``periods`` is from 0 to the number of ``periods``
specified in ``perfect_foresight_setup``.
.. warning:: Note that the first endogenous simulation period is period 1.
Thus, a shock value specified for the initial period 0 may conflict with
(i.e. may overwrite or be overwritten by) values for the
initial period specified with ``initval`` or ``endval`` (depending on
the exact context). Users should always verify the correct setting
of ``oo_.exo_simul`` after ``perfect_foresight_setup``.
*Example* (with scalar values)
 
::
......@@ -2508,6 +2539,27 @@ blocks.
var v, w = 2;
end;
 
|br| *In stochastic optimal policy context*
When computing conditional welfare in a ``ramsey_model`` or ``discretionary_policy``
context, welfare is conditional on the state values inherited by the planner when making
choices in the first period. The information set of the first period includes the
respective exogenous shock realizations. Thus, their known value can be specified
using the perfect foresight syntax. Note that i) all values specified for
periods other than period 1 will be ignored and ii) the value of lagged shocks (e.g.
in the case of news shocks) is specified with ``histval``.
*Example*
::
shocks;
var u; stderr 0.008;
var u;
periods 1;
values 1;
end;
*Mixing deterministic and stochastic shocks*
 
It is possible to mix deterministic and stochastic shocks to build
......@@ -2582,7 +2634,9 @@ blocks.
scales DOUBLE | (EXPRESSION) [[,] DOUBLE | (EXPRESSION) ]...;
 
NOTE: ``scales`` and ``values`` cannot be simultaneously set for the same shock in the same period, but it is
possible to set ``values`` for some periods and ``scales`` for other periods for the same shock.
possible to set ``values`` for some periods and ``scales`` for other periods for the same shock. There can be
only one ``scales`` and ``values`` directive each for a given shock, so all affected periods must be set in one
statement.
 
*Example*
 
......@@ -2591,22 +2645,16 @@ blocks.
heteroskedastic_shocks;
 
var e1;
periods 86:87 88:97;
scales 0.5 0;
periods 86:87, 89:97;
scales 0.5, 0;
var e1;
periods 88;
values 0.1;
 
var e2;
periods 86:87 88:97;
values 0.04 0.01;
end;
var e3;
periods 86:87;
values 0.04;
end;
var e3;
periods 88:97;
scales 0;
 
end;
 
......@@ -2742,13 +2790,12 @@ Finding the steady state with Dynare nonlinear solver
 
``1``
 
Use Dynare’s own nonlinear equation solver (a
Newton-like algorithm with line-search).
Use a Newton-like algorithm with line-search.
 
``2``
 
Splits the model into recursive blocks and solves each
block in turn using the same solver as value 1.
block in turn using the same solver as value ``1``.
 
``3``
 
......@@ -2786,12 +2833,15 @@ Finding the steady state with Dynare nonlinear solver
 
``9``
 
Trust-region algorithm on the entire model.
Trust-region algorithm with autoscaling (same as value ``4``,
but applied to the entire model, without splitting).
 
``10``
 
Levenberg-Marquardt mixed complementarity problem
(LMMCP) solver (*Kanzow and Petra (2004)*).
(LMMCP) solver (*Kanzow and Petra (2004)*). The complementarity
conditions are specified with an ``mcp`` equation tag, see
:opt:`lmmcp`.
 
``11``
 
......@@ -2807,24 +2857,27 @@ Finding the steady state with Dynare nonlinear solver
 
``12``
 
Specialized version of ``2`` for models where all the
equations have one endogenous variable on the left
hand side and where each equation determines a
different endogenous variable. Only expression allowed
on the left hand side is the natural logarithm of an
endogenous variable. Univariate blocks are solved by
evaluating the expression on the right hand
side.
Specialized version of ``2`` for models where all the equations
have one endogenous variable on the left hand side and where
each equation determines a different endogenous variable. Only
expressions allowed on the left hand side are the natural
logarithm of an endogenous variable, the first difference of an
endogenous variable (with the ``diff`` operator), or the first
difference of the logarithm of an endogenous variable.
Univariate blocks are solved by evaluating the expression on the
right hand side.
 
``14``
 
Specialized version of ``4`` for models where all the
equations have one endogenous variable on the left
hand side and where each equation determines a
different endogenous variable. Only expression allowed
on the left hand side is the natural logarithm of an
endogenous variable. Univariate blocks are solved by
evaluating the expression on the right hand side.
Specialized version of ``4`` for models where all the equations
have one endogenous variable on the left hand side and where
each equation determines a different endogenous variable. Only
expressions allowed on the left hand side are the natural
logarithm of an endogenous variable, the first difference of an
endogenous variable (with the ``diff`` operator), or the first
difference of the logarithm of an endogenous variable (see the
sketch below for an illustration of this restriction).
Univariate blocks are solved by evaluating the expression on the
right hand side.
 
|br| Default value is ``4``.
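As an illustration of the left hand side restriction mentioned for values ``12`` and ``14``, a model of the following kind (a sketch with hypothetical variable and parameter names) qualifies, and its steady state could then be computed with, e.g., ``steady(solve_algo=12);``::

    model;
    y = A*k(-1)^alpha;
    log(A) = rho*log(A(-1)) + e_A;
    diff(k) = invest - delta*k(-1);
    ...
    end;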
 
......@@ -2894,8 +2947,11 @@ Finding the steady state with Dynare nonlinear solver
 
.. option:: markowitz = DOUBLE
 
Value of the Markowitz criterion, used to select the
pivot. Only used when ``solve_algo = 5``. Default: 0.5.
Value of the Markowitz criterion (in the interval :math:`(0,\infty)`) used to select the
pivot with sparse Gaussian elimination (``solve_algo = 5``). This criterion
governs the tradeoff between selecting the pivot resulting in the most
accurate solution (low ``markowitz`` values) and the one that preserves
maximum sparsity (high ``markowitz`` values). Default: 0.5.
 
*Example*
 
......@@ -2909,6 +2965,13 @@ After computation, the steady state is available in the following variable:
ordered in the order of declaration used in the ``var`` command (which
is also the order used in ``M_.endo_names``).
 
.. matvar:: oo_.exo_steady_state
Contains the steady state of the exogenous variables, as declared by the
previous ``initval`` or ``endval`` block. Exogenous variables are
ordered in the order of declaration used in the ``varexo`` command (which
is also the order used in ``M_.exo_names``).
.. matcomm:: get_mean ('ENDOGENOUS_NAME' [, 'ENDOGENOUS_NAME']... );
 
Returns the steady state of the given endogenous variable(s), as it is
......@@ -3003,7 +3066,7 @@ using ``steady``. Again, there are two options for doing that:
generated by Dynare will be called ``+FILENAME/steadystate.m.``
 
* You can write the corresponding MATLAB function by hand. If your
MOD-file is called ``FILENAME.mod``, the steady state file must be
``.mod`` file is called ``FILENAME.mod``, the steady state file must be
called ``FILENAME_steadystate.m``. See
``NK_baseline_steadystate.m`` in the examples directory for an
example. This option gives a bit more flexibility (loops and
......@@ -3203,7 +3266,14 @@ Getting information about the model
.. command:: model_info ;
model_info (OPTIONS...);
 
|br| This command provides information about:
|br| This command provides information about the model.
When used outside the context of the ``block`` option of the ``model`` block,
it will provide a list of predetermined state variables, forward-looking variables,
and purely static variables.
When used in conjunction with the ``block`` option of the ``model`` block,
it displays:
 
* The normalization of the model: an endogenous variable is
attributed to each equation of the model;
......@@ -3211,66 +3281,64 @@ Getting information about the model
indicates its type, the equations number and endogenous
variables belonging to this block.
 
This command can only be used in conjunction with the ``block``
option of the ``model`` block.
 
There are five different types of blocks depending on the
simulation method used:
 
* EVALUATE FORWARD
* ``EVALUATE FORWARD``
 
In this case the block contains only equations where
In this case the block contains only equations where the
endogenous variable attributed to the equation appears
currently on the left hand side and where no forward looking
at current period on the left hand side and where no forward looking
endogenous variables appear. The block has the form:
:math:`y_{j,t} = f_j(y_t, y_{t-1}, \ldots, y_{t-k})`.
 
* EVALUATE BACKWARD
* ``EVALUATE BACKWARD``
 
The block contains only equations where endogenous variable
attributed to the equation appears currently on the left hand
The block contains only equations where the endogenous variable
attributed to the equation appears at current period on the left hand
side and where no backward looking endogenous variables
appear. The block has the form: :math:`y_{j,t} = f_j(y_t,
y_{t+1}, \ldots, y_{t+k})`.
 
* SOLVE BACKWARD x
* ``SOLVE BACKWARD x``
 
The block contains only equations where endogenous variable
attributed to the equation does not appear currently on the
The block contains only equations where the endogenous variable
attributed to the equation does not appear at current period on the
left hand side and where no forward looking endogenous
variables appear. The block has the form: :math:`g_j(y_{j,t},
y_t, y_{t-1}, \ldots, y_{t-k})=0`. x is equal to SIMPLE
if the block has only one equation. If several equation
appears in the block, x is equal to COMPLETE.
y_t, y_{t-1}, \ldots, y_{t-k})=0`. ``x`` is equal to ``SIMPLE``
if the block has only one equation. If several equations
appear in the block, ``x`` is equal to ``COMPLETE``.
 
* SOLVE FORWARD x
* ``SOLVE FORWARD x``
 
The block contains only equations where endogenous variable
attributed to the equation does not appear currently on the
The block contains only equations where the endogenous variable
attributed to the equation does not appear at current period on the
left hand side and where no backward looking endogenous
variables appear. The block has the form: :math:`g_j(y_{j,t},
y_t, y_{t+1}, \ldots, y_{t+k})=0`. x is equal to SIMPLE
if the block has only one equation. If several equation
appears in the block, x is equal to COMPLETE.
y_t, y_{t+1}, \ldots, y_{t+k})=0`. ``x`` is equal to ``SIMPLE``
if the block has only one equation. If several equations
appear in the block, ``x`` is equal to ``COMPLETE``.
 
* SOLVE TWO BOUNDARIES x
* ``SOLVE TWO BOUNDARIES x``
 
The block contains equations depending on both forward and
backward variables. The block looks like: :math:`g_j(y_{j,t},
y_t, y_{t-1}, \ldots, y_{t-k} ,y_t, y_{t+1}, \ldots,
y_{t+k})=0`. x is equal to SIMPLE if the block has only
one equation. If several equation appears in the block, x is
equal to COMPLETE.
y_{t+k})=0`. ``x`` is equal to ``SIMPLE`` if the block has only
one equation. If several equations appear in the block, ``x`` is
equal to ``COMPLETE``.
 
*Options*
 
.. option:: 'static'
.. option:: static
 
Prints out the block decomposition of the static
model. Without static option model_info displays the block
model. Without the ``static`` option, ``model_info`` displays the block
decomposition of the dynamic model.
 
.. option:: 'incidence'
.. option:: incidence
 
Displays the gross incidence matrix and the reordered incidence
matrix of the block decomposed model.
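For instance, for a model declared with the ``block`` option, the block decomposition of the dynamic model can be printed with a call along these lines (a sketch only)::

    model(block);
    ...
    end;

    model_info;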
......@@ -3327,6 +3395,34 @@ blocks in the model structure and use this information to aid the
solution process. These solution algorithms can provide a significant
speed-up on large models.
 
.. warning:: Be careful when employing auxiliary variables in the context
of perfect foresight computations. The same model may work for stochastic
simulations, but fail for perfect foresight simulations. The issue arises
when an equation suddenly only contains variables dated ``t+1`` (or ``t-1``
for that matter). In this case, the derivative in the last (first) period
with respect to all variables will be 0, rendering the stacked Jacobian singular.
*Example*
Consider the following specification of an Euler equation with log utility:
::
Lambda = beta*C(-1)/C;
Lambda(+1)*R(+1)= 1;
Clearly, the derivative of the second equation with respect to all endogenous
variables at time ``t`` is zero, causing ``perfect_foresight_solver`` to generally
fail. This is due to the use of the Lagrange multiplier ``Lambda`` as an auxiliary
variable. Instead, employing the identical
::
beta*C/C(+1)*R(+1)= 1;
will work.
.. command:: perfect_foresight_setup ;
perfect_foresight_setup (OPTIONS...);
 
......@@ -3403,8 +3499,9 @@ speed-up on large models.
 
``0``
 
Newton method to solve simultaneously all the equations for
every period, using sparse matrices (Default).
Use a Newton algorithm with a direct sparse LU solver at each
iteration, applied on the stacked system of all the equations at
every period (Default).
 
``1``
 
......@@ -3428,7 +3525,7 @@ speed-up on large models.
 
``4``
 
Use a Newton algorithm with a optimal path length at
Use a Newton algorithm with an optimal path length at
each iteration (requires ``bytecode`` and/or ``block``
option, see :ref:`model-decl`).
 
......@@ -3502,16 +3599,27 @@ speed-up on large models.
the endogenous variables (such as a ZLB on the nominal interest
rate or a model with irreversible investment). This option is
equivalent to ``stack_solve_algo=7`` **and**
``solve_algo=10``. Using the LMMCP solver requires a particular
model setup as the goal is to get rid of any min/max operators
and complementary slackness conditions that might introduce a
singularity into the Jacobian. This is done by attaching an
equation tag (see :ref:`model-decl`) with the ``mcp`` keyword
to affected equations. This tag states that the equation to
which the tag is attached has to hold unless the expression
within the tag is binding. For instance, a ZLB on the nominal
interest rate would be specified as follows in the model
block::
``solve_algo=10``. Using the LMMCP solver avoids the need for min/max
operators and explicit complementary slackness conditions in the model
as they will typically introduce a singularity into the Jacobian. This is
done by setting the problem up as a mixed complementarity problem (MCP) of the form:
.. math::

    LB = X &\Rightarrow F(X)>0\\
    LB \leq X \leq UB &\Rightarrow F(X)=0\\
    X = UB &\Rightarrow F(X)<0.
where :math:`X` denotes the vector of endogenous variables, :math:`F(X)` the equations
of the model, :math:`LB` denotes a lower bound, and :math:`UB` an upper bound. Such a setup
is implemented by attaching an equation tag (see :ref:`model-decl`)
with the ``mcp`` keyword to the affected equations. This tag states that
the equation to which the tag is attached has to hold unless the inequality
constraint within the tag is binding.
For instance, a ZLB on the nominal interest rate would be specified
as follows in the model block::
 
model;
...
......@@ -3529,20 +3637,27 @@ speed-up on large models.
slackness condition). By restricting the value of ``r`` coming
out of this equation, the ``mcp`` tag also avoids using
``max(r,-1.94478)`` for other occurrences of ``r`` in the rest
of the model. It is important to keep in mind that, because the
of the model. Two things are important to keep in mind. First, because the
``mcp`` tag effectively replaces a complementary slackness
condition, it cannot be simply attached to any
equation. Rather, it must be attached to the correct affected
equation as otherwise the solver will solve a different problem
than originally intended. Also, since the problem to be solved
is nonlinear, the sign of the residuals of the dynamic equation
matters. In the previous example, for the nominal interest rate
rule, if the LHS and RHS are reversed the sign of the residuals
(the difference between the LHS and the RHS) will change and it
may happen that solver fails to identify the solution path. More
generally, convergence of the nonlinear solver is not guaranteed
when using mathematically equivalent representations of the same
equation.
than originally intended. Second, the sign of the residual of the dynamic
equation must conform to the MCP setup outlined above. In case of the ZLB,
we are dealing with a lower bound. Consequently, the dynamic equation
needs to return a positive residual. Dynare by default computes the residual
of an equation ``LHS=RHS`` as ``residual=LHS-RHS``, while an implicit equation
``LHS`` is interpreted as ``LHS=0``. For the above equation this implies
``residual= r - (rho*r(-1) + (1-rho)*(gpi*Infl+gy*YGap) + e);``
which is correct, since it will be positive if the implied interest rate
``rho*r(-1) + (1-rho)*(gpi*Infl+gy*YGap) + e`` is
below ``r=-1.94478``. In contrast, specifying the equation as
``rho*r(-1) + (1-rho)*(gpi*Infl+gy*YGap) + e = r;``
would be wrong.
 
Note that in the current implementation, the content of the
``mcp`` equation tag is not parsed by the preprocessor. The
......@@ -3567,7 +3682,10 @@ speed-up on large models.
.. option:: linear_approximation
 
Solves the linearized version of the perfect foresight
model. The model must be stationary. Only available with option
model. The model must be stationary and a steady state
needs to be provided. Linearization is conducted about the
last defined steady state, which can derive from ``initval``,
``endval`` or a subsequent ``steady``. Only available with option
``stack_solve_algo==0`` or ``stack_solve_algo==7``.
 
*Output*
......@@ -3579,8 +3697,7 @@ speed-up on large models.
.. command:: simul ;
simul (OPTIONS...);
 
|br| Short-form command for triggering the computation of a
deterministic simulation of the model. It is strictly equivalent
|br| This command is deprecated. It is strictly equivalent
to a call to ``perfect_foresight_setup`` followed by a call to
``perfect_foresight_solver``.
 
......@@ -3597,8 +3714,10 @@ speed-up on large models.
periods option or by ``extended_path``). The variables are
arranged row by row, in order of declaration (as in
``M_.endo_names``). Note that this variable also contains initial
and terminal conditions, so it has more columns than the value of
``periods`` option.
and terminal conditions, so it has more columns than the value of the
``periods`` option: the first simulation period is in
column ``1+M_.maximum_lag``, and the total number of columns is
``M_.maximum_lag+periods+M_.maximum_lead``.
 
.. matvar:: oo_.exo_simul
 
......@@ -3608,7 +3727,25 @@ speed-up on large models.
in columns, in order of declaration (as in
``M_.exo_names``). Periods are in rows. Note that this convention
regarding columns and rows is the opposite of the convention for
``oo_.endo_simul``!
``oo_.endo_simul``! Note that this variable also contains initial
and terminal conditions, so it has more rows than the value of the
``periods`` option: the first simulation period is in row
``1+M_.maximum_lag``, and the total number of rows is
``M_.maximum_lag+periods+M_.maximum_lead``.
.. matvar:: M_.maximum_lag
|br| The maximum number of lags in the model. Note that this value is
computed on the model *after* the transformations related to auxiliary
variables, so in practice it is either 1 or 0 (the latter value corresponds
to a purely forward or static model).
.. matvar:: M_.maximum_lead
|br| The maximum number of leads in the model. Note that this value is
computed on the model *after* the transformations related to auxiliary
variables, so in practice it is either 1 or 0 (the latter value corresponds
to a purely backward or static model).
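For instance, the simulated path of a hypothetical endogenous variable ``c`` and of a hypothetical exogenous variable ``e``, excluding initial and terminal conditions, can be extracted in MATLAB along the following lines::

    ic = find(strcmp('c', M_.endo_names));
    c_path = oo_.endo_simul(ic, M_.maximum_lag+1:end-M_.maximum_lead); % variables in rows, periods in columns
    ie = find(strcmp('e', M_.exo_names));
    e_path = oo_.exo_simul(M_.maximum_lag+1:end-M_.maximum_lead, ie);  % periods in rows, variables in columns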
 
 
.. _stoch-sol:
......@@ -3621,8 +3758,8 @@ corresponding to a random draw of the shocks.
 
The main algorithm for solving stochastic models relies on a Taylor
approximation, up to third order, of the expectation functions (see
*Judd (1996)*, *Collard and Juillard (2001a)*, *Collard and Juillard
(2001b)*, and *Schmitt-Grohé and Uríbe (2004)*). The details of the
*Judd (1996)*, *Collard and Juillard (2001a, 2001b)*, and
*Schmitt-Grohé and Uríbe (2004)*). The details of the
Dynare implementation of the first order solution are given in
*Villemot (2011)*. Such a solution is computed using the
``stoch_simul`` command.
......@@ -3634,6 +3771,8 @@ strong nonlinearities or binding constraints. Such a solution is
computed using the ``extended_path`` command.
 
 
.. _stoch-sol-simul:
Computing the stochastic solution
---------------------------------
 
......@@ -3891,7 +4030,7 @@ Computing the stochastic solution
option is greater than 1, the additional series will not be
used for computing the empirical moments but will simply be
saved in binary form to the file ``FILENAME_simul`` in the
``FILENAME/Output``-folder. Default:
``FILENAME/Output`` folder. Default:
``1``.
 
.. option:: solve_algo = INTEGER
......@@ -3929,8 +4068,8 @@ Computing the stochastic solution
``oo_.conditional_variance_decomposition_ME`` (see
:mvar:`oo_.conditional_variance_decomposition_ME`). The
variance decomposition is only conducted, if theoretical
moments are requested, *i.e.* using the ``periods=0``-option.
Only available at ``order<3`` and without ``pruning''. In case of ``order=2``,
moments are requested, *i.e.* using the ``periods=0`` option.
Only available at ``order<3`` and without ``pruning``. In case of ``order=2``,
Dynare provides a second-order accurate
approximation to the true second moments based on the linear
terms of the second-order solution (see *Kim, Kim,
......@@ -3974,7 +4113,7 @@ Computing the stochastic solution
Uses the default solver for Sylvester equations
(``gensylv``) based on Ondra Kamenik’s algorithm (see
`here
<https://www.dynare.org/assets/dynare++/sylvester.pdf>`__
<https://www.dynare.org/assets/team-presentations/sylvester.pdf>`__
for more information).
 
``fixed_point``
......@@ -4067,7 +4206,7 @@ Computing the stochastic solution
 
Triggers the computation and display of the theoretical
spectral density of the (filtered) model variables. Results are
stored in ´´oo_.SpectralDensity´´, defined below. Default: do
stored in ``oo_.SpectralDensity``, defined below. Default: do
not request spectral density estimates.
 
.. option:: hp_ngrid = INTEGER
......@@ -4133,7 +4272,8 @@ Computing the stochastic solution
variance-covariance of the endogenous variables. Contains
theoretical variance if the ``periods`` option is not present and simulated variance
otherwise. Only available for ``order<4``. At ``order=2`` it will be
a second-order accurate approximation. At ``order=3``, theoretical moments
a second-order accurate approximation (i.e. ignoring terms of order 3 and 4 that would
arise when using the full second-order policy function). At ``order=3``, theoretical moments
are only available with ``pruning``. The variables are arranged in declaration order.
 
.. matvar:: oo_.var_list
......@@ -4370,7 +4510,7 @@ which is described below.
(2004)*), which allows to consider inequality constraints on
the endogenous variables (such as a ZLB on the nominal interest
rate or a model with irreversible investment). For specifying the
necessary ``mcp``-tag, see :opt:`lmmcp`.
necessary ``mcp`` tag, see :opt:`lmmcp`.
 
 
Typology and ordering of variables
......@@ -4619,16 +4759,432 @@ multidimensional indices of state variables, in such a way that symmetric
elements are never repeated (for more details, see the description of
``oo_.dr.g_3`` in the third-order case).
 
Occasionally binding constraints (OCCBIN)
=========================================
Dynare allows simulating models with up to two occasionally-binding constraints by
relying on a piecewise linear solution as in *Guerrieri and Iacoviello (2015)*.
It also allows estimating such models employing either the inversion filter of
*Cuba-Borda, Guerrieri, Iacoviello, and Zhong (2019)* or the piecewise Kalman filter of
*Giovannini, Pfeiffer, and Ratto (2021)*. Triggering computations involving
occasionally-binding constraints requires:
#. defining and naming the occasionally-binding constraints using an ``occbin_constraints`` block
#. specifying the model equations for the respective regimes in the ``model`` block using appropriate equation tags.
#. potentially specifying a sequence of surprise shocks using a ``shocks(surprise)`` block
#. setting up Occbin simulations or estimation with ``occbin_setup``
#. triggering a simulation with ``occbin_solver`` or running ``estimation`` or ``calib_smoother``.
All of these elements are discussed in the following.
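As an illustration, the following sketch pulls these steps together for a hypothetical
model with a single effective lower bound constraint. The variable and shock names
(``inom``, ``pie``, ``eps_d``), the parameters (``iss``, ``phi_pi``), and the chosen shock
values are purely illustrative, and all other model equations are omitted:

::

    occbin_constraints;
    name 'ELB'; bind inom <= iss-1e-8; relax inom > iss+1e-8;
    end;

    model;
    // ... other model equations ...
    [name='policy_rule',relax='ELB']
    inom = iss + phi_pi*pie;
    [name='policy_rule',bind='ELB']
    inom = iss;
    end;

    shocks(surprise);
    var eps_d;
    periods 1:3;
    values -0.02;
    end;

    occbin_setup;
    occbin_solver(simul_periods=60,simul_check_ahead_periods=200);
    occbin_graph inom pie;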
.. block:: occbin_constraints ;
|br| The ``occbin_constraints`` block specifies the occasionally-binding constraints. It contains
one or two of the following lines:
name 'STRING'; bind EXPRESSION; [relax EXPRESSION;] [error_bind EXPRESSION;] [error_relax EXPRESSION;]
``STRING`` is the name of the constraint, which is used to reference it in ``relax`` / ``bind``
equation tags to identify the respective regime (see below). The ``bind`` expression is mandatory and defines
a logical condition that is evaluated in the baseline/steady state regime to check whether the specified
constraint becomes binding. In contrast, the ``relax`` expression is optional and specifies a
logical condition that is evaluated in the binding regime to check whether the regime returns
to the baseline/steady state regime. If not specified, Dynare will simply check in the binding
regime whether the ``bind`` expression evaluates to false. However, there are cases
where the ``bind`` expression cannot be evaluated in the binding regime(s), because
the variables involved are constant by definition so that e.g. the value of the Lagrange
multiplier on the complementary slackness condition needs to be checked. In these cases,
it is necessary to provide an explicit condition that can be evaluated in the binding
regime that allows to check whether it should be left.
Note that the baseline regime denotes the steady state of the model where the economy will
settle in the long-run without shocks. For that matter, it may be one where e.g. a borrowing
constraint is binding. In that type of setup, the ``bind`` condition is used to specify the
condition when this borrowing constraint becomes non-binding so that the alternative regime
is entered.
Three things are important to keep in mind when specifying the expressions.
First, feasible expressions may only contain contemporaneous endogenous variables.
If you want to include leads/lags or exogenous variables, you need to define
an auxiliary variable. Second, Dynare will at the current stage not linearly
approximate the entered expressions. Because Occbin will work with a linearized
model, consistency will often require the user to enter a linearized constraint.
Otherwise, the condition employed for checking constraint violations may differ
from the one employed within model simulations based on the piecewise-linear
model solution. Third, in contrast to the original Occbin replication codes, the
variables used in expressions are not automatically demeaned, i.e. they refer to
the levels, not deviations from the steady state. To access the steady state
level of a variable, the ``STEADY_STATE()`` operator can be used.
Finally, it's worth keeping in mind that for each simulation period, Occbin will check
the respective conditions for whether the current regime should be left. Small numerical
differences from the cutoff point for a regime can sometimes lead to oscillations between
regimes and cause a spurious periodic solution. Such cases may be prevented by introducing
a small buffer between the two regimes, e.g.
::
occbin_constraints;
name 'ELB'; bind inom <= iss-1e-8; relax inom > iss+1e-8;
end;
The ``error_bind`` and ``error_relax`` options are optional and allow specifying
numerical criteria for the size of the respective constraint violations employed
in numerical routines. By default, Dynare will simply use the absolute value of
the ``bind`` and ``relax`` inequalities. But occasionally, user-specified
expressions perform better.
*Example*
::
occbin_constraints;
name 'IRR'; bind log_Invest-log(steady_state(Invest))<log(phi); relax Lambda<0;
name 'INEG'; bind log_Invest-log(steady_state(Invest))<0;
end;
``IRR`` is a constraint for irreversible investment that becomes binding if investment drops below
``phi`` times its steady state in the non-binding regime. The constraint will be relaxed whenever
the associated Lagrange multiplier ``Lambda`` in the binding regime becomes negative. Note that the
constraint here takes a linear form to be consistent with the piecewise linear model solution.
The specification of the model equations belonging to the respective regimes is done in the ``model`` block,
with equation tags indicating to which regime a particular equation belongs. All equations that differ across
regimes must have a ``name`` tag attached to them that allows uniquely identifying different versions of the
same equation. The name of the constraints specified is then used in conjunction with a ``bind`` or ``relax``
tag to indicate to which regime a particular equation belongs. In case of more than one occasionally-binding
constraint, if an equation belongs to several regimes (e.g. both constraints binding), the
constraint name tags must be separated by a comma. If only one name tag is present,
the respective equation is assumed to hold for both states of the other constraint.
*Example*
::
[name='investment',bind='IRR,INEG']
(log_Invest - log(phi*steady_state(Invest))) = 0;
[name='investment',relax='IRR']
Lambda=0;
[name='investment',bind='IRR',relax='INEG']
(log_Invest - log(phi*steady_state(Invest))) = 0;
The three entered equations for the investment condition define the model
equation for all four possible combinations of the two constraints. The
first equation defines the model equation in the regime where both the
IRR and INEG constraint are binding. The second equation defines the
model equation for the regimes where the IRR constraint is non-binding,
regardless of whether the INEG constraint is binding or not. Finally,
the last equation defines the model equation for the final regime where the
IRR constraint is binding, but the INEG one is not.
.. block:: shocks(surprise) ;
shocks(surprise,overwrite);
|br| The ``shocks(surprise)`` block allows specifying a sequence of temporary changes in
the value of exogenous variables that in each period come as a surprise to agents, i.e.
are not anticipated. Note that to actually use the specified shocks in subsequent commands
like ``occbin_solver``, the block needs to be followed by a call to ``occbin_setup``.
The block mirrors the perfect foresight syntax in that it should contain one or more
occurrences of the following group of three lines::
var VARIABLE_NAME;
periods INTEGER[:INTEGER] [[,] INTEGER[:INTEGER]]...;
values DOUBLE | (EXPRESSION) [[,] DOUBLE | (EXPRESSION) ]...;
*Example* (with vector values and overwrite option)
::
shockssequence = randn(100,1)*0.02;
shocks(surprise,overwrite);
var epsilon;
periods 1:100;
values (shockssequence);
end;
.. command:: occbin_setup ;
occbin_setup (OPTIONS...);
|br| Prepares a simulation with occasionally binding constraints. This command
will also translate the contents of a ``shocks(surprise)`` block for use
in subsequent commands.
In order to conduct ``estimation`` with occasionally binding constraints, it needs to be
prefaced by a call to ``occbin_setup`` to trigger the use of either the inversion filter
or the piecewise Kalman filter (default). An issue that can arise in the context of
estimation is a structural shock dropping out of the model in a particular regime.
For example, at the zero lower bound on interest rates, the monetary policy shock
in the Taylor rule will not appear anymore. This may create a problem
of stochastic singularity if there are then more observables than shocks. To
avoid this issue, the data points for the zero interest rate should be set
to NaN and the standard deviation of the associated shock set to 0 for the
corresponding periods using the ``heteroskedastic_shocks`` block.
Note that models with unit roots will require the user to specify the ``diffuse_filter`` option as
otherwise Blanchard-Kahn errors will be triggered. For the piecewise Kalman filter, the
initialization steps in the diffuse filter will always rely on the model solved for the baseline
regime, without checking whether this is the actual regime in the first period(s).
*Example*
::
occbin_setup(likelihood_inversion_filter,smoother_inversion_filter);
estimation(smoother,heteroskedastic_filter,...);
The above piece of code sets up an estimation employing the inversion filter for both the likelihood
evaluation and the smoother, while also accounting for ``heteroskedastic_shocks`` using the
``heteroskedastic_filter`` option.
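As a sketch of the shock handling at the zero lower bound described above, the standard
deviation of the dropped-out monetary policy shock could be set to zero for the affected
periods via ``heteroskedastic_shocks`` (the shock name ``eps_m`` and the period range are
purely hypothetical; the corresponding interest rate observations would be set to NaN in
the data file):

::

    heteroskedastic_shocks;
    var eps_m;
    periods 100:110;
    values 0;
    end;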
Be aware that Occbin has largely command-specific options, i.e. there are separate
options to control the behavior of Occbin when called by the smoother or when
computing the likelihood. These latter commands will not inherit the options
potentially previously set for simulations.
*Options*
.. option:: simul_periods = INTEGER
Number of periods of the simulation. Default: 100.
.. option:: simul_maxit = INTEGER
Maximum number of iterations when trying to find the regimes of the piecewise solution.
Default: 30.
.. option:: simul_check_ahead_periods = INTEGER
Number of periods for which to check ahead for return to the baseline regime.
This number should be chosen large enough, because Occbin requires the simulation
to return to the baseline regime at the end of time. Default: 200.
.. option:: simul_curb_retrench
Instead of basing the initial regime guess for the current iteration on the last iteration, update
the guess only one period at a time. This will slow down the iterations, but may lead to
more robust convergence behavior. Default: not enabled.
.. option:: simul_periodic_solution
Accept a periodic solution where the solution alternates between two sets of results
across iterations, i.e. is not found to be unique. This is sometimes caused by spurious numerical errors
that lead to oscillations between regimes and may be prevented by allowing for a small buffer in regime
transitions. Default: not enabled.
.. option:: simul_debug
Provide additional debugging information during solving. Default: not enabled.
.. option:: smoother_periods = INTEGER
Number of periods employed during the simulation when called by the smoother
(equivalent of ``simul_periods``). Default: 100.
.. option:: smoother_maxit = INTEGER
Maximum number of iterations employed during the simulation when called by the smoother
(equivalent of ``simul_maxit``). Default: 30.
.. option:: smoother_check_ahead_periods = INTEGER
Number of periods for which to check ahead for return to the baseline regime during the
simulation when called by the smoother (equivalent of ``simul_check_ahead_periods``). Default: 200.
.. option:: smoother_curb_retrench
Have the smoother invoke the ``simul_curb_retrench`` option during simulations.
Default: not enabled.
.. option:: smoother_periodic_solution
Accept periodic solution where solution alternates between two sets of results (equivalent of ``simul_periodic_solution``).
Default: not enabled.
.. option:: likelihood_periods = INTEGER
Number of periods employed during the simulation when computing the likelihood
(equivalent of ``simul_periods``). Default: 100.
.. option:: likelihood_maxit = INTEGER
Maximum number of iterations employed during the simulation when computing the likelihood
(equivalent of ``simul_maxit``). Default: 30.
.. option:: likelihood_check_ahead_periods = INTEGER
Number of periods for which to check ahead for return to the baseline regime during the
simulation when computing the likelihood (equivalent of ``simul_check_ahead_periods``). Default: 200.
.. option:: likelihood_curb_retrench
Have the likelihood computation invoke the ``simul_curb_retrench`` option during simulations.
Default: not enabled.
.. option:: likelihood_periodic_solution
Accept periodic solution where solution alternates between two sets of results (equivalent of ``simul_periodic_solution``).
Default: not enabled.
.. option:: likelihood_inversion_filter
Employ the inversion filter of *Cuba-Borda, Guerrieri, Iacoviello, and Zhong (2019)* when estimating
the model. Default: not enabled.
.. option:: likelihood_piecewise_kalman_filter
Employ the piecewise Kalman filter of *Giovannini, Pfeiffer, and Ratto (2021)* when estimating
the model. Note that this filter is incompatible with univariate Kalman filters, i.e. ``kalman_algo=2,4``.
Default: enabled.
.. option:: likelihood_max_kalman_iterations
Maximum number of iterations of the outer loop for the piecewise Kalman filter. Default: 10.
.. option:: smoother_inversion_filter
Employ the inversion filter of *Cuba-Borda, Guerrieri, Iacoviello, and Zhong (2019)* when running the
smoother. The underlying assumption is that the system starts at the steady state. In this case, the
inversion filter will provide the required smoother output. Default: not enabled.
.. option:: smoother_piecewise_kalman_filter
Employ the piecewise Kalman filter of *Giovannini, Pfeiffer, and Ratto (2021)* when running the
smoother. Default: enabled.
.. option:: filter_use_relaxation
Triggers relaxation within the guess and verify algorithm used in the update step of the piecewise
Kalman filter. When the old and new guess for the regime differ too much, a new guess closer to the previous one is used.
In case of multiple solutions, tends to provide an occasionally binding regime with a shorter duration (typically preferable).
Specifying this option may slow down convergence. Default: not enabled.
*Output*
The paths for the exogenous variables are stored into
``options_.occbin.simul.SHOCKS``.
.. command:: occbin_solver ;
occbin_solver (OPTIONS...);
|br| Computes a simulation with occasionally-binding constraints based on
a piecewise-linear solution.
Note that ``occbin_setup`` must be called before this command in order for
the simulation to take into account previous ``shocks(surprise)`` blocks.
*Options*
.. option:: simul_periods = INTEGER
See :opt:`simul_periods <simul_periods = INTEGER>`.
.. option:: simul_maxit = INTEGER
See :opt:`simul_maxit <simul_maxit = INTEGER>`.
.. option:: simul_check_ahead_periods = INTEGER
See :opt:`simul_check_ahead_periods <simul_check_ahead_periods = INTEGER>`.
.. option:: simul_curb_retrench
See :opt:`simul_curb_retrench`.
.. option:: simul_debug
See :opt:`simul_debug`.
*Output*
The command outputs various objects into ``oo_.occbin``.
.. matvar:: oo_.occbin.simul.piecewise
|br| Matrix storing the simulations based on the piecewise-linear solution.
The variables are arranged by column, in order of declaration (as in
``M_.endo_names``), while the rows correspond to the ``simul_periods``.
.. matvar:: oo_.occbin.simul.linear
|br| Matrix storing the simulations based on the linear solution, i.e. ignoring
the occasionally binding constraint(s). The variables are arranged column by column,
in order of declaration (as in ``M_.endo_names``), while the rows correspond to
the ``simul_periods``.
.. matvar:: oo_.occbin.simul.shocks_sequence
|br| Matrix storing the shock sequence employed during the simulation. The shocks are arranged
column by column, with their order in ``M_.exo_names`` stored in ``oo_.occbin.exo_pos``. The
rows correspond to the number of shock periods specified in a ``shocks(surprise)`` block, which
may be smaller than ``simul_periods``.
.. matvar:: oo_.occbin.simul.regime_history
|br| Structure storing information on the regime history, conditional on the shock that
happened in the respective period (stored along the rows). ``type`` is equal to either ``smoother``
or ``simul``, depending on whether the output comes from a run of simulations or the smoother.
The subfield ``regime`` contains
a vector storing the regime state, while the subfield ``regimestart`` indicates the
expected start of the respective regime state. For example, if row 40 contains ``[1,0]`` for
``regime2`` and ``[1,6]`` for ``regimestart2``, it indicates that - after the shock in period 40
has occurred - the second constraint became binding (1) and is expected to revert to non-binding (0) after
six periods including the current one, i.e. period 45.
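Assuming the layout just described, the regime information associated with e.g. the shock
in period 40 could be inspected along the following lines (an illustrative sketch, not an
additional output field):

::

    oo_.occbin.simul.regime_history(40).regime2       % e.g. [1 0]: constraint 2 binding, then non-binding
    oo_.occbin.simul.regime_history(40).regimestart2  % e.g. [1 6]: relative periods at which these states start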
.. matvar:: oo_.occbin.simul.ys
|br| Vector of steady state values.
.. command:: occbin_graph [VARIABLE_NAME...];
occbin_graph (OPTIONS...) [VARIABLE_NAME...];
|br| Plots a graph comparing the simulation results of the piecewise-linear solution
with the occasionally binding constraints to the linear solution ignoring the constraint.
*Options*
.. option:: noconstant
Omit the steady state in the graphs.
.. command:: occbin_write_regimes ;
occbin_write_regimes (OPTIONS...);
|br| Write the information on the regime history stored in ``oo_.occbin.simul.regime_history``
or ``oo_.occbin.smoother.regime_history`` into an Excel file stored in the ``FILENAME/Output`` folder.
*Options*
.. option:: periods = INTEGER
Number of periods for which to write the expected regime durations. Default: write all
available periods.
.. option:: filename = FILENAME
Name of the Excel file to write. Default: ``FILENAME_occbin_regimes``.
.. option:: simul
Selects the regime history from the last run of simulations. Default: enabled.
.. option:: smoother
Selects the regime history from the last run of the smoother. Default: use ``simul``.
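For instance, to export the regime history produced by the most recent run of the smoother,
one could use:

::

    occbin_write_regimes(smoother);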
 
.. _estim:
 
Estimation based on likelihood
==============================
 
Provided that you have observations on some endogenous variables, it
is possible to use Dynare to estimate some or all parameters. Both
maximum likelihood (as in *Ireland (2004)*) and Bayesian techniques
(as in *Fernández-Villaverde and Rubio-Ramírez (2004)*,
*Rabanal and Rubio-Ramirez (2003)*, *Schorfheide (2000)* or
*Smets and Wouters (2003)*) are available. Using Bayesian methods, it
is possible to estimate DSGE models, VAR models, or a combination of
the two techniques called DSGE-VAR.
The estimation using a first order approximation can benefit from the
block decomposition of the model (see :opt:`block`).
 
.. _varobs:
.. command:: varobs VARIABLE_NAME...;
 
|br| This command lists the name of observed endogenous variables
for the estimation procedure. These variables must be available in
the data file (see :ref:`estimation <estim-comm>`).
 
Alternatively, this command is also used in conjunction with the
``partial_information`` option of ``stoch_simul``, for declaring
P (mu/eta);
end;
 
.. block:: estimated_params ;
 
|br| This block lists all parameters to be estimated and specifies
 
Each line corresponds to an estimated parameter.
 
In a maximum likelihood or a method of moments estimation, each line follows this syntax::
 
stderr VARIABLE_NAME | corr VARIABLE_NAME_1, VARIABLE_NAME_2 | PARAMETER_NAME
, INITIAL_VALUE [, LOWER_BOUND, UPPER_BOUND ];
 
In a Bayesian MCMC or a penalized method of moments estimation, each line follows this syntax::
 
stderr VARIABLE_NAME | corr VARIABLE_NAME_1, VARIABLE_NAME_2 | PARAMETER_NAME | DSGE_PRIOR_WEIGHT
[, INITIAL_VALUE [, LOWER_BOUND, UPPER_BOUND]], PRIOR_SHAPE,
associated with endogenous observed variables
VARIABLE_NAME1 and VARIABLE_NAME2, is to be
estimated. Note that correlations set by previous
``shocks`` blocks or estimation commands are kept at their
value set prior to estimation if they are not estimated
again subsequently. Thus, the treatment is the same as in
the case of deep parameters set during model calibration
 
Sets the same generalized beta distribution as before, but now
truncates this distribution to ``[-0.5,1]`` through the use of
LOWER_BOUND and UPPER_BOUND.
 
*Parameter transformation*
 
* Posterior mean and highest posterior density interval (shortest
credible set) from posterior simulation
* Convergence diagnostic table when only one MCM chain is used or
Metropolis-Hastings convergence graphs documented in *Pfeifer
(2014)* in case of multiple MCM chains
* Table with numerical inefficiency factors of the MCMC
* Graphs with prior, posterior, and mode
 
so that the sequence of proposals will be different across different runs.
 
Finally, Dynare does not always properly distinguish between maximum
likelihood and Bayesian estimation in its field names. While there is
an important conceptual distinction between frequentist confidence intervals
and Bayesian highest posterior density intervals (HPDI) as well as between
posterior density and likelihood, Dynare sometimes uses the Bayesian
terms as a stand-in in its display of maximum likelihood results. An
example is the storage of the output of the ``forecast`` option of
``estimation`` with ML, which will use ``HPDinf/HPDsup`` to denote
the confidence interval.
*Algorithms*
 
The Monte Carlo Markov Chain (MCMC) diagnostics are generated by
 
.. option:: conf_sig = DOUBLE
 
Level of significance of the confidence interval used for classical forecasting after
estimation. Default: 0.9.
 
.. option:: mh_conf_sig = DOUBLE
 
achieve an acceptance rate of
:ref:`AcceptanceRateTarget<art>`. The resulting scale parameter
will be saved into a file named
``MODEL_FILENAME_mh_scale.mat`` in the ``FILENAME/Output`` folder.
This file can be loaded in
subsequent runs via the ``posterior_sampler_options`` option
:ref:`scale_file <scale-file>`. Both ``mode_compute=6`` and
crashed chain with the respective last random number generator
state is currently not supported.
 
.. option:: mh_posterior_mode_estimation
 
Skip optimizer-based mode-finding and instead compute the mode based
        on a run of an MCMC. The MCMC will start at the prior mode and use the prior
variances to compute the inverse Hessian.
 
.. option:: mode_file = FILENAME
 
Name of the file containing previous value for the mode. When
computing the mode, Dynare stores the mode (``xparam1``) and
the hessian (``hh``, only if ``cova_compute=1``) in a file
called ``MODEL_FILENAME_mode.mat`` in the ``FILENAME/Output`` folder.
After a successful run of
the estimation command, the ``mode_file`` will be disabled to
prevent other function calls from implicitly using an updated
mode file. Thus, if the ``.mod`` file contains subsequent
``estimation`` commands, the ``mode_file`` option, if desired,
needs to be specified again.
 
routine (available under MATLAB if the Optimization Toolbox is
installed; available under Octave if the `optim
<https://octave.sourceforge.io/optim/>`__ package from
Octave-Forge is installed). Only supported for ``method_of_moments``.
 
``101``
 
Ratto, and Rossi (2015)*. Note that ``'slice'`` is
incompatible with ``prior_trunc=0``.
 
Whereas one Metropolis-Hastings iteration requires one
evaluation of the posterior, one slice iteration requires :math:`neval`
evaluations, where as a rule of thumb :math:`neval=7\times npar` with
:math:`npar` denoting the number of estimated parameters. Spending
the same computational budget of :math:`N` posterior evaluations in the
slice sampler then implies setting ``mh_replic=N/neval``.
Note that the slice sampler will typically return less autocorrelated Monte Carlo Markov
Chain draws than the MH-algorithm. Its relative (in)efficiency can be investigated via
the reported inefficiency factors.
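For example, with :math:`npar=20` estimated parameters, the rule of thumb gives
:math:`neval\approx 140` posterior evaluations per slice iteration, so a computational
budget of :math:`N=140000` evaluations would correspond to roughly ``mh_replic=1000``.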
.. option:: posterior_sampler_options = (NAME, VALUE, ...)
 
A list of NAME and VALUE pairs. Can be used to set options for
Koopman (2012)* and *Koopman and Durbin (2003)* for the
multivariate and *Koopman and Durbin (2000)* for the univariate
filter) to estimate models with non-stationary observed
variables. This option will also reset the ``qz_criterium`` to
count unit root variables towards the stable variables. Trying to estimate
a model with unit roots will otherwise result in a Blanchard-Kahn error.
 
When ``diffuse_filter`` is used the ``lik_init`` option of
``estimation`` has no effect.
 
Order of approximation around the deterministic steady
state. When greater than 1, the likelihood is evaluated with a
particle or nonlinear filter (see *Fernández-Villaverde and
Rubio-Ramírez (2005)*). Default is ``1``, i.e. the likelihood
of the linearized model is evaluated using a standard Kalman
filter.
 
``nonlinear_filter_initialization=1`` (initialization based on
the first order approximation of the model).
 
.. option:: particle_filter_options = (NAME, VALUE, ...)
A list of NAME and VALUE pairs. Can be used to set some fine-grained
options for the particle filter routines. The set of available options
depends on the selected filter routine.
More information on particle filter options is available at
`<https://git.dynare.org/Dynare/dynare/-/wikis/Particle-filters>`__.
Available options are:
``'pruning'``
Enable pruning for particle filter-related simulations. Default: ``false``.
``'liu_west_delta'``
Set the value for delta for the Liu/West online filter. Default: ``0.99``.
``'unscented_alpha'``
Set the value for alpha for unscented transforms. Default: ``1``.
``'unscented_beta'``
Set the value for beta for unscented transforms. Default: ``2``.
``'unscented_kappa'``
Set the value for kappa for unscented transforms. Default: ``1``.
``'initial_state_prior_std'``
Value of the diagonal elements for the initial covariance of the state
variables when employing ``nonlinear_filter_initialization=3``. Default: ``1``.
``'mixture_state_variables'``
Number of mixture components in the Gaussian-mixture filter (gmf)
for the state variables. Default: ``5``.
``'mixture_structural_shocks'``
Number of mixture components in the Gaussian-mixture filter (gmf)
for the structural shocks. Default: ``1``.
``'mixture_measurement_shocks'``
Number of mixture components in the Gaussian-mixture filter (gmf)
for the measurement errors. Default: ``1``.
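A hypothetical call setting some of these tuning parameters could look as follows:

::

    estimation(order=2, particle_filter_options=('unscented_alpha',1.2,'unscented_kappa',2), ...);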
*Note*
 
If no ``mh_jscale`` parameter is used for a parameter in
``VarianceDecompositionME``
 
Same as `VarianceDecomposition`_, but contains
the decomposition of the measured as opposed to the
actual variable. The joint contribution of the
measurement error will be saved in a field named
``ME``.
Relative numerical efficiency (RNE) under the assumption
of iid draws.
 
``nse_taper_x``
 
Numerical standard error (NSE) when using an x% taper.
 
``rne_taper_x``
 
Relative numerical efficiency (RNE) when using an x% taper.
 
Dynare also has the ability to estimate Bayesian VARs:
See ``bvar-a-la-sims.pdf``, which comes with Dynare distribution,
for more information on this command.
 
.. command:: bvar_irf ;
 
|br| Computes the impulse responses of an estimated BVAR model, using
Minnesota priors.
 
See ``bvar-a-la-sims.pdf``, which comes with Dynare distribution,
for more information on this command.
 
Estimation based on moments
===========================
 
Provided that you have observations on some endogenous variables, it
is possible to use Dynare to estimate some or all parameters using a
method of moments approach. Both the Simulated Method of Moments (SMM)
and the Generalized Method of Moments (GMM) are available. The general
idea is to minimize the distance between unconditional model moments
and corresponding data moments (so called orthogonality or moment
conditions). For SMM, Dynare computes model moments via stochastic
simulations based on the perturbation approximation up to any order,
whereas for GMM model moments are computed in closed-form based on the
pruned state-space representation of the perturbation solution up to third
order. The implementation of SMM is inspired by *Born and Pfeifer (2014)*
and *Ruge-Murcia (2012)*, whereas the one for GMM is adapted from
*Andreasen, Fernández-Villaverde and Rubio-Ramírez (2018)* and *Mutschler
(2018)*. Successful estimation heavily relies on the accuracy and efficiency of
the perturbation approximation, so it is advised to tune this as much as
possible (see :ref:`stoch-sol-simul`). The method of moments estimator is consistent
and asymptotically normally distributed given certain regularity conditions
(see *Duffie and Singleton (1993)* for SMM and *Hansen (1982)* for GMM).
For instance, it is required to have at least as many moment conditions as
estimated parameters (over-identified or just identified). Moreover, the
Jacobian of the moments with respect to the estimated parameters needs to
have full rank. :ref:`identification-analysis` helps to check this regularity condition.
In the over-identified case of declaring more moment conditions than estimated parameters, the
choice of :opt:`weighting_matrix <weighting_matrix = ['WM1','WM2',...,'WMn']>`
matters for the efficiency of the estimation, because the estimated
orthogonality conditions are random variables with unequal variances and
usually non-zero cross-moment covariances. A weighting matrix allows to
re-weight moments to put more emphasis on moment conditions that are
more informative or better measured (in the sense of having a smaller
variance). To achieve asymptotic efficiency, the weighting matrix needs to
be chosen such that, after appropriate scaling, it has a probability limit
proportional to the inverse of the covariance matrix of the limiting
distribution of the vector of orthogonality conditions. Dynare uses a
Newey-West-type estimator with a Bartlett kernel to compute an estimate of this
so-called optimal weighting matrix. Note that in this over-identified case,
it is advised to perform the estimation in at least two stages by setting
e.g. :opt:`weighting_matrix=['DIAGONAL','DIAGONAL'] <weighting_matrix = ['WM1','WM2',...,'WMn']>`
so that the computation of the optimal weighting matrix benefits from the
consistent estimation of the previous stages. The optimal weighting matrix
is used to compute standard errors and the J-test of overidentifying
restrictions, which tests whether the model and selection of moment
conditions fits the data sufficiently well. If the null hypothesis of a
"valid" model is rejected, then something is (most likely) wrong with either your model
or selection of orthogonality conditions.
In case the (presumed) global minimum of the moment distance function is
located in a region of the parameter space that
is typically considered unlikely (`dilemma of absurd parameters`), you may
opt to choose the :opt:`penalized_estimator <penalized_estimator>` option.
Similar to adding priors to the likelihood, this option incorporates prior
knowledge (i.e. the prior mean) as additional moment restrictions and
weights them by their prior precision to guide the minimization algorithm
to more plausible regions of the parameter space. Ideally, these regions are
characterized by only slightly worse values of the objective function. Note that
adding prior information comes at the cost of a loss in efficiency of the estimator.
 
*Options*
.. command:: varobs VARIABLE_NAME...;
|br| Required. All variables used in the :bck:`matched_moments` block
need to be observable. See :ref:`varobs <varobs>` for more details.
.. block:: matched_moments ;
|br| This block specifies the product moments which are used in estimation.
Currently, only linear product moments (e.g.
:math:`E[y_t], E[y_t^2], E[x_t y_t], E[y_t y_{t-1}], E[y_t^3 x^2_{t-4}]`)
are supported. For other functions like :math:`E[\log(y_t)e^{x_t}]` you
need to declare auxiliary endogenous variables.
Each line inside of the block should be of the form::
VARIABLE_NAME(LEAD/LAG)^POWER*VARIABLE_NAME(LEAD/LAG)^POWER*...*VARIABLE_NAME(LEAD/LAG)^POWER;
where `VARIABLE_NAME` is the name of a declared observable variable,
`LEAD/LAG` is either a negative integer for lags or a positive one
for leads, and `POWER` is a positive integer indicating the exponent on
the variable. You can omit `LEAD/LAG` equal to `0` or `POWER` equal to `1`.
*Example*
For :math:`E[c_t], E[y_t], E[c_t^2], E[c_t y_t], E[y_t^2], E[c_t c_{t+3}], E[y_{t+1}^2 c^3_{t-4}], E[c^3_{t-5} y_{t}^2]`
use the following block:
::
matched_moments;
c;
y;
c*c;
c*y;
y^2;
c*c(3);
y(1)^2*c(-4)^3;
c(-5)^3*y(0)^2;
end;
*Limitations*
1. For GMM, Dynare can only compute the theoretical mean, covariance, and
autocovariances (i.e. first and second moments). Higher-order moments are only supported for SMM.
2. By default, the product moments are not demeaned, unless the
:opt:`prefilter <prefilter = INTEGER>` option is set to 1. That is, by default,
`c*c` corresponds to :math:`E[c_t^2]` and not to :math:`Var[c_t]=E[c_t^2]-E[c_t]^2`.
*Output*
Dynare translates the :bck:`matched_moments` block into a cell array
``M_.matched_moments`` where:
* the first column contains a vector of indices for the chosen variables in declaration order
* the second column contains the corresponding vector of leads and lags
* the third column contains the corresponding vector of powers
During the estimation phase, Dynare will eliminate all redundant or duplicate
orthogonality conditions in ``M_.matched_moments`` and display which
conditions were removed. In the example above, this would be the case for the
last row, which is the same as the second-to-last one. The original block is
saved in ``M_.matched_moments_orig``.
.. block:: estimated_params ;
|br| Required. See :bck:`estimated_params` for the meaning and syntax.
.. block:: estimated_params_init ;
|br| See :bck:`estimated_params_init` for the meaning and syntax.
.. block:: estimated_params_bounds ;
|br| See :bck:`estimated_params_bounds` for the meaning and syntax.
.. command:: method_of_moments (OPTIONS...);
|br| This command runs the method of moments estimation. The following
information will be displayed in the command window:
* Overview of options chosen by the user
* Estimation results for each stage and iteration
* Value of minimized moment distance objective function
* Result of the J-test
* Table of data moments and estimated model moments
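As a schematic illustration, a two-stage GMM estimation at second order could be set up along
the following lines, where the observables, parameters, bounds, and the data file name are
purely hypothetical:

::

    varobs c iv;

    matched_moments;
    c;
    c*c;
    c*c(-1);
    iv*iv;
    c*iv;
    end;

    estimated_params;
    alpha, 0.33, 0.1, 0.6;
    rho,   0.90, 0.0, 0.99;
    end;

    method_of_moments(mom_method = GMM, datafile = 'mydata.mat', order = 2,
                      weighting_matrix = ['DIAGONAL','OPTIMAL']);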
*Necessary options*
.. option:: mom_method = SMM|GMM
"Simulated Method of Moments" is triggered by `SMM` and
"Generalized Method of Moments" by `GMM`.
.. option:: datafile = FILENAME
The name of the file containing the data. See
:opt:`datafile <datafile = FILENAME>` for the meaning and syntax.
*Options common for SMM and GMM*
.. option:: order = INTEGER
Order of perturbation approximation. For GMM only orders 1|2|3 are
supported. For SMM, you can choose an arbitrary order. Note that the
order set in other functions will not overwrite the default.
Default: ``1``.
.. option:: pruning
Discard higher order terms when iteratively computing simulations
of the solution. See :opt:`pruning <pruning>` for more details.
Default: not set for SMM, always set for GMM.
.. option:: penalized_estimator
This option includes deviations of the estimated parameters from the
prior mean as additional moment restrictions and weights them by
their prior precision.
Default: not set.
.. option:: weighting_matrix = ['WM1','WM2',...,'WMn']
Determines the weighting matrix used at each estimation stage. The number of elements
will define the number of stages, i.e. ``weighting_matrix = ['DIAGONAL','DIAGONAL','OPTIMAL']``
performs a three-stage estimation. Possible values for ``WM`` are:
``IDENTITY_MATRIX``
Sets the weighting matrix equal to the identity matrix.
``OPTIMAL``
Uses the optimal weighting matrix computed by a Newey-West-type
estimate with a Bartlett kernel. At the first
stage, the data-moments are used as initial estimate of the
model moments, whereas at subsequent stages the previous estimate
of model moments will be used when computing
the optimal weighting matrix.
``DIAGONAL``
Uses the diagonal of the ``OPTIMAL`` weighting matrix. This choice
puts weights on the specified moments instead of on their linear combinations.
``FILENAME``
The name of the MAT-file (extension ``.mat``) containing a
user-specified weighting matrix. The file must include a positive definite
square matrix called ``weighting_matrix`` with both dimensions
equal to the number of orthogonality conditions.
Default value is ``['DIAGONAL','OPTIMAL']``.
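A sketch of how such a file could be created in MATLAB (the file name and the number of
orthogonality conditions are hypothetical):

::

    n_mom = 5;                          % number of orthogonality conditions in matched_moments
    weighting_matrix = eye(n_mom);      % any positive definite matrix of matching dimension
    save('my_weighting_matrix.mat','weighting_matrix');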
.. option:: weighting_matrix_scaling_factor = DOUBLE
Scaling of weighting matrix in objective function. This value should be chosen to
obtain values of the objective function in a reasonable numerical range to prevent
over- and underflows.
Default: ``1``.
.. option:: bartlett_kernel_lag = INTEGER
Bandwidth of kernel for computing the optimal weighting matrix.
Default: ``20``.
.. option:: se_tolx = DOUBLE
Step size for numerical differentiation when computing standard
errors with a two-sided finite difference method.
Default: ``1e-5``.
.. option:: verbose
Display and store intermediate estimation results in ``oo_.mom``.
Default: not set.
*SMM-specific options*
.. option:: burnin = INTEGER
Number of periods dropped at the beginning of simulation.
Default: ``500``.
.. option:: bounded_shock_support
Trim shocks in simulations to :math:`\pm 2` standard deviations.
Default: not set.
.. option:: seed = INTEGER
Common seed used in simulations.
Default: ``24051986``.
.. option:: simulation_multiple = INTEGER
Multiple of data length used for simulation.
Default: ``7``.
*GMM-specific options*
.. option:: analytic_standard_errors
Compute standard errors using analytical derivatives of moments
with respect to estimated parameters.
Default: not set, i.e. standard errors are computed using a two-sided
finite difference method, see :opt:`se_tolx <se_tolx = DOUBLE>`.
*General options*
.. option:: dirname = FILENAME
Directory in which to store ``estimation`` output.
See :opt:`dirname <dirname = FILENAME>` for more details.
Default: ``<mod_file>``.
.. option:: graph_format = FORMAT
Specify the file format(s) for graphs saved to disk.
See :opt:`graph_format <graph_format = FORMAT>` for more details.
Default: ``eps``.
.. option:: nodisplay
See :opt:`nodisplay`. Default: not set.
.. option:: nograph
See :opt:`nograph`. Default: not set.
.. option:: noprint
See :opt:`noprint`. Default: not set.
.. option:: plot_priors = INTEGER
Control the plotting of priors.
See :opt:`plot_priors <plot_priors = INTEGER>` for more details.
Default: ``1``, i.e. plot priors.
.. option:: prior_trunc = DOUBLE
See :opt:`prior_trunc <prior_trunc = DOUBLE>` for more details.
Default: ``1e-10``.
.. option:: tex
See :opt:`tex`. Default: not set.
*Data options*
.. option:: first_obs = INTEGER
See :opt:`first_obs <first_obs = INTEGER>`.
Default: ``1``.
.. option:: nobs = INTEGER
See :opt:`nobs <nobs = INTEGER>`.
Default: all observations are considered.
.. option:: prefilter = INTEGER
A value of 1 means that the estimation procedure will demean each data
series by its empirical mean and each model moment by its theoretical
mean. See :opt:`prefilter <prefilter = INTEGER>` for more details.
Default: ``0``, i.e. no prefiltering.
.. option:: logdata
See :opt:`logdata <logdata>`. Default: not set.
.. option:: xls_sheet = QUOTED_STRING
See :opt:`xls_sheet <xls_sheet = QUOTED_STRING>`.
.. option:: xls_range = RANGE
See :opt:`xls_range <xls_range = RANGE>`.
*Optimization options*
.. option:: huge_number = DOUBLE
See :opt:`huge_number <huge_number = DOUBLE>`.
Default: ``1e7``.
.. option:: mode_compute = INTEGER | FUNCTION_NAME
See :opt:`mode_compute <mode_compute = INTEGER | FUNCTION_NAME>`.
Default: ``13``, i.e. ``lsqnonlin`` if the MATLAB Optimization Toolbox or
the Octave optim-package are present, ``4``, i.e. ``csminwel`` otherwise.
.. option:: additional_optimizer_steps = [INTEGER]
additional_optimizer_steps = [INTEGER1:INTEGER2]
additional_optimizer_steps = [INTEGER1 INTEGER2]
Vector of additional minimization algorithms run after
``mode_compute``. If :opt:`verbose` option is set, then the additional estimation
results are saved into the ``oo_.mom`` structure prefixed with `verbose_`.
Default: no additional optimization iterations.
.. option:: optim = (NAME, VALUE, ...)
See :opt:`optim <optim = (NAME, VALUE, ...)>`.
.. option:: silent_optimizer
See :opt:`silent_optimizer`.
Default: not set.
*Numerical algorithms options*
.. option:: aim_solver
See :opt:`aim_solver <aim_solver>`. Default: not set.
.. option:: k_order_solver
See :opt:`k_order_solver <k_order_solver>`.
Default: disabled for order 1 and 2, enabled for order 3 and above.
.. option:: dr = OPTION
See :opt:`dr <dr = OPTION>`. Default: ``default``, i.e. generalized
Schur decomposition.
.. option:: dr_cycle_reduction_tol = DOUBLE
See :opt:`dr_cycle_reduction_tol <dr_cycle_reduction_tol = DOUBLE>`.
Default: ``1e-7``.
.. option:: dr_logarithmic_reduction_tol = DOUBLE
See :opt:`dr_logarithmic_reduction_tol <dr_logarithmic_reduction_tol = DOUBLE>`.
Default: ``1e-12``.
.. option:: dr_logarithmic_reduction_maxiter = INTEGER
See :opt:`dr_logarithmic_reduction_maxiter <dr_logarithmic_reduction_maxiter = INTEGER>`.
Default: ``100``.
.. option:: lyapunov = OPTION
See :opt:`lyapunov <lyapunov = OPTION>`. Default: ``default``, i.e.
        based on the Bartels-Stewart algorithm.
.. option:: lyapunov_complex_threshold = DOUBLE
See :opt:`lyapunov_complex_threshold <lyapunov_complex_threshold = DOUBLE>`.
Default: ``1e-15``.
.. option:: lyapunov_fixed_point_tol = DOUBLE
See :opt:`lyapunov_fixed_point_tol <lyapunov_fixed_point_tol = DOUBLE>`.
Default: ``1e-10``.
.. option:: lyapunov_doubling_tol = DOUBLE
See :opt:`lyapunov_doubling_tol <lyapunov_doubling_tol = DOUBLE>`.
Default: ``1e-16``.
.. option:: sylvester = OPTION
See :opt:`sylvester <sylvester = OPTION>`.
Default: ``default``, i.e. uses ``gensylv``.
.. option:: sylvester_fixed_point_tol = DOUBLE
See :opt:`sylvester_fixed_point_tol <sylvester_fixed_point_tol = DOUBLE>`.
Default: ``1e-12``.
.. option:: qz_criterium = DOUBLE
See :opt:`qz_criterium <qz_criterium = DOUBLE>`.
Default: ``0.999999`` as it is assumed that the observables are weakly
stationary.
.. option:: qz_zero_threshold = DOUBLE
See :opt:`qz_zero_threshold <qz_zero_threshold = DOUBLE>`.
Default: ``1e-6``.
.. option:: schur_vec_tol = DOUBLE
Tolerance level used to find nonstationary variables in Schur decomposition
of the transition matrix. Default: ``1e-11``.
.. option:: mode_check
Plots univariate slices through the moments distance objective function around the
computed minimum for each estimated parameter. This is
helpful to diagnose problems with the optimizer.
Default: not set.
.. option:: mode_check_neighbourhood_size = DOUBLE
See :opt:`mode_check_neighbourhood_size <mode_check_neighbourhood_size = DOUBLE>`.
Default: ``0.5``.
.. option:: mode_check_symmetric_plots = INTEGER
See :opt:`mode_check_symmetric_plots <mode_check_symmetric_plots = INTEGER>`.
Default: ``1``.
.. option:: mode_check_number_of_points = INTEGER
See :opt:`mode_check_number_of_points <mode_check_number_of_points = INTEGER>`.
Default: ``20``.
*Output*
``method_of_moments`` stores user options in a structure called
`options_mom_` in the global workspace. After running the estimation,
the parameters ``M_.params`` and the covariance matrices of the shocks
``M_.Sigma_e`` and of the measurement errors ``M_.H`` are set to the
parameters that minimize the quadratic moments distance objective
function. The estimation results are stored in the ``oo_.mom`` structure
with the following fields:
.. matvar:: oo_.mom.data_moments
Variable set by the ``method_of_moments`` command. Stores the mean
of the selected empirical moments of data. NaN values due to leads/lags
or missing data are omitted when computing the mean. Vector of dimension
equal to the number of orthogonality conditions.
.. matvar:: oo_.mom.m_data
Variable set by the ``method_of_moments`` command. Stores the selected
empirical moments at each point in time. NaN values due to leads/lags
or missing data are replaced by the corresponding mean of the moment.
Matrix of dimension time periods times number of orthogonality conditions.
.. matvar:: oo_.mom.Sw
Variable set by the ``method_of_moments`` command. Stores the
Cholesky decomposition of the currently used weighting matrix.
Square matrix of dimensions equal to the number of orthogonality
conditions.
.. matvar:: oo_.mom.model_moments
Variable set by the ``method_of_moments`` command. Stores the implied
selected model moments given the current parameter guess. Model moments
are computed in closed-form from the pruned state-space system for GMM,
whereas for SMM these are based on averages of simulated data. Vector of dimension equal
to the number of orthogonality conditions.
.. matvar:: oo_.mom.Q
Variable set by the ``method_of_moments`` command. Stores the scalar
        value of the quadratic moments distance objective function.
.. matvar:: oo_.mom.model_moments_params_derivs
Variable set by the ``method_of_moments`` command. Stores the analytically
computed Jacobian matrix of the derivatives of the model moments with
respect to the estimated parameters. Only for GMM with :opt:`analytic_standard_errors`.
Matrix with dimension equal to the number of orthogonality conditions
times number of estimated parameters.
.. matvar:: oo_.mom.gmm_stage_*_mode
.. matvar:: oo_.mom.smm_stage_*_mode
.. matvar:: oo_.mom.verbose_gmm_stage_*_mode
.. matvar:: oo_.mom.verbose_smm_stage_*_mode
Variables set by the ``method_of_moments`` command when estimating
with GMM or SMM. Stores the estimated values at stages 1, 2,....
The structures contain the following fields:
- ``measurement_errors_corr``: estimated correlation between two measurement errors
- ``measurement_errors_std``: estimated standard deviation of measurement errors
- ``parameters``: estimated model parameters
- ``shocks_corr``: estimated correlation between two structural shocks.
- ``shocks_std``: estimated standard deviation of structural shocks.
If the :opt:`verbose` option is set, additional fields prefixed with
``verbose_`` are saved for all :opt:`additional_optimizer_steps<additional_optimizer_steps = [INTEGER]>`.
.. matvar:: oo_.mom.gmm_stage_*_std_at_mode
.. matvar:: oo_.mom.smm_stage_*_std_at_mode
.. matvar:: oo_.mom.verbose_gmm_stage_*_std_at_mode
.. matvar:: oo_.mom.verbose_smm_stage_*_std_at_mode
Variables set by the ``method_of_moments`` command when estimating
with GMM or SMM. Stores the estimated standard errors at stages 1, 2,....
The structures contain the following fields:
- ``measurement_errors_corr``: standard error of estimated correlation between two measurement errors
- ``measurement_errors_std``: standard error of estimated standard deviation of measurement errors
- ``parameters``: standard error of estimated model parameters
- ``shocks_corr``: standard error of estimated correlation between two structural shocks.
- ``shocks_std``: standard error of estimated standard deviation of structural shocks.
If the :opt:`verbose` option is set, additional fields prefixed with
``verbose_`` are saved for all :opt:`additional_optimizer_steps<additional_optimizer_steps = [INTEGER]>`.
.. matvar:: oo_.mom.J_test
Variable set by the ``method_of_moments`` command. Structure where the
value of the test statistic is saved into a field called ``j_stat``, the
        degrees of freedom into a field called ``degrees_freedom`` and the p-value
of the test statistic into a field called ``p_val``.
Model Comparison
================
.. command:: model_comparison FILENAME[(DOUBLE)]...;
model_comparison (marginal_density = ESTIMATOR) FILENAME[(DOUBLE)]...;
    |br| This command computes odds ratios and estimates a posterior density
over a collection of models (see e.g. *Koop (2003)*, Ch. 1). The
priors over models can be specified as the *DOUBLE* values,
otherwise a uniform prior over all models is assumed. In contrast
to frequentist econometrics, the models to be compared do not need
to be nested. However, as the computation of posterior odds ratios
is a Bayesian technique, the comparison of models estimated with
maximum likelihood is not supported.
It is important to keep in mind that model comparison of this type
is only valid with proper priors. If the prior does not integrate
to one for all compared models, the comparison is not valid. This
may be the case if part of the prior mass is implicitly truncated
because Blanchard and Kahn conditions (instability or
indeterminacy of the model) are not fulfilled, or because for some
regions of the parameters space the deterministic steady state is
undefined (or Dynare is unable to find it). The compared marginal
densities should be renormalized by the effective prior mass, but
    this is not done by Dynare: it is the user’s responsibility to make
sure that model comparison is based on proper priors. Note that,
for obvious reasons, this is not an issue if the compared marginal
densities are based on Laplace approximations.
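    A minimal call, with hypothetical ``.mod`` file names and prior model probabilities,
    could look as follows:

    ::

        model_comparison my_model(0.7) alt_model(0.3);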
*Options*
 
.. option:: marginal_density = ESTIMATOR
 
Shock Decomposition
===================
 
.. option:: write_xls
 
Saves shock decompositions to Excel file in the main
directory, named
``FILENAME_shock_decomposition_TYPE_FIG_NAME.xls``. This
option requires your system to be configured to be able to
 
.. option:: write_xls
 
Saves shock decompositions to Excel file in the main
directory, named
``FILENAME_shock_decomposition_TYPE_FIG_NAME_initval.xls``. This
option requires your system to be configured to be able to
 
Variable set by the ``forecast`` command, or by the
``estimation`` command if used with the ``forecast`` option
and if no Metropolis-Hastings has been computed (in that case,
and ML or if no Metropolis-Hastings has been computed (in that case,
the forecast is computed for the posterior mode). Fields are
of the form::
 
......@@ -8600,38 +9815,34 @@ the :comm:`bvar_forecast` command.
 
Lower bound of a 90% HPD interval [#f8]_ of forecast
due to parameter uncertainty, but ignoring the effect
of measurement error on observed variables.
it stores the lower bound of the confidence interval.
 
``HPDsup``
 
Upper bound of a 90% HPD forecast interval due to
parameter uncertainty, but ignoring the effect of
measurement error on observed variables.
it stores the upper bound of the confidence interval.
 
``HPDinf_ME``
 
Lower bound of a 90% HPD interval [#f9]_ of forecast
for observed variables due to parameter uncertainty
and measurement error. In case of ML,
it stores the lower bound of the confidence interval.
 
``HPDsup_ME``
 
Upper bound of a 90% HPD interval of forecast for
observed variables due to parameter uncertainty and
measurement error. In case of ML,
it stores the upper bound of the confidence interval.
 
``Mean``
 
Mean of the posterior distribution of forecasts.
 
``Median``
Median of the posterior distribution of forecasts.
``Std``
Standard deviation of the posterior distribution of forecasts.
.. matvar:: oo_.PointForecast
 
Set by the ``estimation`` command, if it is used with the
variables. This is done using the reduced form first order
state-space representation of the DSGE model by finding the
structural shocks that are needed to match the restricted
paths. Consider the augmented state space representation that
stacks both predetermined and non-predetermined variables into a
vector :math:`y_{t}`:
 
 
y_t=Ty_{t-1}+R\varepsilon_t
 
Both :math:`y_t` and :math:`\varepsilon_t` are split up into controlled and
uncontrolled ones, and we assume without loss of generality that the
    constrained endogenous variables and the controlled shocks come first:
.. math::
\begin{pmatrix}
y_{c,t}\\
y_{u,t}
\end{pmatrix}
=
\begin{pmatrix}
T_{c,c} & T_{c,u}\\
T_{u,c} & T_{u,u}
\end{pmatrix}
\begin{pmatrix}
y_{c,t-1}\\
y_{u,t-1}
\end{pmatrix}
+
\begin{pmatrix}
R_{c,c} & R_{c,u}\\
R_{u,c} & R_{u,u}
\end{pmatrix}
\begin{pmatrix}
\varepsilon_{c,t}\\
\varepsilon_{u,t}
\end{pmatrix}
where matrices :math:`T` and :math:`R` are partitioned consistently with the
vectors of endogenous variables and innovations. Provided that matrix
    :math:`R_{c,c}` is square and full rank (a necessary condition is that the
    number of constrained endogenous variables matches the number of controlled innovations),
given :math:`y_{c,t}`, :math:`\varepsilon_{u,t}` and :math:`y_{t-1}` the
first block of equations can be solved for :math:`\varepsilon_{c,t}`:
 
.. math::
 
\varepsilon_{c,t} = R_{c,c}^{-1}\bigl( y_{c,t} - T_{c,c}y_{c,t-1} - T_{c,u}y_{u,t-1} - R_{c,u}\varepsilon_{u,t}\bigr)
and :math:`y_{u,t}` can be updated by evaluating the second block of equations:
.. math::
y_{u,t} = T_{u,c}y_{c,t-1} + T_{u,u}y_{u,t-1} + R_{u,c}\varepsilon_{c,t} + R_{u,u}\varepsilon_{u,t}
By iterating over these two blocks of equations, we can build a forecast for
all the endogenous variables in the system conditional on paths for a subset of the
endogenous variables. If the distribution of the free innovations
:math:`\varepsilon_{u,t}` is provided (*i.e.* some of them have positive
variances) this exercise is replicated (the number of replication is
controlled by the option :opt:`replic` described below) by drawing different
sequences of free innovations. The result is a predictive distribution for
the uncontrolled endogenous variables, :math:`y_{u,t}`, that Dynare will use to report
confidence bands around the point conditional forecast.
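
The following MATLAB/Octave fragment is a minimal sketch of one period of this
two-block iteration (it is not Dynare's internal implementation; the matrices
and the imposed path are purely illustrative, with the constrained variable and
the controlled shock ordered first, as assumed above)::

    nc    = 1;                    % number of constrained variables/controlled shocks
    T     = [0.9 0.1; 0.0 0.5];   % hypothetical transition matrix
    R     = [1.0 0.3; 0.2 1.0];   % hypothetical impact matrix
    y0    = [0; 0];               % y_{t-1}
    y_c   = 0.5;                  % imposed value for the constrained variable
    eps_u = 0;                    % free innovation (zero for the point forecast)
    % solve the first block for the controlled shock
    eps_c = R(1:nc,1:nc) \ (y_c - T(1:nc,:)*y0 - R(1:nc,nc+1:end)*eps_u);
    % evaluate the second block to update the unconstrained variables
    y_u   = T(nc+1:end,:)*y0 + R(nc+1:end,1:nc)*eps_c + R(nc+1:end,nc+1:end)*eps_u;
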
A few things need to be noted. First, the controlled
exogenous variables are set to zero for the uncontrolled periods. This implies
that there is no forecast uncertainty arising from these exogenous variables
in uncontrolled periods. Second, by making use of the first order state
space solution, even if a higher-order approximation was performed, the
conditional forecasts will be based on a first order approximation. Third, since
the controlled exogenous variables are identified on the basis of the
reduced form model (*i.e.* after solving for the expectations), they are
unforeseen shocks from the perspective of the agents in the model. That is,
agents expect the endogenous variables to return to their respective steady
state levels but are surprised in each period by the realisation of shocks
keeping the endogenous variables along a predefined (unexpected) path.
Fourth, if the structural innovations are correlated, because the calibrated
or estimated covariance matrix has non zero off diagonal elements, the
results of the conditional forecasts will depend on the ordering of the
innovations (as declared after ``varexo``). As in VAR models, a Cholesky
decomposition is used to factorise the covariance matrix and identify
orthogonal impulses. It is preferable to declare the correlations in the
model block (explicitly imposing the identification restrictions), unless
you are satisfied with the implicit identification restrictions implied by
the Cholesky decomposition.
 
This command has to be called after ``estimation`` or ``stoch_simul``.
 
......@@ -8742,7 +9993,7 @@ the :comm:`bvar_forecast` command.
 
.. option:: replic = INTEGER
 
Number of simulations. Default: ``5000``.
Number of simulations used to compute the conditional forecast uncertainty. Default: ``5000``.
 
.. option:: conf_sig = DOUBLE
 
......@@ -8937,12 +10188,12 @@ The forecast scenario can contain some simple shocks on the exogenous
variables. This shocks are described using the function
``basic_plan``:
 
.. matcomm:: HANDLE = basic_plan (HANDLE, `VAR_NAME', `SHOCK_TYPE', DATES, MATLAB VECTOR OF DOUBLE | [DOUBLE | EXPR [DOUBLE | EXPR] ] );
.. matcomm:: HANDLE = basic_plan (HANDLE, 'VAR_NAME', 'SHOCK_TYPE', DATES, MATLAB VECTOR OF DOUBLE);
 
Adds to the forecast scenario a shock on the exogenous variable
indicated between quotes in the second argument. The shock type
has to be specified in the third argument between quotes:
surprise in case of an unexpected shock or perfect_foresight
``'surprise'`` in case of an unexpected shock or ``'perfect_foresight'``
for a perfectly anticipated shock. The fourth argument indicates
the period of the shock using a dates class (see :ref:`dates class
members <dates-members>`). The last argument is the shock path
......@@ -8955,7 +10206,7 @@ compatible with the constrained path are in this case computed. In
other words, a conditional forecast is performed. This kind of shock
is described with the function ``flip_plan``:
 
.. matcomm:: HANDLE = flip_plan (HANDLE, `VAR_NAME', `VAR_NAME', `SHOCK_TYPE', DATES, MATLAB VECTOR OF DOUBLE | [DOUBLE | EXPR [DOUBLE | EXPR] ] );
.. matcomm:: HANDLE = flip_plan (HANDLE, 'VAR_NAME', 'VAR_NAME', 'SHOCK_TYPE', DATES, MATLAB VECTOR OF DOUBLE);
 
Adds to the forecast scenario a constrained path on the endogenous
variable specified between quotes in the second argument. The
......@@ -8964,8 +10215,8 @@ is described with the function ``flip_plan``:
values compatible with the constrained path on the endogenous
variable will be computed. The nature of the expectation on the
constrained path has to be specified in the fourth argument
between quotes: surprise in case of an unexpected path or
perfect_foresight for a perfectly anticipated path. The fifth
between quotes: ``'surprise'`` in case of an unexpected path or
``'perfect_foresight'`` for a perfectly anticipated path. The fifth
argument indicates the period where the path of the endogenous
variable is constrained using a dates class (see :ref:`dates class
members <dates-members>`). The last argument contains the
......@@ -8985,8 +10236,8 @@ computed with the command ``det_cond_forecast``:
argument. By default, the past values of the variables are equal
to their steady-state values. The initial date of the forecast can
be provided in the third argument. By default, the forecast will
start at the first date indicated in the ``init_plan
command``. This function returns a dset containing the historical
start at the first date indicated in the ``init_plan``
command. This function returns a dataset containing the historical
and forecast values for the endogenous and exogenous variables.
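
*Example*

    A hedged sketch of a complete forecast scenario inside a ``.mod`` file (the
    variable names ``y``, ``e_y`` and ``e_g``, the dates and the paths are
    purely illustrative)::

        fplan = init_plan(2023Q1:2024Q4);
        // unexpected shock on the exogenous variable e_g in the first two periods
        fplan = basic_plan(fplan, 'e_g', 'surprise', 2023Q1:2023Q2, [0.01, 0.005]);
        // constrained path on y, implemented through e_y, perfectly anticipated
        fplan = flip_plan(fplan, 'y', 'e_y', 'perfect_foresight', 2023Q1:2023Q4, [1, 1.02, 1.03, 1.035]);
        // dset_hist is a dseries object containing the historical observations
        dset_forecast = det_cond_forecast(fplan, dset_hist, 2023Q1);
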
 
 
......@@ -9091,26 +10342,154 @@ commitment with ``ramsey_model``, for optimal policy under discretion
with ``discretionary_policy`` or for optimal simple rules with ``osr``
(also implying commitment).
 
.. command:: planner_objective MODEL_EXPRESSION ;
.. command:: planner_objective MODEL_EXPRESSION ;
|br| This command declares the policy maker objective, for use
with ``ramsey_model`` or ``discretionary_policy``.
You need to give the one-period objective, not the discounted
lifetime objective. The discount factor is given by the
``planner_discount`` option of ``ramsey_model`` and
``discretionary_policy``. The objective function can only contain
current endogenous variables and no exogenous ones. This
limitation is easily circumvented by defining an appropriate
auxiliary variable in the model.
With ``ramsey_model``, you are not limited to quadratic
objectives: you can give any arbitrary nonlinear expression.
With ``discretionary_policy``, the objective function must be quadratic.
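
*Example*

    A minimal sketch of a planner objective declaration and the subsequent
    Ramsey computations (the variable names ``C``, ``N`` and ``i`` and the
    parameter ``beta`` are illustrative and must be declared in the model)::

        planner_objective log(C) - N^2/2;

        ramsey_model(instruments=(i), planner_discount=beta);
        stoch_simul(order=1);
        evaluate_planner_objective;
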
.. command:: evaluate_planner_objective ;
This command computes, displays, and stores the value of the
planner objective function under Ramsey policy or discretion in
``oo_.planner_objective_value``. It will provide both unconditional welfare
and welfare conditional on the initial (i.e. period 0) values of the endogenous
and exogenous state variables inherited by the planner. In a deterministic context,
the respective initial values are set using ``initval`` or ``histval`` (depending
on the exact context).
In a stochastic context, if no initial state values have
been specified with ``histval``, their values are taken to be the steady state
values. Because conditional welfare is computed conditional on optimal
policy by the planner in the first endogenous period (period 1), it is conditional
on the information set of period 1. This information set includes both the
predetermined states inherited from period 0 (specified via ``histval`` for both
endogenous and lagged exogenous states) as well as the period 1 values of
the exogenous shocks. The latter are specified using the perfect foresight syntax
of the ``shocks`` block.
At the current stage, the stochastic context does not support the ``pruning`` option.
At ``order>3``, only the computation of conditional welfare with steady state Lagrange
multipliers is supported. Note that at ``order=2``, the output is based on the second-order
accurate approximation of the variance stored in ``oo_.var``.
*Example (stochastic context)*
::
var a ...;
varexo u;
model;
a = rho*a(-1)+u+u(-1);
...
end;
histval;
u(0)=1;
a(0)=-1;
end;
shocks;
var u; stderr 0.008;
var u;
periods 1;
values 1;
end;
evaluate_planner_objective;
.. matvar:: oo_.planner_objective_value.unconditional
Scalar storing the value of unconditional welfare. In a perfect foresight context,
it corresponds to welfare in the long-run, approximated as welfare in the terminal
simulation period.
.. matvar:: oo_.planner_objective_value.conditional
In a perfect foresight context, this field will be a scalar storing the value of
welfare conditional on the specified initial condition and zero initial Lagrange
multipliers.
In a stochastic context, it will have two subfields:
.. matvar:: oo_.planner_objective_value.conditional.steady_initial_multiplier
Stores the value of the planner objective when the initial
Lagrange multipliers associated with the planner’s problem are set
to their steady state values (see :comm:`ramsey_policy`).
.. matvar:: oo_.planner_objective_value.conditional.zero_initial_multiplier
Stores the value of the planner objective when the initial
Lagrange multipliers associated with the planner’s problem are set
to 0, i.e. it is assumed that the planner exploits its
ability to surprise private agents in the first period of
implementing Ramsey policy. This value corresponds to the planner
implementing optimal policy for the first time and committing not to
re-optimize in the future.
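
    A hedged usage sketch: in a stochastic context, the stored values can be
    retrieved from the MATLAB/Octave prompt after the command has run::

        W_unconditional = oo_.planner_objective_value.unconditional;
        W_steady_mult   = oo_.planner_objective_value.conditional.steady_initial_multiplier;
        W_zero_mult     = oo_.planner_objective_value.conditional.zero_initial_multiplier;
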
Optimal policy under commitment (Ramsey)
----------------------------------------
Dynare can automatically compute the optimal policy choices of a Ramsey planner
who takes the specified private sector equilibrium conditions into account and commits
to future policy choices. Doing so requires specifying the private sector equilibrium
conditions in the ``model`` block and a ``planner_objective`` as well as potentially some
``instruments`` to facilitate computations.
.. warning:: Be careful when employing forward-looking auxiliary variables in the context
of timeless perspective Ramsey computations. They may alter the problem the Ramsey
planner will solve for the first period, although they seemingly leave the private
sector equilibrium unaffected. The reason is that the planner optimizes with respect to variables
dated ``t`` and takes the value of time 0 variables as given, because they are predetermined.
This set of initially predetermined variables will change with forward-looking definitions.
Thus, users are strongly advised to use model-local variables instead.
*Example*
Consider a perfect foresight example where the Euler equation for the
return to capital is given by
::
1/C=beta*1/C(+1)*(R(+1)+(1-delta))
The job of the Ramsey planner in period ``1`` is to choose :math:`C_1` and :math:`R_1`, taking as given
:math:`C_0`. The above equation may seemingly equivalently be written as
 
|br| This command declares the policy maker objective, for use
with ``ramsey_model`` or ``discretionary_policy``.
::
 
You need to give the one-period objective, not the discounted
lifetime objective. The discount factor is given by the
``planner_discount`` option of ``ramsey_model`` and
``discretionary_policy``. The objective function can only contain
current endogenous variables and no exogenous ones. This
limitation is easily circumvented by defining an appropriate
auxiliary variable in the model.
1/C=beta*1/C(+1)*(R_cap);
R_cap=R(+1)+(1-delta);
 
With ``ramsey_model``, you are not limited to quadratic
objectives: you can give any arbitrary nonlinear expression.
due to perfect foresight. However, this changes the problem of the Ramsey planner in the first period
to choosing :math:`C_1` and :math:`R_1`, taking as given both :math:`C_0` and :math:`R^{cap}_0`. Thus,
the relevant return to capital in the Euler equation of the first period is not a
choice of the planner anymore due to the forward-looking nature of the definition in the second line!
 
With ``discretionary_policy``, the objective function must be quadratic.
A correct specification would be to instead define ``R_cap`` as a model-local variable:
::
1/C=beta*1/C(+1)*(R_cap);
#R_cap=R(+1)+(1-delta);
 
Optimal policy under commitment (Ramsey)
----------------------------------------
 
.. command:: ramsey_model (OPTIONS...);
 
......@@ -9169,8 +10548,8 @@ Optimal policy under commitment (Ramsey)
problem and declared with the option ``instruments``. The initial value
of the instrument for steady state finding in this case is set with
``initval``. Note that computing and displaying steady state values
using the ``steady``-command or calls to ``resid`` must come after
the ``ramsey_model`` statement and the ``initval``-block.
using the ``steady`` command or calls to ``resid`` must come after
the ``ramsey_model`` statement and the ``initval`` block.
 
Note that choosing the instruments is partly a matter of interpretation and
you can choose instruments that are handy from a mathematical
......@@ -9193,38 +10572,18 @@ Optimal policy under commitment (Ramsey)
i > 0;
end;
 
.. command:: evaluate_planner_objective ;
This command computes, displays, and stores the value of the
planner objective function
under Ramsey policy in ``oo_.planner_objective_value``, given the
initial values of the endogenous state variables. If not specified
with ``histval``, they are taken to be at their steady state
values. The result is a 1 by 2 vector, where the first entry
stores the value of the planner objective when the initial
Lagrange multipliers associated with the planner’s problem are set
to their steady state values (see :comm:`ramsey_policy`).
In contrast, the second entry stores the value of the planner
objective with initial Lagrange multipliers of the planner’s
problem set to 0, i.e. it is assumed that the planner exploits its
ability to surprise private agents in the first period of
implementing Ramsey policy. This is the value of implementing
optimal policy for the first time and committing not to
re-optimize in the future.
.. command:: ramsey_policy [VARIABLE_NAME...];
ramsey_policy (OPTIONS...) [VARIABLE_NAME...];
 
|br| This command is formally equivalent to the calling sequence
|br| This command is deprecated and formally equivalent to the calling sequence
 
::
 
ramsey_model;
stoch_simul(order=1);
stoch_simul;
evaluate_planner_objective;
 
It computes the first order approximation of the
It computes an approximation of the
policy that maximizes the policy maker’s objective function
subject to the constraints provided by the equilibrium path of the
private economy and under commitment to this optimal policy. The
......@@ -9237,20 +10596,9 @@ Optimal policy under commitment (Ramsey)
around this steady state of the endogenous variables and the
Lagrange multipliers.
 
This first order approximation to the optimal policy conducted by
Dynare is not to be confused with a naive linear quadratic
approach to optimal policy that can lead to spurious welfare
rankings (see *Kim and Kim (2003)*). In the latter, the optimal
policy would be computed subject to the first order approximated
FOCs of the private economy. In contrast, Dynare first computes
the FOCs of the Ramsey planner’s problem subject to the nonlinear
constraints that are the FOCs of the private economy and only then
approximates these FOCs of planner’s problem to first
order. Thereby, the second order terms that are required for a
second-order correct welfare evaluation are preserved.
Note that the variables in the list after the ``ramsey_policy``
command can also contain multiplier names. In that case, Dynare
or ``stoch_simul`` command can also contain multiplier names, but
in a case-sensitive way (e.g. ``MULT_1``). In that case, Dynare
will for example display the IRFs of the respective multipliers
when ``irf>0``.
 
......@@ -9271,16 +10619,12 @@ Optimal policy under commitment (Ramsey)
``steady_state_model`` block or a ``_steadystate.m`` file. See
below.
 
Note that only a first order approximation of the optimal Ramsey
policy is available, leading to a second-order accurate welfare
ranking (i.e. ``order=1`` must be specified).
*Output*
 
This command generates all the output variables of
``stoch_simul``. For specifying the initial values for the
endogenous state variables (except for the Lagrange multipliers),
see :bck:`histval`.
see above.
 
 
*Steady state*
......@@ -9304,7 +10648,8 @@ Optimal policy under discretion
 
It is possible to use the :comm:`estimation` command after the
``discretionary_policy`` command, in order to estimate the model with
optimal policy under discretion.
optimal policy under discretion and :comm:`evaluate_planner_objective`
to compute welfare.
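
*Example*

    A hedged sketch of a discretionary policy exercise (the quadratic
    objective, the instrument and the parameter values are purely
    illustrative)::

        planner_objective pie^2 + lambda_y*y^2;
        discretionary_policy(instruments=(i), planner_discount=0.99) y pie i;
        evaluate_planner_objective;
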
 
*Options*
 
......@@ -9954,7 +11299,10 @@ IRF and moment calibration can be defined in ``irf_calibration`` and
 
When ``(INTEGER:INTEGER)`` is used, the restriction is considered
to be fulfilled by a logical OR. A list of restrictions must
always be fulfilled with logical AND.
always be fulfilled with logical AND. The moment restrictions generally apply to
auto- and cross-correlations between variables. The only exception is a restriction
on the unconditional variance of an endogenous variable, specified as shown in
the example below.
 
*Example*
 
......@@ -9962,15 +11310,16 @@ IRF and moment calibration can be defined in ``irf_calibration`` and
 
moment_calibration;
y_obs,y_obs, [0.5, 1.5]; //[unconditional variance]
y_obs,y_obs(-(1:4)), +; //[sign restriction for first year acf with logical OR]
y_obs,y_obs(-(1:4)), +; //[sign restriction for first year autocorrelation with logical OR]
@#for ilag in -2:2
y_obs,R_obs(@{ilag}), -; //[-2:2 ccf with logical AND]
y_obs,R_obs(@{ilag}), -; //[-2:2 cross correlation with logical AND]
@#endfor
@#for ilag in -4:4
y_obs,pie_obs(@{ilag}), -; //[-4_4 ccf with logical AND]
y_obs,pie_obs(@{ilag}), -; //[-4_4 cross correlation with logical AND]
@#endfor
end;
 
.. _identification-analysis:
 
Performing identification analysis
----------------------------------
......@@ -10103,9 +11452,7 @@ Performing identification analysis
 
.. option:: schur_vec_tol = DOUBLE
 
Tolerance level used to find nonstationary variables in Schur decomposition
of the transition matrix.
Default: ``1.e-11``.
See :opt:`schur_vec_tol <schur_vec_tol = DOUBLE>`.
 
*Identification Strength Options*
 
......@@ -10802,7 +12149,6 @@ below.
UPPER_CHOLESKY;
LOWER_CHOLESKY;
EXCLUSION CONSTANTS;
EXCLUSION LAG INTEGER; VARIABLE_NAME [,VARIABLE_NAME...];
EXCLUSION LAG INTEGER; EQUATION INTEGER, VARIABLE_NAME [,VARIABLE_NAME...];
RESTRICTION EQUATION INTEGER, EXPRESSION = EXPRESSION;
 
......@@ -11490,6 +12836,624 @@ form:
end;
 
 
.. _semi-strutural:
Semi-structural models
======================
Dynare provides tools for semi-structural models, in the vein of the FRB/US
model (see *Brayton and Tinsley (1996)*), where expectations are not necessarily
model consistent but based on a VAR auxiliary model. In the following, it is
assumed that each equation is written as ``VARIABLE = EXPRESSION`` or
``T(VARIABLE) = EXPRESSION`` where ``T(VARIABLE)`` stands for a transformation
of an endogenous variable (``log`` or ``diff``). This representation, where each
equation determines the endogenous variable on the LHS, can be exploited when
simulating the model (see algorithms 12 and 14 in :ref:`solve_algo <solvalg>`)
and is mandatory to define auxiliary models used for computing expectations (see
below).
Auxiliary models
----------------
The two auxiliary models defined in this section are linear backward-looking models
used to form expectations. Both models can be recast as VAR(1)-processes and
therefore offer isomorphic ways of specifying the expectations process, but differ
in their convenience of specifying features like cointegration and error correction.
``var_model`` directly specifies a VAR, while ``trend_component_model`` allows defining
a trend target to which the endogenous variables may be attracted in the long-run
(i.e. an error correction model).
.. command:: var_model (OPTIONS...);
|br| Picks equations in the ``model`` block to form a VAR model. This model can
be used as an auxiliary model in ``var_expectation_model`` or
``pac_model``. It must be of the following form:
.. math ::
Y_t = \mathbf{c} + \sum_{i=1}^p A_i Y_{t-i} + \varepsilon_t
or
.. math ::
A_0 Y_t = \mathbf{c} + \sum_{i=1}^p A_i Y_{t-i} + \varepsilon_t
if the VAR is structural (see below), where :math:`Y_t` and
:math:`\varepsilon_t` are :math:`n\times 1` vectors, :math:`\mathbf{c}` is a
:math:`n\times 1` vector of parameters, :math:`A_i` (:math:`i=0,\ldots,p`)
are :math:`n\times n` matrices of parameters, and :math:`A_0` is a
non-singular square matrix. Vector :math:`\mathbf{c}` and matrices :math:`A_i`
(:math:`i=0,\ldots,p`) are set by Dynare by parsing the equations in the
``model`` block. Then, Dynare builds a VAR(1)-companion form model for
:math:`\mathcal{Y}_t = (1, Y_t, \ldots, Y_{t-p+1})'` as:
.. math ::
\begin{pmatrix}
1\\
Y_t\\
Y_{t-1}\\
\vdots\\
\vdots\\
Y_{t-p+1}
\end{pmatrix}
=
\underbrace{
\begin{pmatrix}
1 & 0_n' & \ldots & \ldots & \ldots & 0_n'\\
\mathbf{c} & A_1 & A_2 & \ldots & \ldots & A_p\\
0_n & I_n & O_n & \ldots & \ldots & O_n\\
0_n & O_n & I_n & O_n & \ldots & O_n\\
\vdots & O_n & \ddots & \ddots & \ddots & \vdots \\
0_n & O_n & \ldots & O_n & I_n & O_n
\end{pmatrix}}_{\mathcal{C}}
\begin{pmatrix}
1\\
Y_{t-1}\\
Y_{t-2}\\
\vdots\\
\vdots\\
Y_{t-p}
\end{pmatrix}
+
\underbrace{
\begin{pmatrix}
0\\
\varepsilon_t\\
0_n\\
\vdots\\
\vdots\\
0_n
\end{pmatrix}}_{\mathcal{\epsilon}_t}
assuming that we are dealing with a reduced form VAR (otherwise, the right-hand
side would additionally be premultiplied by :math:`A_0^{-1}` to obtain the reduced
form representation). If the VAR does not have a constant, we remove the first
line of the system and the first column of the companion matrix
:math:`\mathcal{C}`. Dynare only saves the companion matrix, since that
is the only information required to compute the expectations.
.. matvar:: oo_.var.MODEL_NAME.CompanionMatrix
Reduced form companion matrix of the ``var_model``.
*Options*
.. option:: model_name = STRING
Name of the VAR model, which will be referenced in ``var_expectation_model`` or ``pac_model``
via the ``auxiliary_model_name`` option. Needs to be a valid MATLAB field name.
.. option:: eqtags = [QUOTED_STRING[, QUOTED_STRING[, ...]]]
List of equations in the ``model`` block (referenced using the equation tag ``name``) used to build the VAR model.
.. option:: structural
By default the VAR model is not structural, *i.e.* each equation must
contain exactly one contemporaneous variable (on the LHS). If the
``structural`` option is provided then any variable defined in the system
can appear at time :math:`t` in each equation. Internally Dynare will
rewrite this model as a reduced form VAR (by inverting the implied matrix :math:`A_0`).
*Example*
::
var_model(model_name = toto, eqtags = [ 'X', 'Y', 'Z' ]);
model;
[ name = 'X' ]
x = a*x(-1) + b*x(-2) + c*z(-2) + e_x;
[ name = 'Z' ]
z = f*z(-1) + e_z;
[ name = 'Y' ]
y = d*y(-2) + e*z(-1) + e_y;
end;
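
    A hedged usage sketch: once the auxiliary model has been initialised (for
    instance through the ``var_expectation.initialize`` or ``pac.initialize``
    commands documented below), the companion matrix of the ``toto`` model can
    be inspected from the MATLAB/Octave prompt::

        C = oo_.var.toto.CompanionMatrix;
        disp(eig(C))   % eigenvalues of the companion matrix
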
.. command:: trend_component_model (OPTIONS...);
|br| Picks equations in the model block to form a trend component model. This
model can be used as an auxiliary model in ``var_expectation_model`` or
``pac_model``. It must be of the following form:
.. math ::
\begin{cases}
\Delta X_t &= A_0 (X_{t-1}-C_0 Z_{t-1}) + \sum_{i=1}^p A_i \Delta X_{t-i} + \varepsilon_t\\
Z_t &= Z_{t-1} + \eta_t
\end{cases}
where :math:`X_t` and :math:`Z_t` are :math:`n\times 1` and :math:`m\times 1` vectors
of endogenous variables. :math:`Z_t` defines the trend target to whose linear combination
:math:`C_0 Z_t` the endogenous variables :math:`X_t` will be attracted, provided the implied
error correction matrix :math:`A_0` is negative definite. :math:`\varepsilon_t` and :math:`\eta_t` are :math:`n\times 1`
and :math:`m\times 1` vectors of exogenous variables, :math:`A_i` (:math:`i=0,\ldots,p`)
are :math:`n\times n` matrices of parameters, and :math:`C_0` is a :math:`n\times m` matrix.
This model can also be cast into a VAR(1) model by first rewriting it in levels. Let
:math:`Y_t = (X_t',Z_t')'` and :math:`\zeta_t = (\varepsilon_t',\eta_t')'`. Then we have:
.. math ::
Y_t = \sum_{i=1}^{p+1} B_i Y_{t-i} + \zeta_t
with
.. math ::
B_1 = \begin{pmatrix}
I_n+A_0+A_1 & -\Lambda\\
O_{m,n} & I_m
\end{pmatrix}
where :math:`\Lambda = A_0C_0`,
.. math ::
B_i = \begin{pmatrix}
A_i-A_{i-1} & O_{n,m}\\
O_{m,n} & O_m
\end{pmatrix}
for :math:`i=2,\ldots,p`, and
.. math ::
B_{p+1} = \begin{pmatrix}
-A_p & O_{n,m}\\
O_{m,n} & O_m
\end{pmatrix}
This VAR(p+1) model in levels can again be rewritten in VAR(1) companion form.
.. matvar:: oo_.trend_component.MODEL_NAME.CompanionMatrix
Reduced form companion matrix of the ``trend_component_model``.
*Options*
.. option:: model_name = STRING
Name of the trend component model, which will be referenced in ``var_expectation_model``
or ``pac_model`` via the ``auxiliary_model_name`` option. Needs to be a valid MATLAB field name.
.. option:: eqtags = [QUOTED_STRING[, QUOTED_STRING[, ...]]]
List of equations in the ``model`` block (referenced using the equation tag ``name``) used to build the trend component model.
.. option:: targets = [QUOTED_STRING[, QUOTED_STRING[, ...]]]
List of targets, corresponding to the variables in vector :math:`Z_t`, referenced using the equation tag (``name``) of the associated equation in the ``model`` block. ``targets`` must be a subset of ``eqtags``.
*Example*
::
trend_component_model(model_name=toto, eqtags=['eq:x1', 'eq:x2', 'eq:x1bar', 'eq:x2bar'], targets=['eq:x1bar', 'eq:x2bar']);
model;
[name='eq:x1']
diff(x1) = a_x1_0*(x1(-1)-x1bar(-1))+a_x1_0_*(x2(-1)-x2bar(-1)) + a_x1_1*diff(x1(-1)) + a_x1_2*diff(x1(-2)) + a_x1_x2_1*diff(x2(-1)) + a_x1_x2_2*diff(x2(-2)) + ex1;
[name='eq:x2']
diff(x2) = a_x2_0*(x2(-1)-x2bar(-1)) + a_x2_1*diff(x1(-1)) + a_x2_2*diff(x1(-2)) + a_x2_x1_1*diff(x2(-1)) + a_x2_x1_2*diff(x2(-2)) + ex2;
[name='eq:x1bar']
x1bar = x1bar(-1) + ex1bar;
[name='eq:x2bar']
x2bar = x2bar(-1) + ex2bar;
end;
VAR expectations
----------------
Suppose we wish to forecast a variable :math:`y_t` and that
:math:`y_t` is an element of vector of variables :math:`\mathcal{Y}_t` whose law of
motion is described by a VAR(1) model :math:`\mathcal{Y}_t =
\mathcal{C}\mathcal{Y}_{t-1}+\epsilon_t`. More generally, :math:`y_t` may
be a linear combination of the scalar variables in
:math:`\mathcal{Y}_t`. Let the vector :math:`\alpha` be such that
:math:`y_t = \alpha'\mathcal{Y}_t` (:math:`\alpha` is a selection
vector if :math:`y_t` is a variable in :math:`\mathcal{Y}_t`, *i.e.* a
column of an identity matrix, or an arbitrary vector defining the
weights of a linear combination). Then the best prediction, in the sense of the minimisation of the RMSE, for
:math:`y_{t+h}` given the information set at :math:`t-\tau` (which we assume to include all observables
up to time :math:`t-\tau`, :math:`\mathcal{Y}_{\underline{t-\tau}}`) is:
.. math ::
y_{t+h|t-\tau} = \mathbb E[y_{t+h}|\mathcal{Y}_{\underline{t-\tau}}] = \alpha'\mathcal{C}^{h+\tau} \mathcal{Y}_{t-\tau}
In a semi-structural model, variables appearing in :math:`t+h` (*e.g.*
the expected output gap in a dynamic IS curve or expected inflation in a
(New Keynesian) Phillips curve) will be replaced by the expectation implied by an auxiliary VAR
model. Another use case is for the computation of permanent
incomes. Typically, consumption will depend on something like:
.. math ::
\sum_{h=0}^{\infty} \beta^h y_{t+h|t-\tau}
Assuming that :math:`0<\beta<1` and using the limit of a geometric series, the conditional expectation of this variable can be evaluated based on the same auxiliary model:
.. math ::
\mathbb E \left[\sum_{h=0}^{\infty} \beta^h y_{t+h}\Biggl| \mathcal{Y}_{\underline{t-\tau}}\right] = \alpha'\mathcal{C}^\tau(I-\beta\mathcal{C})^{-1}\mathcal{Y}_{t-\tau}
More generally, it is possible to consider finite discounted sums.
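
For instance, provided :math:`I-\beta\mathcal{C}` is invertible, a finite discounted sum
over horizons :math:`h=a,\ldots,b` evaluates, with the same companion matrix, to:

.. math ::

   \mathbb E \left[\sum_{h=a}^{b} \beta^h y_{t+h}\Biggl| \mathcal{Y}_{\underline{t-\tau}}\right] = \alpha'\mathcal{C}^\tau\left(\beta^a\mathcal{C}^a-\beta^{b+1}\mathcal{C}^{b+1}\right)(I-\beta\mathcal{C})^{-1}\mathcal{Y}_{t-\tau}
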
.. command:: var_expectation_model (OPTIONS...);
|br| Declares a model used to forecast an endogenous variable or linear
combination of variables in :math:`t+h`. More generally, the same model can
be used to forecast the discounted flow of a variable or a linear expression of variables:
.. math ::
\sum_{h=a}^b \beta^{h-\tau}\mathbb E[y_{t+h}|\mathcal{Y}_{\underline{t-\tau}}]
where :math:`(a,b)\in\mathbb N^2` with :math:`a<b`, :math:`\beta\in(0,1]` is a discount factor,
and :math:`\tau` is a finite positive integer.
*Options*
.. option:: model_name = STRING
Name of the VAR based expectation model, which will be referenced in the ``model`` block.
.. option:: auxiliary_model_name = STRING
Name of the associated auxiliary model, defined with ``var_model`` or ``trend_component_model``.
.. option:: expression = VARIABLE_NAME | EXPRESSION
Name of the variable or expression (linear combination of variables) to be expected.
.. option:: discount = PARAMETER_NAME | DOUBLE
Discount factor (:math:`\beta`).
.. option:: horizon = INTEGER | [INTEGER:INTEGER]
The upper limit :math:`b` of the horizon :math:`h` (in which case :math:`a=0`), or range of periods
:math:`a:b` over which the discounted sum is computed (the upper bound can be ``Inf``).
.. option:: time_shift = INTEGER
Shift of the information set (:math:`\tau`), default value is 0.
.. operator:: var_expectation (NAME_OF_VAR_EXPECTATION_MODEL);
|br| This operator is used instead of a leaded variable, e.g. ``X(1)``, in the
``model`` block to substitute a model-consistent forecast with a forecast
based on a VAR model.
*Example*
::
var_model(model_name=toto, eqtags=['X', 'Y', 'Z']);
var_expectation_model(model_name=varexp, expression=x, auxiliary_model_name=toto, horizon=1, discount=beta);
model;
[name='X']
x = a*x(-1) + b*x(-2) + c*z(-2) + e_x;
[name='Z']
z = f*z(-1) + e_z;
[name='Y']
y = d*y(-2) + e*z(-1) + e_y;
foo = .5*foo(-1) + var_expectation(varexp);
end;
In this example ``var_expectation(varexp)`` stands for the one step ahead expectation of ``x``, as a replacement for ``x(1)``.
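
*Example*

    A hedged variant of the above declaration (the auxiliary model ``toto`` and
    the parameter ``beta`` are assumed to be declared as before) using a
    discounted sum instead of a one step ahead forecast, e.g. to build a
    permanent-income type term; the range given to ``horizon`` is written here
    as ``0:Inf`` and should be checked against the ``horizon`` option
    description above::

        var_expectation_model(model_name=varexp_pi, expression=x, auxiliary_model_name=toto, horizon=0:Inf, discount=beta);

    In the ``model`` block, ``var_expectation(varexp_pi)`` would then stand for
    the discounted sum :math:`\sum_{h=0}^{\infty}\beta^h x_{t+h}` expected with
    the auxiliary model.
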
.. matcomm:: var_expectation.initialize(NAME_OF_VAR_EXPECTATION_MODEL);
|br| Initialise the ``var_expectation_model`` by building the companion matrix
of the associated auxiliary ``var_model``. Needs to be executed before attempts to simulate or
estimate the model.
|br|
.. matcomm:: var_expectation.update(NAME_OF_VAR_EXPECTATION_MODEL);
|br| Update/compute the reduced form parameters of ``var_expectation_model``. Needs to be executed
before attempts to simulate or estimate the model and requires the auxiliary ``var_model`` to have
previously been initialized.
|br|
*Example (continued)*
::
var_expectation.initialize('varexp');
var_expectation.update('varexp');
.. warning:: Changes to the parameters of the underlying auxiliary ``var_model`` require calls to
``var_expectation.initialize`` and ``var_expectation.update`` to become effective. Changes to the
``var_expectation_model`` or its associated parameters require a call to ``var_expectation.update``.
PAC equation
------------
In its simplest form, a PAC equation breaks down changes in a variable of
interest :math:`y` into three contributions: (*i*) the lagged deviation from a
target :math:`y^{\star}`, (*ii*) the lagged changes in the variable :math:`y`,
and (*iii*) the expected changes in the target :math:`y^{\star}`:
.. math ::
\Delta y_t = a_0(y_{t-1}^{\star}-y_{t-1}) + \sum_{i=1}^{m-1} a_i \Delta y_{t-i} + \sum_{i=0}^{\infty} d_i \Delta y^{\star}_{t+i} +\varepsilon_t
*Brayton et alii (2000)* shows how such an equation can be derived from the
minimisation of a quadratic cost function penalising expected deviations from
the target and non-smoothness of :math:`y`, where future costs are discounted
(with discount factor :math:`\beta`). They also show that the parameters
:math:`(d_i)_{i\in\mathbb N}` are non-linear functions of the :math:`m`
parameters :math:`a_i` and the discount factor :math:`\beta`. To simulate or
estimate this equation we need to figure out how to determine the expected
changes of the target. This can be done as in the previous section using VAR
based expectations, or considering model consistent expectations (MCE).
To ensure that the endogenous variable :math:`y` is equal to its target
:math:`y^{\star}` in the (deterministic) long run, *i.e.* that the error
correction term is zero in the long run, we can optionally add a growth neutrality
correction to this equation. Suppose that :math:`g` is the long run growth rate, for
:math:`y` and :math:`y^{\star}`, then in the long run (assuming that the data
are in logs) we must have:
.. math ::
g = a_0(y^{\star}_{\infty}-y_{\infty}) + g\sum_{i=1}^{m-1} a_i + g\sum_{i=0}^{\infty} d_i
\Leftrightarrow a_0(y^{\star}_{\infty}-y_{\infty}) = \left(1-\sum_{i=1}^{m-1} a_i-\sum_{i=0}^{\infty} d_i\right) g
Unless additional restrictions are placed on the coefficients
:math:`(a_i)_{i=0}^{m-1}`, i.e. on the form of the minimised cost function, there is
no reason for the right-hand side to be zero. Instead, we can optionally add the
right hand side to the PAC equation, to ensure that the error correction term is
asymptotically zero.
The PAC equations can be generalised by adding exogenous variables. This can be
done in two, non exclusive, manners. We can replace the PAC equation by a convex
combination of the original PAC equation (derived from an optimisation program)
and a linear expression involving exogenous variables (referred as the rule of thumb part as
opposed to the part derived from the minimisation of a cost function; not to be confused with
exogenous shocks):
.. math ::
\Delta y_t = \lambda \left(a_0(y_{t-1}^{\star}-y_{t-1}) + \sum_{i=1}^{m-1} a_i \Delta y_{t-i} + \sum_{i=0}^{\infty} d_i \Delta y^{\star}_{t+i}\right) + (1-\lambda)\gamma'X_t +\varepsilon_t
where :math:`\lambda\in[0,1]` is the weight of the pure PAC equation,
:math:`\gamma` is a :math:`k\times 1` vector of parameters, and :math:`X_t` a
:math:`k\times 1` vector of variables in the rule of thumb part. Or we can
simply add the exogenous variables to the PAC equation (without the weight
:math:`\lambda`):
.. math ::
\Delta y_t = a_0(y_{t-1}^{\star}-y_{t-1}) + \sum_{i=1}^{m-1} a_i \Delta y_{t-i} + \sum_{i=0}^{\infty} d_i \Delta y^{\star}_{t+i} + \gamma'X_t +\varepsilon_t
.. command:: pac_model (OPTIONS...);
|br| Declares a PAC model. A ``.mod`` file can have more than one PAC model or PAC equation, but each PAC equation must be associated with a different PAC model.
*Options*
.. option:: model_name = STRING
Name of the PAC model, which will be referenced in the ``model`` block.
.. option:: auxiliary_model_name = STRING
Name of the associated auxiliary model, defined with ``var_model`` or
``trend_component_model``, to compute the VAR based expectations for the
expected changes in the target, *i.e.* to evaluate
:math:`\sum_{i=0}^{\infty} d_i \Delta y^{\star}_{t+i}`. The infinite sum
will then be replaced by a linear combination of the variables involved in
the companion representation of the auxiliary model. The weights defining
the linear combination are nonlinear functions of the
:math:`(a_i)_{i=0}^{m-1}` coefficients in the PAC equation. This option is
not mandatory; if it is absent, Dynare understands that the expected changes of the
target have to be computed under the MCE assumption. This is done by
recursively rewriting the infinite sum as shown in equation 10 of *Brayton et alii (2000)*.
.. option:: discount = PARAMETER_NAME | DOUBLE
Discount factor (:math:`\beta`) for future expected costs appearing in the
definition of the cost function.
.. option:: growth = PARAMETER_NAME | VARIABLE_NAME | EXPRESSION | DOUBLE
If present a growth neutrality correction is added to the PAC equation. The
user must ensure that the provided value (or long term level if a variable
or expression is given) is consistent with the asymptotic growth rate of the
endogenous variable.
.. operator:: pac_expectation (NAME_OF_PAC_MODEL);
|br| This operator is used instead of the infinite sum,
:math:`\sum_{i=0}^{\infty} d_i \Delta y^{\star}_{t+i}`, in a PAC equation
defined in the ``model`` block. Depending on the assumption regarding the
formation of expectations, it will be replaced by a linear combination of
the variables involved in the companion representation of the auxiliary model
or by a recursive forward equation.
|br|
.. matcomm:: pac.initialize(NAME_OF_PAC_MODEL);
.. matcomm:: pac.update(NAME_OF_PAC_MODEL);
|br| Same as in the previous section for the VAR expectations, initialise the
PAC model, by building the companion matrix of the auxiliary model, and
computes the reduced form parameters of the PAC equation (the weights in the
linear combination of the variables involved in the companion representation
of the auxiliary model, or the parameters of the recursive representation of
the infinite sum in the MCE case).
*Example*
::
trend_component_model(model_name=toto, eqtags=['eq:x1', 'eq:x2', 'eq:x1bar', 'eq:x2bar'], targets=['eq:x1bar', 'eq:x2bar']);
pac_model(auxiliary_model_name=toto, discount=beta, growth=diff(x1(-1)), model_name=pacman);
model;
[name='eq:y']
y = rho_1*y(-1) + rho_2*y(-2) + ey;
[name='eq:x1']
diff(x1) = a_x1_0*(x1(-1)-x1bar(-1)) + a_x1_1*diff(x1(-1)) + a_x1_2*diff(x1(-2)) + a_x1_x2_1*diff(x2(-1)) + a_x1_x2_2*diff(x2(-2)) + ex1;
[name='eq:x2']
diff(x2) = a_x2_0*(x2(-1)-x2bar(-1)) + a_x2_1*diff(x1(-1)) + a_x2_2*diff(x1(-2)) + a_x2_x1_1*diff(x2(-1)) + a_x2_x1_2*diff(x2(-2)) + ex2;
[name='eq:x1bar']
x1bar = x1bar(-1) + ex1bar;
[name='eq:x2bar']
x2bar = x2bar(-1) + ex2bar;
[name='zpac']
diff(z) = e_c_m*(x1(-1)-z(-1)) + c_z_1*diff(z(-1)) + c_z_2*diff(z(-2)) + pac_expectation(pacman) + ez;
end;
pac.initialize('pacman');
pac.update.expectation('pacman');
Estimation of a PAC equation
----------------------------
The PAC equation, introduced in the previous section, can be estimated. This
equation is nonlinear with respect to the estimated parameters
:math:`(a_i)_{i=0}^{m-1}`, since the reduced form parameters (in the computation
of the infinite sum) are nonlinear functions of the autoregressive parameters
and the error correction parameter. *Brayton et alii (2000)* shows how to
estimate the PAC equation by iterative OLS. Although this approach is
implemented in Dynare, mainly for comparison purposes, we also propose NLS
estimation, which is much preferable (asymptotic properties of NLS being more
solidly grounded).
Note that it is currently not feasible to estimate the PAC equation jointly with
the remaining parameters of the model using e.g. Bayesian techniques. Thus, estimation
of the PAC equation can only be conducted conditional on the values of the parameters
of the auxiliary model.
.. warning:: The estimation routines described below require the option
``json=compute`` be passed to the preprocessor (via the command line
or at the top of the ``.mod`` file, see :ref:`dyn-invoc`).
.. matcomm:: pac.estimate.nls(EQNAME, GUESS, DATA, RANGE[, ALGO]);
.. matcomm:: pac.estimate.iterative_ols(EQNAME, GUESS, DATA, RANGE);
|br| Trigger the NLS or iterative OLS estimation of a PAC
equation. ``EQNAME`` is a row char array designating the PAC
equation to be estimated (the PAC equation must have a name
specified with an equation tag). ``DATA`` is a ``dseries`` object
containing the data required for the estimation (*i.e.* data for
all the endogenous and exogenous variables in the equation). The
residual values of the PAC equation (which correspond to a defined
`varexo`) must also be a member of ``DATA``,
but filled with ``NaN`` values. ``RANGE`` is a ``dates`` object
defining the time span of the sample. ``ALGO`` is a row char array
used to select the method (or minimisation algorithm) for NLS.
Possible values are : ``'fmincon'``, ``'fminunc'``,
``'fminsearch'``, ``'lsqnonlin'``, ``'particleswarm'``,
``'csminwel'``, ``'simplex'``, ``'annealing'``, and
``'GaussNewton'``. The first four algorithms require the Mathworks
Optimisation toolbox. The fifth algorithm requires the Mathworks
Global Optimisation toolbox. When the optimisation algorithm
allows it, we impose constraints on the error correction
parameter, which must be positive and smaller than 1 (this is the case
for ``'fmincon'``, ``'lsqnonlin'``, ``'particleswarm'``, and
``'annealing'``). The default optimisation algorithm is
``'csminwel'``. ``GUESS`` is a structure containing the initial
guess values for the estimated parameters. Each field is the name
of a parameter in the PAC equation and holds the initial guess for
this parameter. If some parameters are calibrated, then they
should not be members of the ``GUESS`` structure (and values have
to be provided in the ``.mod`` file before the call to the
estimation routine).
For the NLS routine the estimation results are displayed in a
table after the estimation. For both the NLS and iterative OLS
routines, the results are saved in ``oo_`` (under the fields
``nls`` or ``iterative_ols``). Also, the values of the parameters
are updated in ``M_.params``.
*Example (continued)*
::
// Set the initial guess for the estimated parameters
eparams.e_c_m = .9;
eparams.c_z_1 = .5;
eparams.c_z_2 = .2;
// Define the dataset used for estimation
edata = TrueData;
edata.ez = dseries(NaN); // Set to NaN the residual of the equation.
pac.estimate.nls('zpac', eparams, edata, 2005Q1:2005Q1+200, 'annealing');
.. warning:: The specification of `GUESS` and `DATA` involves the use of structures.
As such, their subfields will not be cleared across Dynare runs as the structures
stay in the workspace. Be careful to clear these structures from the memory
(e.g. within the ``.mod`` file) when e.g. changing which parameters are calibrated.
Displaying and saving results
=============================
 
......@@ -11854,7 +13818,7 @@ Macro directives
@#define w = [ "US", "EA" ] // String array
@#define u = [ 1, ["EA"] ] // Mixed array
@#define z = 3 + v[2] // Equals 5
@#define t = ("US" in w) // Equals 1 (true)
@#define t = ("US" in w) // Equals true
@#define f(x) = " " + x + y // Function `f` with argument `x`
// returns the string ' ' + x + 'US'
 
......@@ -12303,7 +14267,7 @@ Pass everything contained within the verbatim block to the
 
In order to force this behavior you can use the ``verbatim``
block. This is useful when the code you want passed to the
``<mod_file>.m`` file contains tokens recognized by the Dynare
driver file contains tokens recognized by the Dynare
preprocessor.
 
*Example*
......@@ -12312,7 +14276,7 @@ Pass everything contained within the verbatim block to the
 
verbatim;
% Anything contained in this block will be passed
% directly to the <modfile>.m file, including comments
% directly to the driver file, including comments
var = 1;
end;
 
......@@ -12449,7 +14413,7 @@ Misc commands
``<<M_.fname>>_latex_parameters.tex.`` The command writes the
values of the parameters currently stored. Thus, if parameters are
set or changed in the steady state computation, the command should
be called after a steady-command to make sure the parameters were
be called after a ``steady`` command to make sure the parameters were
correctly updated. The long names can be used to add parameter
descriptions. Requires the following LaTeX packages:
``longtable, booktabs``.
......@@ -12509,11 +14473,11 @@ Misc commands
very time-consuming, and use of this option may be
counter-productive in those cases.
 
.. [#f3] See option :ref:`conf_sig <confsig>` to change the size of
the HPD interval.
.. [#f3] See options :ref:`conf_sig <confsig>` and :opt:`mh_conf_sig <mh_conf_sig = DOUBLE>`
to change the size of the HPD interval.
 
.. [#f4] See option :ref:`conf_sig <confsig>` to change the size of
the HPD interval.
.. [#f4] See options :ref:`conf_sig <confsig>` and :opt:`mh_conf_sig <mh_conf_sig = DOUBLE>`
to change the size of the HPD interval.
 
.. [#f5] When the shocks are correlated, it is the decomposition of
orthogonalized shocks via Cholesky decomposition according to
......
......@@ -15,7 +15,7 @@ class and methods for dates. Below, you will first find the class and
methods used for creating and dealing with dates and then the class
used for using time series. Dynare also provides an interface to the
X-13 ARIMA-SEATS seasonal adjustment program produced, distributed, and
maintained by the US Census Bureau.
maintained by the U.S. Census Bureau (2020).
Dates
......@@ -1025,8 +1025,9 @@ The dseries class
``.xls/.xlsx`` (Octave only supports ``.xlsx`` files and the
`io <https://octave.sourceforge.io/io/>`__ package from
Octave-Forge must be installed). The extension of the file
should be explicitly provided. A typical ``.m`` file will have
the following form::
should be explicitly provided.
A typical ``.m`` file will have the following form::
FREQ__ = 4;
INIT__ = '1994Q3';
......@@ -1051,6 +1052,12 @@ The dseries class
typically useful if ``INIT__`` is not provided in the data
file.
If an ``.xlsx`` file is used, the first row should be a header
containing the variable names. The first column may contain date
information that must correspond to a valid date format recognized
by Dynare. If such date information is specified in the first column,
its header name must be left empty.
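
    A minimal hedged sketch of loading such a spreadsheet (the file name is
    illustrative)::

        ds = dseries('mydata.xlsx');  % first column holds the dates, its header cell left empty
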
.. construct:: dseries (DATA_MATRIX[,INITIAL_DATE[,LIST_OF_NAMES[,TEX_NAMES]]])
dseries (DATA_MATRIX[,RANGE_OF_DATES[,LIST_OF_NAMES[,TEX_NAMES]]])
......@@ -1512,7 +1519,7 @@ The dseries class
.. dseriesmethod:: B = detrend (A, m)
dentrend_ (A, m)
detrend_ (A, m)
|br| Detrends ``dseries`` object ``A`` with a fitted
polynomial of order ``m``. Note that each variable is
......@@ -2939,8 +2946,8 @@ X-13 ARIMA-SEATS interface
|br| The x13 class provides a method for each X-13 command as
documented in the X-13 ARIMA-SEATS reference manual (`x11`,
`automdl`, `estimate`, ...), options can then be passed by
key/value pairs. The ``x13`` class has 22 members:
`automdl`, `estimate`, ...). The respective options (see Chapter 7 of U.S. Census Bureau (2020))
can then be passed by key/value pairs. The ``x13`` class has 22 members:
:arg y: ``dseries`` object with a single variable.
:arg x: ``dseries`` object with an arbitrary number of variables (to be used in the REGRESSION block).
......@@ -2984,7 +2991,7 @@ X-13 ARIMA-SEATS interface
same time span.
The Following methods allow to set sequence of X-13 commands, write an `.spc` file and run the X-13 binary:
The following methods allow setting a sequence of X-13 commands, writing an `.spc` file, and running the X-13 binary:
.. x13method:: A = arima (A, key, value[, key, value[, [...]]])
......@@ -3019,7 +3026,10 @@ X-13 ARIMA-SEATS interface
Interface to the ``transform`` command, see the X-13
ARIMA-SEATS reference manual. All the options must be passed
by key/value pairs.
by key/value pairs. For example, the key/value pair ``function,log``
instructs the use of a multiplicative instead of an additive seasonal pattern,
while ``function,auto`` triggers an automatic selection between the two based
on their fit.
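
    For instance, a hedged one-liner imposing a multiplicative (log-additive)
    decomposition on an existing ``x13`` object ``o``::

        o.transform('function','log');
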
.. x13method:: A = outlier (A, key, value[, key, value[, [...]]])
......@@ -3127,6 +3137,11 @@ X-13 ARIMA-SEATS interface
``A.results``. When it makes sense these results are saved in
``dseries`` objects (*e.g.* for forecasts or filtered variables).
.. x13method:: clean (A)
Removes the temporary files created by an x13 run that store the intermediate
results. This method allows keeping the main folder clean but will also
delete potentially important debugging information.
*Example*
......@@ -3144,6 +3159,55 @@ X-13 ARIMA-SEATS interface
>> o.run();
The above example shows a run of X13 with various commands and options specified.
*Example*
::
% 1949 1950 1951 1952 1953 1954 1955 1956 1957 1958 1959 1960
y = [112 115 145 171 196 204 242 284 315 340 360 417 ... % Jan
118 126 150 180 196 188 233 277 301 318 342 391 ... % Feb
132 141 178 193 236 235 267 317 356 362 406 419 ... % Mar
129 135 163 181 235 227 269 313 348 348 396 461 ... % Apr
121 125 172 183 229 234 270 318 355 363 420 472 ... % May
135 149 178 218 243 264 315 374 422 435 472 535 ... % Jun
148 170 199 230 264 302 364 413 465 491 548 622 ... % Jul
148 170 199 242 272 293 347 405 467 505 559 606 ... % Aug
136 158 184 209 237 259 312 355 404 404 463 508 ... % Sep
119 133 162 191 211 229 274 306 347 359 407 461 ... % Oct
104 114 146 172 180 203 237 271 305 310 362 390 ... % Nov
118 140 166 194 201 229 278 306 336 337 405 432 ]'; % Dec
ts = dseries(y,'1949M1');
o = x13(ts);
o.transform('function','auto');
o.automdl('savelog','all');
o.x11('save','(d11 d10)');
o.run();
o.clean();
y_SA=o.results.d11;
y_seasonal_pattern=o.results.d10;
figure('Name','Comparison raw data and SAed data');
plot(ts.dates,log(o.y.data),ts.dates,log(y_SA.data),ts.dates,log(y_seasonal_pattern.data))
The above example shows how to remove a seasonal pattern from a time series.
``o.transform('function','auto')`` instructs the subsequent ``o.automdl()`` command
to check whether an additional or a multiplicative pattern fits the data better (the latter is
the case in the current example). The ``o.automdl('savelog','all')`` automatically selects a fitting
ARIMA model and saves all relevant output to the .log-file. The ``o.x11('save','(d11, d10)')`` instructs
``x11`` to save both the final seasonally adjusted series ``d11`` and the final seasonal factor ``d10``
into ``dseries`` with the respective names in the output structure ``o.results``. ``o.clean()`` removes the
temporary files created by ``o.run()``. Among these are the ``.log``-file storing
summary information, the ``.err``-file storing information on problems encountered,
the ``.out``-file storing the raw output, and the `.spc`-file storing the specification for the `x11` run.
There may be further files depending on the output requested. The last part of the example reads out the
results and plots a comparison of the logged raw data and its log-additive decomposition into a
seasonal pattern and the seasonally adjusted series.
Miscellaneous
=============
......
......@@ -36,8 +36,8 @@ class DynareLexer(RegexLexer):
"dynare","var","varexo","varexo_det","parameters","change_type","model_local_variable",
"predetermined_variables","trend_var","log_trend_var","external_function",
"write_latex_original_model","write_latex_dynamic_model",
"write_latex_static_model","resid","initval_file","histval_file","dsample",
"periods","values","corr","steady","check","model_diagnostics","model_info",
"write_latex_static_model","write_latex_steady_state_model","resid","initval_file","histval_file","dsample",
"periods","values","scales","corr","stderr","steady","check","model_diagnostics","model_info",
"print_bytecode_dynamic_model"," print_bytecode_static_model",
"perfect_foresight_setup","perfect_foresight_solver","simul","stoch_simul",
"extended_path","varobs","estimation","unit_root_vars","bvar_density",
......@@ -52,13 +52,15 @@ class DynareLexer(RegexLexer):
"save_params_and_steady_state","load_params_and_steady_state",
"dynare_version","write_latex_definitions","write_latex_parameter_table",
"write_latex_prior_table","collect_latex_files","prior_function",
"posterior_function","generate_trace_plots","evaluate_planner_objective")
"posterior_function","generate_trace_plots","evaluate_planner_objective",
"occbin_setup","occbin_solver","occbin_write_regimes","occbin_graph","method_of_moments",
"var_model","trend_component_model","var_expectation_model","pac_model")
report_commands = ("report","addPage","addSection","addGraph","addTable",
"addSeries","addParagraph","addVspace","write","compile")
operators = (
"STEADY_STATE","EXPECTATION")
"STEADY_STATE","EXPECTATION","var_expectation","pac_expectation")
macro_dirs = (
"@#includepath", "@#include", "@#define", "@#if",
......@@ -80,6 +82,7 @@ class DynareLexer(RegexLexer):
'shock_groups','conditional_forecast_paths','optim_weights',
'osr_params_bounds','ramsey_constraints','irf_calibration',
'moment_calibration','identification','svar_identification',
'matched_moments','occbin_constraints','surprise','overwrite','bind','relax',
'verbatim','end','node','cluster','paths','hooks'), prefix=r'\b', suffix=r'\s*\b'),Keyword.Reserved),
# FIXME: Commands following multiline comments are not highlighted properly.
......
% ----------------------------------------------------------------
% AMS-LaTeX Paper ************************************************
% **** -----------------------------------------------------------
\documentclass[12pt,a4paper,pdftex,nofootinbib]{article}
\usepackage[cp1252]{inputenc}
\documentclass[12pt,a4paper,pdftex]{article}
\usepackage[margin=2.5cm]{geometry}
\usepackage[utf8]{inputenc}
\usepackage{amssymb,amsmath}
\usepackage[pdftex]{graphicx}
\usepackage{graphicx}
\usepackage{epstopdf}
\usepackage{natbib}
\usepackage{verbatim}
\usepackage[pdftex]{color}
\usepackage{xcolor}
\usepackage{psfrag}
\usepackage{setspace}
\usepackage{rotating}
......@@ -49,6 +50,15 @@
\def \supp{{\rm supp}}
\def \var{{\rm var}}
\usepackage[pdfpagelabels]{hyperref}
\hypersetup{
pdfproducer = {LaTeX},
colorlinks,
linkcolor=blue,
filecolor=yellow,
urlcolor=green,
citecolor=green}
% ----------------------------------------------------------------
\begin{document}
......@@ -349,7 +359,7 @@ Finally, the DYNARE command line options are:
\item \verb"parallel": trigger the parallel computation using the first cluster specified in config file
\item \verb"parallel=<clustername>": trigger the parallel computation, using the given cluster
\item \verb"parallel_slave_open_mode": use the leaveSlaveOpen mode in the cluster
\item \verb"parallel_test": just test the cluster, don’t actually run the MOD file
\item \verb"parallel_test": just test the cluster, don't actually run the MOD file
\end{itemize}
......@@ -828,7 +838,7 @@ The modified \verb"random_walk_metropolis_hastings.m" is therefore:
\noindent\begin{tabular}[b]{| p{\linewidth} |}
\hline
\begin{verbatim}
function random_walk_metropolis_hastings(TargetFun,ProposalFun,…,varargin)
function random_walk_metropolis_hastings(TargetFun,ProposalFun,...,varargin)
[...]
% here we wrap all local variables needed by the <*>_core function
localVars = struct('TargetFun', TargetFun, ...
......@@ -970,11 +980,11 @@ On the other hand, under the parallel implementation, a parallel monitoring plot
\section{Parallel DYNARE: testing}
We checked the new parallel platform for DYNARE performing a number of tests, using different models and computer architectures. We present here all tests performed with Windows XP/MATLAB. However, similar tests were performed successfully under Linux/Ubuntu environment.
In the Bayesian estimation of DSGE models with DYNARE, most of the computing time is devoted to the posterior parameter estimation with the Metropolis algorithm. The first and second tests are therefore focused on the parallelization of the Random Walking Metropolis Hastings algorithm (Sections \ref{s:test1}-\ref{s:test2}). In addition, further tests (Sections \ref{s:test3}-\ref{s:test4}) are devoted to test all the parallelized functions in DYNARE. Finally, we compare the two parallel implementations of the Metropolis Hastings algorithms, available in DYNARE: the Independent and the Random Walk (Section \ref{s:test5}).
In the Bayesian estimation of DSGE models with DYNARE, most of the computing time is devoted to the posterior parameter estimation with the Metropolis algorithm. The first and second tests are therefore focused on the parallelization of the Random Walk Metropolis-Hastings algorithm (Sections \ref{s:test1}-\ref{s:test2}). In addition, further tests (Sections \ref{s:test3}-\ref{s:test4}) are devoted to testing all the parallelized functions in DYNARE. %Finally, we compare the two parallel implementations of the Metropolis Hastings algorithms, available in DYNARE: the Independent and the Random Walk (Section \ref{s:test5}).
\subsection{Test 1.}\label{s:test1}
The main goal here was to evaluate the parallel package on a \emph{fixed hardware platform} and using chains of \emph{variable length}. The model used for testing is a modification of \cite{Hradisky_etal_2006}. This is a small scale open economy DSGE model with 6 observed variables, 6 endogenous variables and 19 parameters to be estimated.
We estimated the model on a bi-processor machine (Fujitsu Siemens, Celsius R630) powered with an Intel(R) Xeon(TM) CPU 2.80GHz with Hyper-Threading Technology; first with the original serial Metropolis and subsequently using the parallel solution, to take advantage of the two-processor technology. We ran chains of increasing length: 2500, 5000, 10,000, 50,000, 100,000, 250,000, 1,000,000.
We estimated the model on a bi-processor machine (Fujitsu Siemens, Celsius R630) powered with an Intel\textsuperscript{\textregistered} Xeon\texttrademark{} CPU 2.80GHz with Hyper-Threading Technology; first with the original serial Metropolis and subsequently using the parallel solution, to take advantage of the two-processor technology. We ran chains of increasing length: 2500, 5000, 10,000, 50,000, 100,000, 250,000, 1,000,000.
\begin{figure}[!ht]
\begin{centering}
......@@ -997,8 +1007,8 @@ Overall results are given in Figure \ref{fig:test_time_comp}, showing the comput
The aim of the second test was to verify whether results were robust across different hardware platforms.
We estimated the model with chain lengths of 1,000,000 runs on the following hardware platforms:
\begin{itemize}
\item Single processor machine: Intel(R) Pentium4(R) CPU 3.40GHz with Hyper-Threading Technology (Fujitsu-Siemens Scenic Esprimo);
\item Bi-processor machine: two Intel(R) Xeon(TM) 2.80GHz CPUs with Hyper-Threading Technology (Fujitsu-Siemens, Celsius R630);
\item Single processor machine: Intel\textsuperscript{\textregistered} Pentium4\textsuperscript{\textregistered} CPU 3.40GHz with Hyper-Threading Technology (Fujitsu-Siemens Scenic Esprimo);
\item Bi-processor machine: two Intel\textsuperscript{\textregistered} Xeon\texttrademark{} 2.80GHz CPUs with Hyper-Threading Technology (Fujitsu-Siemens, Celsius R630);
\item Dual core machine: Intel Centrino T2500 2.00GHz Dual Core (Fujitsu-Siemens, LifeBook S Series).
\end{itemize}
......@@ -1042,7 +1052,7 @@ Unplugged network cable. &
Given the excellent results reported above, we have parallelized many other DYNARE functions. This implies that parallel instances can be invoked many times during a single DYNARE session. Under the basic parallel toolbox implementation, which we call the `Open/Close' strategy, MATLAB instances are opened and closed many times by system calls, possibly slowing down the computation, especially for `entry-level' computer resources. As mentioned before, this suggested implementing an alternative strategy for the parallel toolbox, which we call the `Always-Open' strategy, where the slave MATLAB threads, once opened, stay alive and wait for new tasks assigned by the master until the full DYNARE procedure is completed. We next present the tests of these latest implementations.
\subsection{Test 3}\label{s:test3}
In this Section we use the \cite{Lubik2003} model as the test model\footnote{The \cite{Lubik2003} model is also selected as the `official' test model for the parallel toolbox in DYNARE.} and a very simple, widely diffused class of computer: the netbook personal computer. In particular, we used a Dell Mini 10 with an Intel® Atom™ Z520 processor (1.33 GHz, 533 MHz), 1 GB of RAM (with Hyper-Threading). First, we tested the computational gain of running a full Bayesian estimation: Metropolis (two parallel chains), MCMC diagnostics, posterior IRFs, filtered and smoothed variables, forecasts, etc. In other words, we designed DYNARE sessions that invoke all parallelized functions. Results are shown in Figures \ref{fig:netbook_complete_openclose}-\ref{fig:netbook_partial_openclose}.
In this Section we use the \cite{Lubik2003} model as the test model\footnote{The \cite{Lubik2003} model is also selected as the `official' test model for the parallel toolbox in DYNARE.} and a very simple, widely diffused class of computer: the netbook personal computer. In particular, we used a Dell Mini 10 with an Intel\textsuperscript{\textregistered} Atom\texttrademark{} Z520 processor (1.33 GHz, 533 MHz), 1 GB of RAM (with Hyper-Threading). First, we tested the computational gain of running a full Bayesian estimation: Metropolis (two parallel chains), MCMC diagnostics, posterior IRFs, filtered and smoothed variables, forecasts, etc. In other words, we designed DYNARE sessions that invoke all parallelized functions. Results are shown in Figures \ref{fig:netbook_complete_openclose}-\ref{fig:netbook_partial_openclose}.
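As an illustration, a session of this kind might be triggered by an estimation command along the following lines, where the data file, the option values and the variable names are purely illustrative:
\begin{verbatim}
estimation(datafile=mydata, mode_compute=4, mh_replic=10000,
           mh_nblocks=2, bayesian_irf, smoother, filtered_vars,
           forecast=8) y pie R;
\end{verbatim}
With two Metropolis chains (\verb"mh_nblocks=2"), the MCMC diagnostics, the posterior IRFs and the smoothed, filtered and forecasted variables all invoke the parallelized routines described above.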
\begin{figure}[p]
\begin{centering}
% Requires \usepackage{graphicx}
......@@ -1143,49 +1153,49 @@ The methodology identified for parallelizing MATLAB codes within DYNARE proved t
\begin{figure}
\begin{centering}
% Requires \usepackage{graphicx}
\epsfxsize=250pt \epsfbox{RWMH_quest1_PriorsAndPosteriors1Comp.pdf}
\epsfxsize=300pt \epsfbox{RWMH_quest1_PriorsAndPosteriors1Comp.pdf}
\caption{Prior (grey lines) and posterior density of estimated parameters (black = 100,000 runs; red = 1,000,000 runs) using the RWMH algorithm \citep[QUEST III model][]{Ratto_et_al_EconModel2009}.}\label{fig:quest_RWMH_comp1}
\end{centering}
\end{figure}
\begin{figure}
\begin{centering}
% Requires \usepackage{graphicx}
\epsfxsize=250pt \epsfbox{RWMH_quest1_PriorsAndPosteriors2Comp.pdf}
\epsfxsize=300pt \epsfbox{RWMH_quest1_PriorsAndPosteriors2Comp.pdf}
\caption{Prior (grey lines) and posterior density of estimated parameters (black = 100,000 runs; red = 1,000,000 runs) using the RWMH algorithm \citep[QUEST III model][]{Ratto_et_al_EconModel2009}.}\label{fig:quest_RWMH_comp2}
\end{centering}
\end{figure}
\begin{figure}
\begin{centering}
% Requires \usepackage{graphicx}
\epsfxsize=250pt \epsfbox{RWMH_quest1_PriorsAndPosteriors3Comp.pdf}
\epsfxsize=300pt \epsfbox{RWMH_quest1_PriorsAndPosteriors3Comp.pdf}
\caption{Prior (grey lines) and posterior density of estimated parameters (black = 100,000 runs; red = 1,000,000 runs) using the RWMH algorithm \citep[QUEST III model][]{Ratto_et_al_EconModel2009}.}\label{fig:quest_RWMH_comp3}
\end{centering}
\end{figure}
\begin{figure}
\begin{centering}
% Requires \usepackage{graphicx}
\epsfxsize=250pt \epsfbox{RWMH_quest1_PriorsAndPosteriors4Comp.pdf}
\epsfxsize=300pt \epsfbox{RWMH_quest1_PriorsAndPosteriors4Comp.pdf}
\caption{Prior (grey lines) and posterior density of estimated parameters (black = 100,000 runs; red = 1,000,000 runs) using the RWMH algorithm \citep[QUEST III model][]{Ratto_et_al_EconModel2009}.}\label{fig:quest_RWMH_comp4}
\end{centering}
\end{figure}
\begin{figure}
\begin{centering}
% Requires \usepackage{graphicx}
\epsfxsize=250pt \epsfbox{RWMH_quest1_PriorsAndPosteriors5Comp.pdf}
\epsfxsize=300pt \epsfbox{RWMH_quest1_PriorsAndPosteriors5Comp.pdf}
\caption{Prior (grey lines) and posterior density of estimated parameters (black = 100,000 runs; red = 1,000,000 runs) using the RWMH algorithm \citep[QUEST III model][]{Ratto_et_al_EconModel2009}.}\label{fig:quest_RWMH_comp5}
\end{centering}
\end{figure}
\begin{figure}
\begin{centering}
% Requires \usepackage{graphicx}
\epsfxsize=250pt \epsfbox{RWMH_quest1_PriorsAndPosteriors6Comp.pdf}
\epsfxsize=300pt \epsfbox{RWMH_quest1_PriorsAndPosteriors6Comp.pdf}
\caption{Prior (grey lines) and posterior density of estimated parameters (black = 100,000 runs; red = 1,000,000 runs) using the RWMH algorithm \citep[QUEST III model][]{Ratto_et_al_EconModel2009}.}\label{fig:quest_RWMH_comp6}
\end{centering}
\end{figure}
\begin{figure}
\begin{centering}
% Requires \usepackage{graphicx}
\epsfxsize=250pt \epsfbox{RWMH_quest1_PriorsAndPosteriors7Comp.pdf}
\epsfxsize=300pt \epsfbox{RWMH_quest1_PriorsAndPosteriors7Comp.pdf}
\caption{Prior (grey lines) and posterior density of estimated parameters (black = 100,000 runs; red = 1,000,000 runs) using the RWMH algorithm \citep[QUEST III model][]{Ratto_et_al_EconModel2009}.}\label{fig:quest_RWMH_comp7}
\end{centering}
\end{figure}
......
......@@ -42,6 +42,11 @@
% All examples suppose that the prefix is 'dyn' and that your_model.mat
% has been loaded into Matlab.
%
% You could e.g. use
% load('example1.mat');
% shocks=randn(size(dyn_shocks,1),1000)'*chol(dyn_vcov_exo);
% dynare_simul('example1',shocks')
%
% 1. response to permanent negative shock to the third exo var EPS3 for
% 100 periods
%
......@@ -168,9 +173,5 @@ end
seed = ceil(10000*rand(1,1));
% call dynare_simul_
[err,r]=dynare_simul_(order-1,nstat,npred,nboth,nforw,...
r=dynare_simul_(order-1,nstat,npred,nboth,nforw,...
nexog,ystart,shocks,vcov_exo,seed,ss,dr);
\ No newline at end of file
if err
error('Simulation failed')
end
\ No newline at end of file
......@@ -154,7 +154,7 @@ Approximation::walkStochSteady()
JournalRecordPair pa(journal);
pa << "Approximation about stochastic steady for sigma=" << sigma_so_far+dsigma << endrec;
Vector last_steady(model.getSteady());
Vector last_steady(const_cast<const Vector &>(model.getSteady()));
// calculate fix-point of the last rule for ‘dsigma’
/* We form the DRFixPoint object from the last rule with σ=dsigma. Then
......@@ -180,7 +180,7 @@ Approximation::walkStochSteady()
minus the old steady state. Then we create StochForwardDerivs object,
which calculates the derivatives of g** expectations at new sigma and
new steady. */
Vector dy(model.getSteady());
Vector dy(const_cast<const Vector &>(model.getSteady()));
dy.add(-1.0, last_steady);
StochForwardDerivs<Storage::fold> hh(ypart, model.nexog(), *rule_ders_ss, mom, dy,
......
......@@ -147,7 +147,7 @@ AtomAssignings::add_assignment(int asgn_off, const string &str, int name_len,
if (lname2expr.find(name) != lname2expr.end())
{
// Prevent the occurrence of #415
std::cerr << "Changing the value of " << name << " is not supported. Aborting." << std::endl;
std::cerr << "Changing the value of " << name << " through a second assignment (e.g. in initval) is not supported. Aborting." << std::endl;
exit(EXIT_FAILURE);
}
lname2expr[name] = order.size()-1;
......
/*
* Copyright © 2004 Ondra Kamenik
* Copyright © 2019 Dynare Team
* Copyright © 2019-2022 Dynare Team
*
* This file is part of Dynare.
*
......@@ -51,7 +51,7 @@ namespace TLStatic
init(int dim, int nvar)
{
// Check that tensor indices will not overflow (they are stored as signed int, hence on 31 bits)
if (std::log2(nvar)*dim > std::numeric_limits<int>::digits)
if (std::log2(nvar)*dim >= std::numeric_limits<int>::digits)
throw TLException(__FILE__, __LINE__, "Problem too large, you should decrease the approximation order");
std::lock_guard<std::mutex>{mut};
......
/*
* This file shows how to use "system prior"-type prior restrictions as in
* Michal Andrle/Miroslav Plašil (2018): "Econometrics with system priors",
* Economics Letters, 172, pp. 134-137 during estimation based on
* the baseline New Keynesian model of Jordi Galí (2015): Monetary Policy, Inflation,
* and the Business Cycle, Princeton University Press, Second Edition, Chapter 3
*
* THIS MOD-FILE REQUIRES DYNARE 4.5 OR HIGHER
*
* Notes:
* - The estimation will automatically take Gali_2015_prior_restrictions.m into
* account, as that file has the required name and format
* - Estimation is based on simulated data
* - The file also shows how to use a prior/posterior-function
*
* This implementation was written by Johannes Pfeifer. In case you spot mistakes,
* email me at jpfeifer@gmx.de
*
* Please note that the following copyright notice only applies to this Dynare implementation of the model.
*/
/*
* Copyright (C) 2021 Dynare Team
*
* This file is part of Dynare.
*
* Dynare is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Dynare is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Dynare. If not, see <http://www.gnu.org/licenses/>.
*/
var pi ${\pi}$ (long_name='inflation')
y_gap ${\tilde y}$ (long_name='output gap')
y_nat ${y^{nat}}$ (long_name='natural output') //(in contrast to the textbook, defined here in deviation from steady state)
y ${y}$ (long_name='output')
yhat ${\hat y}$ (long_name='output deviation from steady state')
r_nat ${r^{nat}}$ (long_name='natural interest rate')
i ${i}$ (long_name='nominal interest rate')
n ${n}$ (long_name='hours worked')
nu ${\nu}$ (long_name='AR(1) monetary policy shock process')
a ${a}$ (long_name='AR(1) technology shock process')
z ${z}$ (long_name='AR(1) preference shock process')
p ${p}$ (long_name='price level')
w ${w}$ (long_name='nominal wage')
c ${c}$ (long_name='consumption')
;
varexo eps_a ${\varepsilon_a}$ (long_name='technology shock')
eps_nu ${\varepsilon_\nu}$ (long_name='monetary policy shock')
eps_z ${\varepsilon_z}$ (long_name='preference shock innovation')
;
parameters alppha ${\alpha}$ (long_name='capital share')
betta ${\beta}$ (long_name='discount factor')
rho_a ${\rho_a}$ (long_name='autocorrelation technology shock')
rho_nu ${\rho_{\nu}}$ (long_name='autocorrelation monetary policy shock')
rho_z ${\rho_{z}}$ (long_name='autocorrelation preference shock')
siggma ${\sigma}$ (long_name='inverse EIS')
varphi ${\varphi}$ (long_name='inverse Frisch elasticity')
phi_pi ${\phi_{\pi}}$ (long_name='inflation feedback Taylor Rule')
phi_y ${\phi_{y}}$ (long_name='output feedback Taylor Rule')
eta ${\eta}$ (long_name='semi-elasticity of money demand')
epsilon ${\epsilon}$ (long_name='demand elasticity')
theta ${\theta}$ (long_name='Calvo parameter')
;
%----------------------------------------------------------------
% Parametrization, p. 67 and p. 113-115
%----------------------------------------------------------------
siggma = 1;
varphi = 5;
phi_pi = 1.5;
phi_y = 0.125;
theta = 3/4;
rho_nu =0.5;
rho_z = 0.5;
rho_a = 0.9;
betta = 0.99;
eta = 3.77; %footnote 11, p. 115
alppha = 1/4;
epsilon = 9;
%----------------------------------------------------------------
% First Order Conditions
%----------------------------------------------------------------
model(linear);
//Composite parameters
#Omega=(1-alppha)/(1-alppha+alppha*epsilon); %defined on page 60
#psi_n_ya=(1+varphi)/(siggma*(1-alppha)+varphi+alppha); %defined on page 62
#lambda=(1-theta)*(1-betta*theta)/theta*Omega; %defined on page 61
#kappa=lambda*(siggma+(varphi+alppha)/(1-alppha)); %defined on page 63
[name='New Keynesian Phillips Curve eq. (22)']
pi=betta*pi(+1)+kappa*y_gap;
[name='Dynamic IS Curve eq. (23)']
y_gap=-1/siggma*(i-pi(+1)-r_nat)+y_gap(+1);
[name='Interest Rate Rule eq. (26)']
i=phi_pi*pi+phi_y*yhat+nu;
[name='Definition natural rate of interest eq. (24)']
r_nat=-siggma*psi_n_ya*(1-rho_a)*a+(1-rho_z)*z;
[name='Definition natural output, eq. (20)']
y_nat=psi_n_ya*a;
[name='Definition output gap']
y_gap=y-y_nat;
[name='Monetary policy shock']
nu=rho_nu*nu(-1)+eps_nu;
[name='TFP shock']
a=rho_a*a(-1)+eps_a;
[name='Production function (eq. 14)']
y=a+(1-alppha)*n;
[name='Preference shock, p. 54']
z=rho_z*z(-1) - eps_z;
[name='Output deviation from steady state']
yhat=y-steady_state(y);
[name='Definition price level']
pi=p-p(-1);
[name='resource constraint, eq. (12)']
y=c;
[name='FOC labor, eq. (2)']
w-p=siggma*c+varphi*n;
end;
shocks;
var eps_nu = 0.0025^2; //1 standard deviation shock of 25 basis points, i.e. 1 percentage point annualized
end;
% simulate data
stoch_simul(periods=100,drop=0,irf=0) yhat;
% save data
datatomfile('sim_data',{'yhat'});
estimated_params;
theta,0.75,beta_pdf,0.5,0.1;
betta, beta_pdf, 0.993, 0.002;
alppha, beta_pdf, 0.25, 0.02;
end;
varobs yhat;
% Run prior function to get prior slope of the PC based on independent priors
hh=figure('Name','Slope of the Phillips Curve');
prior_function(function='Gali_2015_PC_slope');
PC_slope_vec=cell2mat(oo_.prior_function_results(:,1));
optimal_bandwidth = mh_optimal_bandwidth(PC_slope_vec,length(PC_slope_vec),0,'gaussian');
[density(:,1),density(:,2)] = kernel_density_estimate(PC_slope_vec,512,length(PC_slope_vec),optimal_bandwidth,'gaussian');
figure(hh)
subplot(3,1,1)
plot(density(:,1),density(:,2));
title('Prior')
% Run estimation with 1 observation to show the effect of the _prior_restrictions.m file
% on the independent prior
estimation(datafile='sim_data',mode_compute=5,mh_replic=2001,mh_nblocks=1,diffuse_filter,nobs=1,mh_jscale=0.8);
posterior_function(function='Gali_2015_PC_slope');
PC_slope_vec=cell2mat(oo_.posterior_function_results(:,1));
optimal_bandwidth = mh_optimal_bandwidth(PC_slope_vec,length(PC_slope_vec),0,'gaussian');
[density(:,1),density(:,2)] = kernel_density_estimate(PC_slope_vec,512,length(PC_slope_vec),optimal_bandwidth,'gaussian');
figure(hh)
subplot(3,1,2)
plot(density(:,1),density(:,2));
title('Updated Prior')
% Run estimation with full observations
estimation(datafile='sim_data',mode_compute=5,mh_replic=2001,mh_nblocks=1,diffuse_filter,nobs=100,mh_jscale=0.8);
posterior_function(function='Gali_2015_PC_slope');
PC_slope_vec=cell2mat(oo_.posterior_function_results(:,1));
optimal_bandwidth = mh_optimal_bandwidth(PC_slope_vec,length(PC_slope_vec),0,'gaussian');
[density(:,1),density(:,2)] = kernel_density_estimate(PC_slope_vec,512,length(PC_slope_vec),optimal_bandwidth,'gaussian');
figure(hh)
subplot(3,1,3)
plot(density(:,1),density(:,2));
title('Posterior')
function output_cell = Gali_2015_PC_slope(xparam1,M_,options_,oo_,estim_params_,bayestopt_,dataset_,dataset_info)
% output_cell = Gali_2015_PC_slope(xparam1,M_,options_,oo_,estim_params_,bayestopt_,dataset_,dataset_info);
% This is an example file computing statistics on the prior/posterior draws. The
% function allows read-only access to all Dynare structures. However, those
% structures are local to this function. Changing them will not affect
% other Dynare functions and you cannot use them to pass results to other
% Dynare functions.
% The function takes one and only one output argument: a 1 by n cell.
% Using functions like cell2mat, the contents of the cell can be easily
% transformed back to matrices. See the fs2000_posterior_function.mod for
% an example
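% For instance, after the call
%    prior_function(function='Gali_2015_PC_slope');
% in the accompanying mod-file, the draws of the slope stored in this cell can
% be recovered as a vector with
%    PC_slope_vec = cell2mat(oo_.prior_function_results(:,1));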
% INPUTS
% xparam1 Current parameter draw
% M_ [structure] Matlab's structure describing the Model (initialized by dynare, see @ref{M_}).
% options_ [structure] Matlab's structure describing the options (initialized by dynare, see @ref{options_}).
% oo_ [structure] Matlab's structure gathering the results (initialized by dynare, see @ref{oo_}).
% estim_params_[structure] Matlab's structure describing the estimated_parameters (initialized by dynare, see @ref{estim_params_}).
% bayestopt_ [structure] Matlab's structure describing the parameter options (initialized by dynare, see @ref{bayestopt_}).
% dataset_ [structure] Matlab's structure storing the dataset
% dataset_info [structure] Matlab's structure storing the information about the dataset
% Output
% output_cell [1 by n cell] 1 by n Matlab cell that can store any
% desired computation or result (strings, matrices, structures, etc.)
% Copyright (C) 2021 Dynare Team
%
% This file is part of Dynare.
%
% Dynare is free software: you can redistribute it and/or modify
% it under the terms of the GNU General Public License as published by
% the Free Software Foundation, either version 3 of the License, or
% (at your option) any later version.
%
% Dynare is distributed in the hope that it will be useful,
% but WITHOUT ANY WARRANTY; without even the implied warranty of
% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
% GNU General Public License for more details.
%
% You should have received a copy of the GNU General Public License
% along with Dynare. If not, see <http://www.gnu.org/licenses/>.
%% store the slope based on the parameter draw
NumberOfParameters = M_.param_nbr;
for ii = 1:NumberOfParameters
paramname = deblank(M_.param_names{ii,:});
eval([ paramname ' = M_.params(' int2str(ii) ');']);
end
Omega=(1-alppha)/(1-alppha+alppha*epsilon);
lambda=(1-theta)*(1-betta*theta)/theta*Omega; %defined on page 61
output_cell{1,1}=lambda*(siggma+(varphi+alppha)/(1-alppha)); %defined on page 63
end
\ No newline at end of file
function log_prior_val=Gali_2015_prior_restrictions(M_, oo_, options_, dataset_, dataset_info)
% function log_prior_val=Gali_2015_prior_restrictions(M_, oo_, options_, dataset_, dataset_info)
% Example of a _prior_restrictions-file that is automatically called during
% estimation.
% It imposes a prior with mean 0.03 on the slope of the New Keynesian Phillips
% Curve. As the slope is a composite of other parameters with independent
% priors, a separate function is required to do this.
% Copyright (C) 2021 Dynare Team
%
% This file is part of Dynare.
%
% Dynare is free software: you can redistribute it and/or modify
% it under the terms of the GNU General Public License as published by
% the Free Software Foundation, either version 3 of the License, or
% (at your option) any later version.
%
% Dynare is distributed in the hope that it will be useful,
% but WITHOUT ANY WARRANTY; without even the implied warranty of
% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
% GNU General Public License for more details.
%
% You should have received a copy of the GNU General Public License
% along with Dynare. If not, see <https://www.gnu.org/licenses/>.
% read out parameters to access them with their name
NumberOfParameters = M_.param_nbr;
for ii = 1:NumberOfParameters
paramname = M_.param_names{ii};
eval([ paramname ' = M_.params(' int2str(ii) ');']);
end
Omega=(1-alppha)/(1-alppha+alppha*epsilon);
lambda=(1-theta)*(1-betta*theta)/theta*Omega; %defined on page 61
kappa=lambda*(siggma+(varphi+alppha)/(1-alppha)); %defined on page 63
prior_mean=0.03;
prior_std=0.02;
log_prior_val=log(normpdf(kappa,prior_mean,prior_std));
\ No newline at end of file
......@@ -169,7 +169,7 @@ algorithm, while the parameters initialized here are only set once for the initi
values of the parameters they depend on:
gammma1=mu_z*mu_I/betta-(1-delta);
R=1+(PIbar*mu_z/betta-1);
Rbar=1+(PIbar*mu_z/betta-1);
Lambdax=exp(LambdaYd);
LambdaYd= (LambdaA+alppha*Lambdamu)/(1-alppha);
*/
......
......@@ -65,6 +65,7 @@ Lambdax=mu_z;
%set the parameter gammma1
gammma1=mu_z*mu_I/betta-(1-delta);
if gammma1<0 % parameter violates restriction; preventing this via a prior restriction is not possible, as gammma1 is a composite of different parameters and the valid prior region has an unknown form
params=M_.params;
check=1; %set failure indicator
return; %return without updating steady states
end
......@@ -86,13 +87,20 @@ vp=(1-thetap)/(1-thetap*PI^((1-chi)*epsilon))*PIstar^(-epsilon);
vw=(1-thetaw)/(1-thetaw*PI^((1-chiw)*eta)*mu_z^eta)*PIstarw^(-eta);
tempvaromega=alppha/(1-alppha)*w/r*mu_z*mu_I;
try
%proper error handling for the case of an infeasible initial value, which would result in an error instead of a valid exitflag
[ld,fval,exitflag]=fzero(@(ld)(1-betta*thetaw*mu_z^(eta-1)*PI^(-(1-chiw)*(1-eta)))/(1-betta*thetaw*mu_z^(eta*(1+gammma))*PI^(eta*(1-chiw)*(1+gammma)))...
-(eta-1)/eta*wstar/(varpsi*PIstarw^(-eta*gammma)*ld^gammma)*((1-h*mu_z^(-1))^(-1)-betta*h*(mu_z-h)^(-1))*...
((mu_A*mu_z^(-1)*vp^(-1)*tempvaromega^alppha-tempvaromega*(1-(1-delta)*(mu_z*mu_I)^(-1)))*ld-vp^(-1)*Phi)^(-1),0.25,options);
catch
exitflag = 0;
end
if exitflag <1
%indicate the SS computation was not successful; this would also be detected by Dynare
%setting the indicator here shows how to use this functionality to
%filter out parameter draws
params=M_.params;
check=1; %set failure indicator
return; %return without updating steady states
end
......
/*
* This file shows how to solve an RBC model with two occasionally binding constraints:
* 1. The INEG constraint implements quadratic capital adjustment costs if investment
* falls below its steady state. If investment is above steady state, there are no
* adjustment costs.
* 2. The IRR constraint implements irreversible investment. Investment cannot be lower
* than a factor phi of its steady state.
*
* Notes:
* - This mod-file is based on an example originally provided by Luca Guerrieri
* and Matteo Iacoviello at https://www.matteoiacoviello.com/research_files/occbin_20140630.zip
* - The INEG constraint should theoretically be log_Invest-log(steady_state(Invest))<0, but this would lead
* to numerical issues. Instead, we require the difference to fall below a small negative threshold of -0.000001
*
* Please note that the following copyright notice only applies to this Dynare
* implementation of the model.
*/
/*
* Copyright (C) 2021 Dynare Team
*
* This is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* It is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* For a copy of the GNU General Public License,
* see <https://www.gnu.org/licenses/>.
*/
var A $A$ (long_name='TFP')
C $C$ (long_name='consumption')
Invest $I$ (long_name='investment')
K $K$ (long_name='capital')
Lambda $\lambda$ (long_name='Lagrange multiplier')
log_K ${\hat K}$ (long_name='log capital')
log_Invest ${\hat I}$ (long_name='log investment')
log_C ${\hat C}$ (long_name='log consumption')
;
varexo epsilon $\varepsilon$ (long_name='TFP shock');
parameters alpha $\alpha$ (long_name='capital share')
delta $\delta$ (long_name='depreciation')
beta $\beta$ (long_name='discount factor')
sigma $\sigma$ (long_name='risk aversion')
rho $\rho$ (long_name='autocorrelation TFP')
phi $\phi$ (long_name='irreversibility fraction of steady state investment')
psi $\psi$ (long_name='capital adjustment cost')
;
beta=0.96;
alpha=0.33;
delta=0.10;
sigma=2;
rho = 0.9;
phi = 0.975;
psi = 5;
model;
// 1.
[name='Euler', bind = 'INEG']
-C^(-sigma)*(1+2*psi*(K/K(-1)-1)/K(-1))+ beta*C(+1)^(-sigma)*((1-delta)-2*psi*(K(+1)/K-1)*
(-K(+1)/K^2)+alpha*exp(A(+1))*K^(alpha-1))= -Lambda+beta*(1-delta)*Lambda(+1);
[name='Euler', relax = 'INEG']
-C^(-sigma) + beta*C(+1)^(-sigma)*(1-delta+alpha*exp(A(+1))*K^(alpha-1))= -Lambda+beta*(1-delta)*Lambda(+1);
// 2.
[name='Budget constraint',bind = 'INEG']
C+K-(1-delta)*K(-1)+psi*(K/K(-1)-1)^2=exp(A)*K(-1)^(alpha);
[name='Budget constraint',relax = 'INEG']
C+K-(1-delta)*K(-1)=exp(A)*K(-1)^(alpha);
// 3.
[name='LOM capital']
Invest = K-(1-delta)*K(-1);
// 4.
[name='investment',bind='IRR,INEG']
(log_Invest - log(phi*steady_state(Invest))) = 0;
[name='investment',relax='IRR']
Lambda=0;
[name='investment',bind='IRR',relax='INEG']
(log_Invest - log(phi*steady_state(Invest))) = 0;
// 5.
[name='LOM TFP']
A = rho*A(-1)+epsilon;
// Definitions
[name='Definition log capital']
log_K=log(K);
[name='Definition log consumption']
log_C=log(C);
[name='Definition log investment']
log_Invest=log(Invest);
end;
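// Occasionally binding constraints (cf. the notes at the top of this file):
// - 'IRR' binds when investment falls below phi times its steady state and is relaxed
//   when the multiplier Lambda would turn negative;
// - 'INEG' binds whenever investment is (numerically) below its steady state.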
occbin_constraints;
name 'IRR'; bind log_Invest-log(steady_state(Invest))<log(phi); relax Lambda<0;
name 'INEG'; bind log_Invest-log(steady_state(Invest))<-0.000001; %not exactly 0 for numerical reasons
end;
steady_state_model;
K = ((1/beta-1+delta)/alpha)^(1/(alpha-1));
C = -delta*K +K^alpha;
Invest = delta*K;
log_K = log(K);
log_C = log(C);
log_Invest = log(Invest);
Lambda = 0;
A=0;
end;
shocks;
var epsilon; stderr 0.015;
end;
steady;
shocks(surprise);
var epsilon;
periods 1:9, 10, 50, 90, 130, 131:169;
values -0.0001, -0.01,-0.02, 0.01, 0.02, 0;
end;
occbin_setup;
occbin_solver(simul_periods=200,simul_check_ahead_periods=200);
occbin_graph log_C epsilon Lambda log_K log_Invest A;