\chapter{Solving DSGE models - advanced topics} \label{ch:soladv}
This chapter is a collection of topics - not all related to each other - that you will probably find interesting, or at least understandable, if you have read, and/or feel comfortable with, the earlier chapter \ref{ch:solbase} on the basics of solving DSGE models. To provide at least some consistency, this chapter is divided into three sections. \textbf{The first section} deals directly with features of Dynare, such as dealing with correlated shocks, finding and saving your output, using loops, referring to external files and dealing with infinite eigenvalues. \textbf{The second section} overviews some of the inner workings of Dynare. The goal is to provide a brief explanation of the files that are created by Dynare to help you in troubleshooting or provide a starting point in case you actually want to customize the way Dynare works. \textbf{The third section} of the chapter focuses on modeling tips optimized for Dynare, but possibly also helpful for other work.\\
\section{Dynare features and functionality}
\subsection{Other examples}
Other examples of .mod files used to generate impulse response functions are available on the Dynare website. In particular, Jesus Fernandez-Villaverde has provided a series of RBC model variants (from the most basic to some including variable capacity utilization, indivisible labor and investment specific technological change). You can find these, along with helpful notes and explanations, in the \href{http://www.dynare.org/documentation-and-support/examples}{Official Examples} section of the Dynare website.\\
Also, don't forget to check occasionally the \href{http://www.dynare.org/phpBB3}{Dynare contributions and examples forum} to see if any other user has posted an example that could help you in your work; or maybe you would like to post an example there yourself?
\subsection{Alternative, complete example}
The following example gives you an alternative, full-blown example to the one in chapter \ref{ch:solbase} with which to learn the workings of Dynare. It also gives you exposure to dealing with \textbf{several correlated shocks}: your model may have two or more shocks that are correlated with each other, and the example below illustrates how you would introduce this into Dynare.
\subsubsection{The model}
The model is a simplified standard RBC model taken from \citet{CollardJuillard2003} which served as the original User Guide for Dynare. \\
The economy consists of an infinitely lived representative agent who values consumption $c_t$ and labor services $h_t$ according to the following utility function
\[
\mathbb{E}_t \sum_{\tau=t}^\infty \beta^{\tau-t} \left( \log (c_t) - \theta \frac{h_t^{1+\psi}}{1+\psi} \right)
\]
where, as usual, the discount factor $0<\beta<1$, the disutility of labor $\theta > 0$ and the labor supply elasticity $\psi \geq 0$. \\
A social planner maximizes this utility function subject to the resource constraint
\[
c_t + i_t = y_t
\]
where $i_t$ is investment and $y_t$ output. Consumers are therefore also owners of the firms. The economy is a real economy, where part of output can be consumed and part invested to form physical capital. As is standard, the law of motion of capital is given by
\[
k_{t+1} = \exp (b_t)i_t + (1-\delta)k_t
\]
with $0<\delta<1$, where $\delta$ is physical depreciation and $b_t$ a shock affecting incorporated technological progress. \\
We assume output is produced according to a standard constant returns to scale technology of the form
\[
y_t = \exp (a_t)k_t^\alpha h_t^{1-\alpha}
\]
with $\alpha$ being the capital elasticity in the production function, with $0<\alpha<1$, and where $a_t$ represents a stochastic technological shock (or Solow residual). \\
Finally, we specify a \textbf{shock structure} that allows for shocks to display persistence across time and correlation in the current period. That is
\[
\left( \begin{array}{c}a_t \\
b_t \end{array} \right) = \left( \begin{array}{c c} \rho & \tau \\
\tau & \rho \end{array} \right) \left( \begin{array}{c}a_{t-1} \\
b_{t-1} \end{array} \right) + \left( \begin{array}{c}\epsilon_t \\
\nu_t \end{array} \right)
\]
where $|\rho + \tau|<1$ and $|\rho - \tau|<1$ to ensure stationarity (we call $\rho$ the coefficient of persistence and $\tau$ that of cross-persistence). Furthermore, we assume $\mathbb{E}_t (\epsilon_t)=0$, $\mathbb{E}_t (\nu_t)=0$ and that the contemporaneous variance-covariance matrix of the innovations $\epsilon_t$ and $\nu_t$ is given by
\[
\left( \begin{array}{c c} \sigma_\epsilon^2 & \psi \sigma_\epsilon \sigma_\nu \\
\psi \sigma_\epsilon \sigma_\nu & \sigma_\nu^2 \end{array} \right)
\]
and where $corr(\epsilon_t, \nu_s)=0$, $corr(\epsilon_t, \epsilon_s)=0$ and $corr(\nu_t, \nu_s)=0$ for all $t \neq s$. \\
This system - probably quite similar to standard RBC models you have run into - yields the following first order conditions (which are straightforward to reproduce in case you have doubts\ldots) and equilibrium conditions drawn from the description above. Note that the first equation captures the labor supply function and the second the intertemporal consumption Euler equation.
\[
\begin{aligned}
c_t \theta h_t^{1+\psi} = (1-\alpha)y_t \\
1= \beta \mathbb{E}_t \left[ \left( \frac{\exp(b_t)c_t}{\exp(b_{t+1})c_{t+1}} \right) \left( \exp(b_{t+1}) \alpha \frac{y_{t+1}}{k_{t+1}}+1-\delta \right) \right] \\
y_t = \exp (a_t)k_t^\alpha h_t^{1-\alpha} \\
k_{t+1} = \exp (b_t)i_t + (1-\delta)k_t \\
a_t = \rho a_{t-1} + \tau b_{t-1} + \epsilon_t \\
b_t = \tau a_{t-1} + \rho b_{t-1} + \nu_t
\end{aligned}
\]
\subsubsection{The .mod file}
To ``translate'' the model into a language understandable by Dynare, we would follow the steps outlined in chapter \ref{ch:solbase}. We will assume that you're comfortable with these and simply present the final .mod file below. First, though, note that to introduce shocks into Dynare, we have two options (this was not discussed in the earlier chapter). Either write:\\
\\
\texttt{shocks;\\
var e; stderr 0.009;\\
var u; stderr 0.009;\\
var e, u = phi*0.009*0.009;\\
end;}\\
\\
where the last line specifies the contemporaneous covariance between our two exogenous variables (equal to the correlation $\phi$ times the product of the two standard deviations). \\
Alternatively, you can also write: \\
\\
\texttt{shocks;\\
var e = 0.009\textasciicircum 2;\\
var u = 0.009\textasciicircum 2;\\
var e, u = phi*0.009*0.009;\\
end;}\\
So that you can gain experience by manipulating the entire model, here is the complete .mod file corresponding to the above example. You can find the corresponding file in the \textsl{models} folder under \textsl{UserGuide} in your installation of Dynare. The file is called \textsl{Alt\_Ex1.mod}. \\
\\
\\
\texttt{var y, c, k, a, h, b;\\
varexo e, u;\\
parameters beta, rho, alpha, delta, theta, psi, tau, phi;\\
\\
alpha = 0.36;\\
rho = 0.95;\\
tau = 0.025;\\
beta = 0.99;\\
delta = 0.025;\\
psi = 0;\\
theta = 2.95;\\
\\
phi = 0.1;\\
\\
model;\\
c*theta*h\textasciicircum (1+psi)=(1-alpha)*y;\\
k = beta*(((exp(b)*c)/(exp(b(+1))*c(+1)))\\
*(exp(b(+1))*alpha*y(+1)+(1-delta)*k));\\
y = exp(a)*(k(-1)\textasciicircum alpha)*(h\textasciicircum (1-alpha));\\
k = exp(b)*(y-c)+(1-delta)*k(-1);\\
a = rho*a(-1)+tau*b(-1) + e;\\
b = tau*a(-1)+rho*b(-1) + u;\\
end;\\
\\
initval;\\
y = 1.08068253095672;\\
c = 0.80359242014163;\\
h = 0.29175631001732;\\
k = 5;\\
a = 0;\\
b = 0;\\
e = 0;\\
u = 0;\\
end;\\
\\
shocks;\\
var e; stderr 0.009;\\
var u; stderr 0.009;\\
var e, u = phi*0.009*0.009;\\
end;\\
\\
stoch\_simul(periods=2100);}\\
\subsection{Finding, saving and viewing your output} \label{sec:FindOut}
Where is output stored? Most of the moments of interest are stored in the global variable \texttt{oo\_}. You can easily browse this global variable in Matlab, either by calling it in the command line or by using the workspace interface. In the global variable \texttt{oo\_} you will find the following (\textsf{\textbf{NOTE!}} variables will always appear in the order in which you declared them in the preamble block of your .mod file):
\begin{itemize}
\item \texttt{steady\_state}: the steady state of your variables
\item \texttt{mean}: the mean of your variables
\item \texttt{var}: the variance of your variables
\item \texttt{autocorr}: the various autocorrelation matrices of your variables. Each row of these matrices corresponds to a variable in time $t$, and the columns correspond to the same variables lagged one period for the first matrix, two periods for the second matrix, and so on. Thus, the matrix of autocorrelations that is automatically displayed in the results after running \texttt{stoch\_simul} has, running down each column, the diagonal elements of each of the various autocorrelation matrices described here.
\item \texttt{gamma\_y}: the matrices of autocovariances. \texttt{gamma\_y\{1\}} contains variances and contemporaneous covariances, while \texttt{gamma\_y\{2\}} contains autocovariances where the variables on each column are lagged by one period, and so on. By default, Dynare returns autocovariances up to lag 5. The last matrix (\texttt{gamma\_y\{7\}} in the default case) returns the \textbf{variance decomposition}, where each column captures the independent contribution of each shock to the variance of each variable.
\end{itemize}
Furthermore, if you decide to run impulse response functions, you will find a global variable \texttt{oo\_.irfs} containing vectors named \texttt{endogenous variable\_exogenous variable}, such as \texttt{y\_e}, which report the values of the endogenous variables along the impulse response function to an independent impulse in each exogenous shock. \\
To save your simulated variables, you can add the following command at the end of your .mod file: \texttt{dynasave (FILENAME) [variable names separated by commas]}. If no variable names are specified in the optional field, Dynare will save all endogenous variables. In Matlab, variables saved with the \texttt{dynasave} command can be retrieved by using the Matlab command \texttt{load -mat FILENAME}.
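To fix ideas, here is a brief sketch of how you might inspect these objects from the Matlab command window after running the .mod file above. The file name \texttt{simdata} is purely illustrative, and the exact field layout of \texttt{oo\_} (for instance, whether \texttt{autocorr} is indexed as a cell array) may vary slightly across Dynare versions:\\
\\
\texttt{\% in the Matlab command window, after Dynare has run\\
oo\_.steady\_state \% steady state values, in declaration order\\
oo\_.mean \% means of the endogenous variables\\
oo\_.var \% variance-covariance matrix\\
oo\_.autocorr\{1\} \% autocorrelation matrix at lag 1\\
oo\_.irfs.y\_e \% IRF of y to a shock in e (if IRFs were computed)\\
load -mat simdata \% retrieves variables saved earlier with dynasave}\\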
\subsection{Referring to external files}
You may find it convenient to refer to an external file, either to compute the steady state of your model, or when specifying shocks in an external file. The former is described in section \ref{sec:ssshock} of chapter \ref{ch:solbase} when discussing steady states. The advantage of using Matlab, say, to find your model's steady state was clear with respect to Dynare version 3, as the latter resorted to numerical approximations to find steady state values. But Dynare version 4 now uses the same analytical methods available in Matlab. For most usage scenarios, you should therefore do just as well to ask Dynare to compute your model's steady state (except, maybe, if you want to run loops, to vary your parameter values, for instance, in which case writing a Matlab program may be more handy).\\
But you may also be interested in the second possibility described above,
namely of specifying shocks in an external file, to simulate a model based on
shocks from a prior estimation, for instance. You could then retrieve the
exogenous shocks from the oo\_ file by saving them in a file called
datafile.mat. Finally, you could simulate a deterministic model with the shocks
saved from the estimation by specifying the source file for the shocks, using
the \\ \mbox{\texttt{initval\_file(filename = 'datafile.mat')}} command.
But of course, this is a bit of a workaround, since you could also use the built-in commands in Dynare to generate impulse response functions from estimated shocks, as described in chapter \ref{ch:estbase}. \\
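As a minimal sketch of this second possibility, the relevant lines of the deterministic .mod file might look as follows; the deterministic simulation command \texttt{simul} and the number of periods are illustrative assumptions, and \texttt{datafile.mat} is the file saved as described above:\\
\\
\texttt{// feed previously saved shocks into a deterministic simulation\\
initval\_file(filename = 'datafile.mat');\\
simul(periods = 200);}\\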
\subsection{Infinite eigenvalues}
If you use the command \texttt{check} in your .mod file, Dynare will report your system's eigenvalues and tell you if these meet the Blanchard-Kahn conditions. At that point, don't worry if you get infinite eigenvalues - these are firmly grounded in the theory of generalized eigenvalues and have no detrimental influence on the solution algorithm. As far as the Blanchard-Kahn conditions are concerned, infinite eigenvalues are counted as explosive roots of modulus larger than one. \\
\section{Files created by Dynare} \label{sec:dynfiles}
At times, you may get a message that there is an error in a file with a new name, or you may want to have a closer look at how Dynare actually solves your model - out of curiosity or maybe to do some customization of your own. You may therefore find it helpful to get a brief overview of the internal files that Dynare generates and the function of each one. \\
The Dynare pre-processor essentially performs three successive tasks:
(i) parsing of the .mod file (it checks that the .mod file is syntactically
correct) and its translation into an internal machine representation (in
particular, model equations are translated into expression trees); (ii) symbolic derivation of the model equations, up to the needed order
(depending on the computing needs); (iii) outputting of several files, which are then used from Matlab. If the .mod
file is ``filename.mod'', then the pre-processor creates the following
files:
\begin{itemize}
\item \textbf{filename.m}: a Matlab file containing several instructions, notably
the parameter initializations and the Matlab calls corresponding to the
computing tasks
\item \textbf{filename\_dynamic.m}: a Matlab file containing the model equations and
their derivatives (first, second and maybe third).
Endogenous variables (resp. exogenous variables, parameters) are
contained in a ``y'' (resp. ``x'', ``params'') vector, with an index number
depending on the declaration order.
The ``y'' vector has as many entries as there are (variable, lag)
pairs in the declared model.
The model equation residuals are stored in a vector named
``residual''.
The model Jacobian is stored in the ``g1'' matrix. Second (resp. third)
derivatives are in the ``g2'' matrix (resp. ``g3'').
If the ``use\_dll'' option has been specified in the model declaration,
the pre-processor will output a C file (with .c extension) rather than a
Matlab file. It is then compiled to create a library (DLL) file. Using a
compiled C file is supposed to give better computing performance in
model simulation/estimation.
\item \textbf{filename\_static.m}: a Matlab file containing the stationarized
version of the model (i.e. where lagged and lead variables are replaced by
current variables), with its Jacobian. It is used to compute the steady state.
Same notation as in the dynamic file. Replaced by a C file when the
``use\_dll'' option is specified.
\end{itemize}
\section{Modeling tips}
\subsection{Stationarizing your model}
Models in Dynare must be stationary, such that you can linearize them around a steady state and return to steady state after a shock. Thus, you must first stationarize your model, then linearize it, either by hand, or by letting Dynare do the work. You can then reconstruct ex-post the non-stationary simulated variables after running impulse response functions.\\
For deterministic models, the trick is to use only stationary variables in $t+1$. More generally, if $y_t$ is $I(1)$, you can always write $y_{t+1}$ as $y_t+dy_{t+1}$, where $dy_t= y_t-y_{t-1}$. Of course, you need to know the value of $dy_t$ at the final equilibrium.\\
Note that in a stationary model, it is expected that variables will eventually go back to steady state after the initial shock. If you expect to see a growing curve for a variable, you are thinking about a growth model. Because growth models are nonstationary, it is easier to work with the stationarized version of such models. Again, if you know the trend, you can always add it back after the simulation of the stationary components of the variables.
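For instance, a hypothetical fragment of a deterministic .mod file in which $y_t$ is $I(1)$ might rely on its stationary first difference as follows (the variable names are illustrative):\\
\\
\texttt{// dy is the stationary first difference of the non-stationary variable y\\
dy = y - y(-1);\\
// wherever y(+1) would appear in an equation, write y + dy(+1) instead}\\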
\subsection{Expectations taken in the past}
For instance, to enter the term $\mathbb{E}_{t-1}y_t$, define $s_t=\mathbb{E}_t[y_{t+1}]$ and then use $s(-1)$ in your .mod file. Note that, because of Jensen's inequality, you cannot do this for terms that enter your equation in a non-linear fashion. If you do have non-linear terms on which you want to take expectations in the past, you need to apply the above manipulation to the entire non-linear expression, treating it as the auxiliary variable, rather than applying it to $y_t$ alone.
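In .mod file terms, the linear case amounts to the following sketch, where $s$ must be declared as an additional endogenous variable:\\
\\
\texttt{// auxiliary variable: s(t) = E(t)[ y(t+1) ]\\
s = y(+1);\\
// then use s(-1) wherever E(t-1)[ y(t) ] is needed}\\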
\subsection{Infinite sums}
Dealing with infinite sums is tricky in general, and needs particular care when working with Dynare. The trick is to use a \textbf{recursive
representation} of the sum. For example, suppose your model included:
\[\sum^{\infty}_{j=0}\beta^j x_{t+j}=0, \]
Note that the above can also be written by using an auxiliary variable $S_t$, defined as:
\[S_t\equiv \sum^{\infty}_{j=0}\beta^j x_{t+j},\]
which can also be written in the following recursive manner:
\[S_t\equiv \sum^{\infty}_{j=0}\beta^j x_{t+j}=x_t +\sum^{\infty}_{j=1}\beta^j
x_{t+j}=x_t+\beta\sum^{\infty}_{j=0}\beta^j x_{t+1+j} \equiv x_t + \beta S_{t+1}\]
This formulation turns out to be useful in problems of the following
form:
\[\sum^{\infty}_{j=0}\beta^j x_{t+j}=p_t\sum^{\infty}_{j=0}\gamma^j y_{t+j}, \]
which can be written as a recursive system of the form:
\[S1_t=x_t+\beta S1_{t+1},\]
\[S2_t=y_t+\gamma S2_{t+1},\]
\[S1_t=p_t S2_t.\]
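In Dynare notation, this recursive system would simply be entered as the following sketch, with $x$, $y$, $p$, $S1$ and $S2$ all declared as endogenous variables and \texttt{gam} standing in for the parameter $\gamma$:\\
\\
\texttt{S1 = x + beta*S1(+1);\\
S2 = y + gam*S2(+1);\\
S1 = p*S2;}\\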
This is particularly helpful, for instance, in a \textbf{Calvo type setting}, as illustrated in the following brief example. The RBC model with monopolistic competition introduced in chapter \ref{ch:solbase} involved flexible prices. The extension with sticky prices, à la Calvo for instance, is instead typical of the new Keynesian monetary literature, exemplified by papers such as \citet{ClaridaGaliGertler1999}. \\
The optimal price for a firm resetting its price in period $t$, given that it will be able to reset its price only with probability $1-\theta$ each period, is
\[
p_t^*(i) = \mu + (1-\beta \theta) \sum_{k=0}^\infty (\beta \theta)^k \mathbb{E}_t [mc_{t+k}^n (i)]
\]
where $\mu$ is the markup, $\beta$ is a discount factor, $i$ represents a firm of the continuum between 0 and 1, and $mc_t$ is marginal cost as described in the example in chapter \ref{ch:solbase}. The trouble, of course, is \textbf{how to input this infinite sum into Dynare}? \\
It turns out that the Calvo price setting implies that the aggregate price follows the equation of motion $p_t = \theta p_{t-1} + (1-\theta) p_t^*$, thus implying the following inflation relationship $\pi_t = (1-\theta) (p_t^* - p_{t-1})$. Finally, we can also rewrite the optimal price setting equation, after some algebraic manipulations, as
\[
p_t^* - p_{t-1} = (1-\beta \theta) \sum_{k=0}^\infty (\beta \theta)^k \mathbb{E}_t [\widehat{mc}_{t+k}] + \sum_{k=0}^\infty (\beta \theta)^k \mathbb{E}_t [\pi_{t+k}]
\]
where $\widehat{mc}_{t+k} = mc_{t+k} + \mu$ is the deviation of the marginal cost from its natural rate, defined as the marginal cost when prices are perfectly flexible. \\
The trick now is to note that the above can be written recursively, by writing the right hand side as the first term of the sum (with $k=0$) plus the remainder of the sum, which can be written as the left hand side term scrolled forward one period and appropriately discounted. Mathematically, this yields:
\[
p_t^* - p_{t-1} = (1-\beta \theta) \widehat{mc}_{t} + \pi_t + \beta \theta \mathbb{E}_t [p_{t+1}^* - p_t]
\]
which has gotten rid of our infinite sum! That would be enough for Dynare, but for convenience, we can go one step further and write the above as
\[
\pi_t = \beta \mathbb{E}_t [\pi_{t+1}] + \lambda \widehat{mc}_{t}
\]
where $\lambda \equiv \frac{(1-\theta)(1-\beta\theta)}{\theta}$, which is the recognizable inflation equation in the new Keynesian (or new Neoclassical) monetary literature.
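In a .mod file, this last equation is all that is needed to capture the infinite sum. A sketch, with \texttt{pie} denoting inflation, \texttt{mc\_hat} the deviation of marginal cost from its natural rate, and \texttt{lambda} a declared parameter defined from \texttt{beta} and \texttt{theta}, could be:\\
\\
\texttt{pie = beta*pie(+1) + lambda*mc\_hat;}\\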
\subsection{Infinite sums with changing timing of expectations}
When the index of the expectations operator changes with each element of the sum, you are not able to write the sum recursively, and a different approach from the one mentioned above is necessary, as in the following example. \\
Suppose your model included the following sum:
\[
y_t=\sum_{j=0}^{\infty} \mathbb{E}_{t-j}x_t
\]
where $y_t$ and $x_t$ are endogenous variables. \\
In Dynare, the best way to handle this is to write out the first $k$ terms explicitly and enter each one in Dynare, such as: $\mathbb{E}_{t-1}x_t + \mathbb{E}_{t-2}x_t+\ldots + \mathbb{E}_{t-k}x_t$.
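For example, truncating the sum at $k=2$, a sketch of the corresponding .mod fragment could be the following, with \texttt{s1} and \texttt{s2} declared as auxiliary endogenous variables:\\
\\
\texttt{s1 = x(+1); // s1(-1) equals E(t-1)[ x(t) ]\\
s2 = x(+2); // s2(-2) equals E(t-2)[ x(t) ]\\
y = x + s1(-1) + s2(-2);}\\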
\chapter{Solving DSGE models - basics} \label{ch:solbase}
This chapter covers everything that leads to, and stems from, the solution of DSGE models; a vast terrain. That is to say that the term ``solution'' in the title of the chapter is used rather broadly. You may be interested in simply finding the solution functions to a set of first order conditions stemming from your model, but you may also want to go a bit further. Typically, you may be interested in how this system behaves in response to shocks, whether temporary or permanent. Likewise, you may want to explore how the system comes back to its steady state or moves to a new one. This chapter covers all these topics. But instead of skipping to the topic closest to your needs, we recommend that you read this chapter chronologically, to learn basic Dynare commands and the process of writing a proper .mod file - this will serve as a base to carry out any of the above computations.
\section{A fundamental distinction} \label{sec:distcn}
Before speaking of Dynare, it is important to recognize a distinction in model types. This distinction will appear throughout the chapter; in fact, it is so fundamental, that we considered writing separate chapters altogether. But the amount of common material - Dynare commands and syntax - is notable and writing two chapters would have been overly repetitive. Enough suspense; here is the important question: \textbf{is your model stochastic or deterministic?}\\
The distinction hinges on \textbf{whether future shocks are known}. In deterministic models, the occurrence of all future shocks is known exactly at the time of computing the model's solution. In stochastic models, instead, only the distribution of future shocks is known. Let's consider a shock to a model's innovation only in period 1. In a deterministic context, agents will take their decisions knowing that future values of the innovations will be zero in all periods to come. In a stochastic context, agents will take their decisions knowing that the future value of innovations are random but will have zero mean. This isn't the same thing because of Jensen's inequality. Of course, if you consider only a first order linear approximation of the stochastic model, or a linear model, the two cases become practically the same, due to certainty equivalence. A second order approximation will instead lead to very different results, as the variance of shocks will matter. \\
The solution method for each of these model types differs significantly. In deterministic models, a highly accurate solution can be found by numerical methods. The solution is nothing more than a series of numbers that match a given set of equations. Intuitively, if an agent has perfect foresight, she can specify today - at the time of making her decision - what each of her precise actions will be in the future. In a stochastic environment, instead, the best the agent can do is specify a decision, policy or feedback rule for the future: what will her optimal actions be contingent on each possible realization of shocks. In this case, we therefore search for a function satisfying the model's first order conditions. To complicate things, this function may be non-linear and thus needs to be approximated. In control theory, solutions of the first kind are usually called ``open loop'' solutions, and those of the second kind ``closed loop'' solutions.\\
Because this distinction will resurface again and again throughout the chapter, but also because it has been a source of significant confusion in the past, the following gives some additional details.
\subsection{\textsf{NOTE!} Deterministic vs stochastic models} \label{sec:detstoch}
\textbf{Deterministic} models have the following characteristics:
\begin{enumerate}
\item As the DSGE (read, ``stochastic'', i.e. not deterministic!) literature has gained attention in economics, deterministic models have become somewhat rare. Examples include OLG models without aggregate uncertainty.
\item These models are usually introduced to study the impact of a change in regime, as in the introduction of a new tax, for instance.
\item Models assume full information, perfect foresight and no uncertainty around shocks.
\item Shocks can hit the economy today or at any time in the future, in which case they would be expected with perfect foresight. They can also last one or several periods.
\item Most often, though, models introduce a positive shock today and zero shocks thereafter (with certainty).
\item The solution does not require linearization, in fact, it doesn't even really need a steady state. Instead, it involves numerical simulation to find the exact paths of endogenous variables that meet the model's first order conditions and shock structure.
\item This solution method can therefore be useful when the economy is far away from steady state (when linearization offers a poor approximation).
\end{enumerate}
\textbf{Stochastic} models, instead, have the following characteristics:
\begin{enumerate}
\item These types of models tend to be more popular in the literature. Examples include most RBC models, or new Keynesian monetary models.
\item In these models, shocks hit today (with a surprise), but thereafter their expected value is zero. Expected future shocks, or permanent changes in the exogenous variables cannot be handled due to the use of Taylor approximations around a steady state.
\item Note that when these models are linearized to the first order, agents behave as if future shocks were equal to zero (since their expectation is null), which is the \textbf{certainty equivalence property}. This point is often overlooked in the literature and can mislead readers into supposing that such models are deterministic.
\end{enumerate}
\section{Introducing an example}
The goal of this first section is to introduce a simple example. Future sections will aim to code this example into Dynare and analyze its salient features under the influence of shocks - both in a stochastic and a deterministic environment. Note that as a general rule, the examples in the basic chapters, \ref{ch:solbase} and \ref{ch:estbase}, are kept as bare as possible, with just enough features to help illustrate Dynare commands and functionalities. More complex examples are instead presented in the advanced chapters.\\
The model introduced here is a basic RBC model with monopolistic competition, used widely in the literature. The particular notation adopted below is drawn mostly from notes available on Jesus Fernandez-Villaverde's very instructive \href{http://www.econ.upenn.edu/~jesusfv/}{website}; this is a good place to look for additional information on any of the following model set-up and discussion. Note throughout this model description that the use of \textbf{expectation} signs is really only relevant in a stochastic setting, as per the earlier discussion. We will nonetheless illustrate both the stochastic and the deterministic settings on the basis of this example. Thus, when thinking of the latter, you'll have to use a bit of imagination (on top of that needed to think you have perfect foresight!) to ignore the expectation signs.\\
Households maximize utility over consumption, $c_t$ and leisure, $1-l_t$, where $l_t$ is labor input, according to the following utility function
\[
\mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \left[ \log c_t + \psi \log(1-l_t) \right]
\]
and subject to the following budget constraint
\[
c_t + k_{t+1}=w_t l_t + r_t k_t + (1-\delta)k_t, \qquad \forall t>0
\]
where $k_t$ is capital stock, $w_t$ real wages, $r_t$ real interest rates or cost of capital and $\delta$ the depreciation rate. \\
The above equation can be seen as an accounting identity, with total expenditures on the left hand side and revenues - including the liquidation value of the capital stock - on the right hand side. Alternatively, with a little more imagination, the equation can also be interpreted as a capital accumulation equation after bringing $c_t$ to the right hand side and noticing that $w_t l_t + r_t k_t$, total payments to factors, equals $y_t$, or aggregate output, by the zero profit condition. As a consequence, if we define investment as $i_t=y_t - c_t$, we obtain the intuitive result that $i_t=k_{t+1} - (1-\delta) k_{t}$, or that investment replenishes the capital stock thereby countering the effects of depreciation. In any given period, the consumer therefore faces a tradeoff between consuming and investing in order to increase the capital stock and consuming more in following periods (as we will see later, production depends on capital).\\
Maximization of the household problem with respect to consumption, leisure and capital stock yields the Euler equation in consumption, capturing the intertemporal tradeoff mentioned above, and the labor supply equation linking labor positively to wages and negatively to consumption (the wealthier the household, the more leisure it takes, due to the decreasing marginal utility of consumption). These equations are
\[
\frac{1}{c_t}=\beta \mathbb{E}_t \left[ \frac{1}{c_{t+1}} \left( 1 + r_{t+1} - \delta \right) \right]
\]
and
\[
\psi \frac{c_t}{1-l_t}= w_t
\]
The firm side of the problem is slightly more involved, due to monopolistic competition, but is presented below in the simplest possible terms, with a little hand-waving involved, as the derivations are relatively standard. \\
There are two ways to introduce monopolistic competition. We can either assume that firms sell differentiated varieties of a good to consumers who aggregate these according to a CES index. Or we can postulate that there is a continuum of intermediate producers with market power who each sell a different variety to a competitive final goods producer whose production function is a CES aggregate of intermediate varieties.\\
If we follow the second route, the final goods producer chooses his or her optimal demand for each variety, yielding the Dixit-Stiglitz downward sloping demand curve. Intermediate producers, instead, face a two pronged decision: how much labor and capital to employ given these factors' perfectly competitive prices and how to price the variety they produce.\\
Production of intermediate goods follows a CRS production function defined as
\[
y_{it} = k_{it}^\alpha (e^{z_t} l_{it})^{1-\alpha}
\]
where the $i$ subscript stands for firm $i$ of a continuum of firms between zero and one and where $\alpha$ is the capital elasticity in the production function, with $0<\alpha<1$. Also, $z_t$ captures technology which evolves according to
\[
z_t = \rho z_{t-1} + e_t
\]
where $\rho$ is a parameter capturing the persistence of technological progress and $e_t \thicksim \mathcal{N}(0,\sigma)$. \\
The solution to the sourcing problem yields an optimal capital to labor ratio, or relationship between payments to factors:
\[
k_{it}r_t=\frac{\alpha}{1-\alpha}w_tl_{it}
\]
The solution to the pricing problem, instead, yields the well-known constant markup pricing condition of monopolistic competition:
\[
p_{it}=\frac{\epsilon}{\epsilon -1}mc_t p_t
\]
where $p_{it}$ is firm $i$'s specific price, $mc_t$ is real marginal cost and $p_t$ is the aggregate CES price or average price. An additional step simplifies this expression: the symmetry of firms implies that all firms charge the same price and thus $p_{it}=p_t$; we therefore have $mc_t = (\epsilon - 1)/\epsilon$. \\
But what are marginal costs equal to? To find the answer, we combine the optimal capital to labor ratio into the production function and take advantage of its CRS property to solve for the amount of labor or capital required to produce one unit of output. The real cost of using this amount of any one factor is given by $w_tl_{it} + r_tk_{it}$ where we substitute out the payments to the other factor using again the optimal capital to labor ratio. When solving for labor, for instance, we obtain
\[
mc_t = \left( \frac{1}{1-\alpha} \right)^{1-\alpha} \left( \frac{1}{\alpha} \right)^\alpha \frac{1}{A_t}w_t^{1-\alpha} r_t^\alpha
\]
which does not depend on $i$; it is thus the same for all firms. \\
Interestingly, the above can be worked out, by using the optimal capital to labor ratio, to yield $w_t [(1-\alpha)y_{it}/l_{it}]^{-1}$, or $w_t \frac{\partial l_{it}}{\partial y_{it}}$, which is the definition of marginal cost: the cost in terms of labor input of producing an additional unit of output. This should not be a surprise since the optimal capital to labor ratio follows from the maximization of the production function (minus real costs) with respect to its factors. \\
Combining this result for marginal cost, as well as its counterpart in terms of capital, with the optimal pricing condition yields the final two important equations of our model
\[
w_t = (1-\alpha) \frac{y_{it}}{l_{it}} \frac{(\epsilon-1)}{\epsilon}
\]
and
\[
r_t = \alpha \frac{y_{it}}{k_{it}} \frac{(\epsilon-1)}{\epsilon}
\]
To end, we aggregate the production of each individual firm to find an aggregate production function. On the supply side, we factor out the capital to labor ratio, $k_t/l_t$, which is the same for all firms and thus does not depend on $i$. On the demand side, we have the Dixit-Stiglitz demand for each variety. By equating the two and integrating both sides, and noting that price dispersion is null - or that, as hinted earlier, $p_{it}=p_t$ - we obtain aggregate production
\[
y_t = A_t k_t^\alpha l_t^{1-\alpha}
\]
which can be shown to be equal to the aggregate amount of varieties bought by the final good producer (according to a CES aggregation index) and, in turn, equal to the aggregate output of the final good, itself equal to household consumption plus investment. Note, to close, that because the ratio of output to each factor is the same for each intermediate firm and that firm specific as well as aggregate production is CRS, we can rewrite the above two equations for $w_t$ and $r_t$ without the $i$ subscripts on the right hand side. \\
This ends the exposition of the example. Now, let's roll up our sleeves and see how we can input the model into Dynare and actually test how the model will respond to shocks.
\section{Dynare .mod file structure}
Input into Dynare involves the .mod file, as mentioned loosely in the introduction of this Guide. The .mod file can be written in any editor, external or internal to Matlab. It is then executed by first navigating within Matlab to the directory where the .mod file is stored and then typing \texttt{dynare filename.mod;} in the Matlab command line (although actually typing the extension .mod is not necessary). But before we get into executing a .mod file, let's start by writing one! \\
It is convenient to think of the .mod file as containing five distinct blocks, illustrated in figure \ref{fig:modstruct}:
\begin{itemize}
\item \textbf{preamble}: lists variables and parameters
\item \textbf{model}: spells out the model
\item \textbf{steady state or initial value}: gives indications to find the steady state of a model, or the starting point for simulations or impulse response functions based on the model's solution.
\item \textbf{shocks}: defines the shocks to the system
\item \textbf{computation}: instructs Dynare to undertake specific operations (e.g. forecasting, estimating impulse response functions)
\end{itemize}
Our exposition below will be structured according to each of these blocks.
\begin{figure}
\begin{center}
\includegraphics[width=1.0\textwidth]{P_ModStruct5}
\end{center}
\caption[Structure of the .mod file]{The .mod file contains five logically distinct parts.}
\label{fig:modstruct}
\end{figure}
\section{Filling out the preamble} \label{sec:preamble}
The preamble generally involves three commands that tell Dynare which variables are endogenous, which are exogenous, and what the parameters are. The \textbf{commands} are:
\begin{itemize}
\item \texttt{var} starts the list of endogenous variables, to be separated by commas.
\item \texttt{varexo} starts the list of exogenous variables that will be shocked.
\item \texttt{parameters} starts the list of parameters and assigns values to each.
\end{itemize}
In the case of our example, let's differentiate between the stochastic and deterministic cases. First, we lay these out, then we discuss them.
\subsection{The deterministic case}
The model is inherited exactly as specified in the earlier description, except that we no longer need the $e_t$ variable, as we can make $z_t$ directly exogenous. Thus, the \textbf{preamble would look like}:\\
\\
\texttt{var y c k i l y\_l w r;\\
varexo z;\\
parameters beta psi delta alpha sigma epsilon;\\
alpha = 0.33;\\
beta = 0.99;\\
delta = 0.023;\\
psi = 1.75;\\
sigma = (0.007/(1-alpha));\\
epsilon = 10;}\\
\subsection{The stochastic case}
In this case, we go back to considering the law of motion for technology, consisting of an exogenous shock, $e_t$. With respect to the above, we therefore adjust the list of endogenous and exogenous variables, and add the parameter $\rho$. Here's what the \textbf{preamble would look like}:\\
\\
\texttt{var y c k i l y\_l w r z;\\
varexo e;\\
parameters beta psi delta alpha rho sigma epsilon;\\
alpha = 0.33;\\
beta = 0.99;\\
delta = 0.023;\\
psi = 1.75;\\
rho = 0.95; \\
sigma = (0.007/(1-alpha));\\
epsilon = 10;}\\
\subsection{Comments on your first lines of Dynare code}
As you can tell, writing a .mod file is really quite straightforward. Two quick comments:\\
\textsf{\textbf{NOTE!}} Remember that each instruction of the .mod file must be terminated by a semicolon (;), although a single instruction can span two lines if you need extra space (just don't put a semicolon at the end of the first line).\\
\textsf{\textbf{TIP!}} You can also comment out any line by starting the line with two forward slashes (//), or comment out an entire section by starting the section with /* and ending with */. For example:\\
\\
\texttt{var y c k i l y\_l w r z;\\
varexo e;\\
parameters beta psi delta \\
alpha rho sigma epsilon;\\
// the above instruction reads over two lines\\
/*\\
the following section lists\\
several parameters which were\\
calibrated by my co-author. Ask\\
her all the difficult questions!\\
*/\\
alpha = 0.33;\\
beta = 0.99;\\
delta = 0.023;\\
psi = 1.75;\\
rho = 0.95; \\
sigma = (0.007/(1-alpha));\\
epsilon = 10;}\\
\\
\section{Specifying the model} \label{sec:modspe}
\subsection{Model in Dynare notation}
One of the beauties of Dynare is that you can \textbf{input your model's equations naturally}, almost as if you were writing them in an academic paper. This greatly facilitates the sharing of your Dynare files, as your colleagues will be able to understand your code in no time. There are just a few conventions to follow. Let's first have a look at our \textbf{model in Dynare notation}, and then go through the various Dynare input conventions. What you can already try to do is glance at the model block below and see if you can recognize the equations from the earlier example. See how easy it is to read Dynare code? \\
\\
\texttt{model;\\
(1/c) = beta*(1/c(+1))*(1+r(+1)-delta);\\
psi*c/(1-l) = w;\\
c+i = y;\\
y = (k(-1)\textasciicircum alpha)*(exp(z)*l)\textasciicircum (1-alpha);\\
w = y*((epsilon-1)/epsilon)*(1-alpha)/l;\\
r = y*((epsilon-1)/epsilon)*alpha/k(-1);\\
i = k-(1-delta)*k(-1);\\
y\_l = y/l;\\
z = rho*z(-1)+e;\\
end;}\\
Just in case you need a hint or two to recognize these equations, here's a brief description: the first equation is the Euler equation in consumption, the second the labor supply function, the third the accounting identity, the fourth the production function, the fifth and sixth the markup pricing conditions for wages and the rental rate, the seventh the investment identity, the eighth an identity (output per unit of labor) that may be useful, and the last the equation of motion of technology.
\textsf{\textbf{NOTE!}} that the above model specification corresponds to the \textbf{stochastic case}; indeed, notice that the law of motion for technology is included, as per our discussion of the preamble. The corresponding model for the \textbf{deterministic case} would simply drop the last equation.
\subsection{General conventions}
The above example illustrates the use of a few important commands and conventions to translate a model into a Dynare-readable .mod file.
\begin{itemize}
\item The first thing to notice, is that the model block of the .mod file begins with the command \texttt{model} and ends with the command \texttt{end}.
\item Second, in between, there need to be as many equations as you declared endogenous variables (this is actually one of the first things that Dynare checks; it will immediately let you know if there are any problems).
\item Third, as in the preamble and everywhere along the .mod file, each line of instruction ends with a semicolon (except when a line is too long and you want to break it across two lines. This is unlike Matlab where if you break a line you need to add \ldots).
\item Fourth, equations are entered one after the other; no matrix representation is necessary. Note that variable and parameter names used in the model block must be the same as those declared in the preamble; \textsf{\textbf{TIP!}} remember that variable and parameter names are case sensitive.
\end{itemize}
\subsection{Notational conventions}
\begin{itemize}
\item Variables entering the system with a time $t$ subscript are written plainly. For example, $x_t$ would be written $x$.
\item Variables entering the system with a time $t-n$ subscript are written with $(-n)$ following them. For example, $x_{t-2}$ would be written $x(-2)$ (incidentally, this would count as two backward looking variables).
\item In the same way, variables entering the system with a time $t+n$ subscript are written with $(+n)$ following them. For example, $x_{t+2}$ would be written $x(+2)$. Writing $x(2)$ is also allowed, but this notation makes it slightly harder to count by hand the number of forward looking variables (a useful measure to check); more on this below \ldots
\end{itemize}
\subsection{Timing conventions}
\begin{itemize}
\item In Dynare, the timing of each variable reflects when that variable is decided. For instance, our capital stock is not decided today, but yesterday (recall that it is a function of yesterday's investment and capital stock); it is what we call in the jargon a \textbf{predetermined} variable. Thus, even though in the example presented above we wrote $k_{t+1}=i_t + (1-\delta)k_t$, as in many papers, we would translate this equation into Dynare as \texttt{k=i+(1-delta)*k(-1)}.
\item As another example, consider that in some wage negotiation models, wages used during a period are set the period before. Thus, in the equation for wages, you can write wages in period $t$ (when they are set), but in the labor demand equation, wages should appear with a one period lag.
\item A slightly more roundabout way to explain the same thing is that for stock variables, you must use a ``stock at the end of the period'' concept. It is investment during period $t$ that sets stock at the end of period $t$. Be careful, a lot of papers use the ``stock at the beginning of the period'' convention, as we did (on purpose to highlight this distinction!) in the setup of the example model above.
\end{itemize}
\subsection{Conventions specifying non-predetermined variables}
\begin{itemize}
\item A (+1) next to a variable tells Dynare to count the occurrence of that variable as a jumper or forward-looking or non-predetermined variable.
\item \textbf{Blanchard-Kahn} conditions are met only if the number of non-predetermined variables equals the number of eigenvalues greater than one in modulus. If this condition is not met, Dynare will issue a warning.
\item Note that a variable may occur both as predetermined and non-predetermined. For instance, consumption could appear with a lead in the Euler equation, but also with a lag in a habit formation equation, if you had one. In this case, the second order difference equation would have two eigenvalues, one needing to be greater and the other smaller than one for stability.
\end{itemize}
\subsection{Linear and log-linearized models}
There are two other variants of the system's equations which Dynare accommodates. First, the \textbf{linear model} and second, the \textbf{model in exp-logs}. In the first case, all that is necessary is to write the term \texttt{(linear)} next to the command \texttt{model}. Our example, with just the equation for $y_l$ for illustration, would look like:\\
\\
\texttt{model (linear);\\
yy\_l=yy - ll;\\
end;}\\
\\
where repeating a letter for a variable means difference from steady state.\\
Otherwise, you may be interested to have Dynare take Taylor series expansions in logs rather than in levels; this turns out to be a very useful option when estimating models with unit roots, as we will see in chapter \ref{ch:estbase}. If so, simply rewrite your equations by taking the exponential and logarithm of each variable. The Dynare input convention makes this very easy to do. Our example would need to be re-written as follows (just shown for the first two equations)\\
\\
\texttt{model;\\
(1/exp(cc)) = beta*(1/exp(cc(+1)))*(1+exp(rr(+1))-delta);\\
psi*exp(cc)/(1-exp(ll)) = exp(ww);\\
end;}\\
\\
where, this time, repeating a letter for a variable means the log of that variable, so that the level of a variable is recovered as $\exp$ of the repeated variable.
\section{Specifying steady states and/or initial values} \label{sec:ssshock}
Material in this section has created much confusion in the past. But with some attention to the explanations below, you should get through unscathed. Let's start by emphasizing the uses of this section of the .mod file. First, recall that stochastic models need to be linearized. Thus, they need to have a steady state. One of the functions of this section is indeed to provide these steady state values, or approximations of values. Second, irrespective of whether you're working with a stochastic or deterministic model, you may be interested to start your simulations or impulse response functions from either a steady state, or another given point. This section is also useful to specify this starting value. Let's see in more details how all this works.\\
In passing, though, note that the relevant commands in this section are \texttt{initval}, \texttt{endval} or, more rarely, \texttt{histval} which is covered only in the \href{http://www.dynare.org/documentation-and-support/manual}{Reference Manual}. The first two are instead covered in what follows. \\
\subsection{Stochastic models and steady states}
In a stochastic setting, your model will need to be linearized before it is solved. To do so, Dynare needs to know your model's steady state (more details on finding a steady state, as well as tips to do so more efficiently, are provided in section \ref{sec:findsteady} below). You can either enter exact steady state values into your .mod file, or just approximations and let Dynare find the exact steady state (which it will do using numerical methods based on your approximations). In either case, these values are entered in the \texttt{initval} block, as in the following fashion: \\
\\
\texttt{initval;\\
k = 9;\\
c = 0.7;\\
l = 0.3;\\
w = 2.0;\\
r = 0;\\
z = 0; \\
e = 0;\\
end;}
\\
Then, by using the command \texttt{steady}, you can control whether you want to start your simulations or impulse response functions from the steady state, or from the exact values you specified in the \texttt{initval} block. Adding \texttt{steady} just after your \texttt{initval} block will instruct Dynare to consider your initial values as mere approximations and start simulations or impulse response functions from the exact steady state. On the contrary, if you don't add the command \texttt{steady}, your simulations or impulse response functions will start from your initial values, even if Dynare will have calculated your model's exact steady state for the purpose of linearization. \\
For the case in which you would like simulations and impulse response functions to begin at the steady state, the above block would be expanded to yield:\\
\\
\texttt{initval;\\
k = 9;\\
c = 0.7;\\
l = 0.3;\\
w = 2.0;\\
r = 0;\\
z = 0; \\
e = 0;\\
end;\\
\\
steady;}
\\
\textsf{\textbf{TIP!}} If you're dealing with a stochastic model, remember that its linear approximation is good only in the vicinity of the steady state, thus it is strongly recommended that you start your simulations from a steady state; this means either using the command \texttt{steady} or entering exact steady state values. \\
\subsection{Deterministic models and initial values}
Deterministic models do not need to be linearized in order to be solved. Thus, technically, you do not need to provide a steady state for these models. But practically, most researchers are still interested in seeing how a model reacts to shocks when it starts in steady state. In the deterministic case, the \texttt{initval} block serves very similar functions to those described above. If you wanted to shock your model starting from a steady state value, you would enter approximate (or exact) steady state values in the \texttt{initval} block, followed by the command \texttt{steady}. Otherwise, if you wanted to begin your solution path from an arbitrary point, you would enter those values in your \texttt{initval} block and not use the \texttt{steady} command. An illustration of the \texttt{initval} block in the deterministic case appears further below. \\
\subsection{Finding a steady state} \label{sec:findsteady}
The difficulty in the above, of course, is calculating actual steady state values. Doing so borders on a form of art, and luck is unfortunately part of the equation. Yet, the following \textsf{\textbf{TIPS!}} may help.\\
As mentioned above, Dynare can help in finding your model's steady state by calling the appropriate Matlab functions. But it is usually only successful if the initial values you entered are close to the true steady state. If you have trouble finding the steady state of your model, you can begin by playing with the \textbf{options following the \texttt{steady} command} (a usage sketch follows the list below). These are:
\begin{itemize}
\item \texttt{solve\_algo = 0}: uses Matlab Optimization Toolbox FSOLVE
\item \texttt{solve\_algo = 1}: uses Dynare's own nonlinear equation solver
\item \texttt{solve\_algo = 2}: splits the model into recursive blocks and solves each block in turn.
\item \texttt{solve\_algo = 3}: uses the Sims solver. This is the default option if none are specified.
\end{itemize}
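For instance, the block-recursive solver would be selected by writing:\\
\\
\texttt{steady(solve\_algo = 2);}\\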
For complicated models, finding suitable initial values for the endogenous variables is the trickiest part of finding the equilibrium of that model. Often, it is better to start with a smaller model and add new variables one by one.\\
But even for simpler models, you may still run into difficulties in finding your steady state. If so, another option is to \textbf{enter your model in linear terms}. In this case, variables would be expressed in percent deviations from steady state. Thus, their initial values would all be zero. Unfortunately, if any of your original (non-linear) equations involve sums (a likely fact), your linearized equations will include ratios of steady state values, which you would still need to calculate. Yet, you may be left needing to calculate fewer steady state values than in the original, non-linear, model. \\
Alternatively, you could also use an \textbf{external program to calculate exact steady state values}. For instance, you could write an external \textbf{Maple} file and then enter the steady state solution by hand in Dynare. But of course, this procedure could be time consuming and bothersome, especially if you want to alter parameter values (and thus steady states) to undertake robustness checks. \\
The alternative is to write a \textbf{Matlab} program to find your model's steady state. Doing so has the clear advantage of being able to incorporate your Matlab program directly into your .mod file so that running loops with different parameter values, for instance, becomes seamless. \textsf{\textbf{NOTE!}} When doing so, your Matlab (.m) file should have the same name as your .mod file, followed by \texttt{\_steadystate}. For instance, if your .mod file is called \texttt{example.mod}, your Matlab file should be called \texttt{example\_steadystate.m} and should be saved in the same directory as your .mod file. Dynare will automatically check the directory where you've saved your .mod file to see if such a Matlab file exists. If so, it will use that file to find steady state values regardless of whether you've provided initial values in your .mod file. \\
Because Matlab does not work with analytical expressions, though (unless you're working with a particular toolbox), you need to do a little work to write your steady state program. It is not enough to simply input the equations as you've written them in your .mod file and ask Matlab to solve the system. You will instead need to write your steady state program as if you were solving for the steady state by hand. That is, you need to input your expressions sequentially, whereby each left-hand side variable is written in terms of known parameters or variables already solved in the lines above. For example, the steady state file corresponding to the above example, in the stochastic case, would be: (** example file to be added shortly) \\
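In the meantime, the following rough sketch illustrates the sequential logic for the stochastic example of this chapter. The function signature shown (returning a vector \texttt{ys} of steady state values in declaration order, plus a \texttt{check} flag) is one common convention, so please verify the exact interface expected by your version of Dynare in the \href{http://www.dynare.org/documentation-and-support/manual}{Reference Manual}; the algebra simply retraces the model's first order conditions in steady state:\\
\\
\texttt{function [ys, check] = example\_steadystate(ys, exo)\\
\% hand-written steady state for the stochastic example (sketch)\\
check = 0;\\
\% parameter values, repeated from the preamble of the .mod file\\
alpha = 0.33; beta = 0.99; delta = 0.023; psi = 1.75; epsilon = 10;\\
mk = (epsilon-1)/epsilon; \% inverse of the gross markup\\
z = 0; \% technology at its mean\\
r = 1/beta - 1 + delta; \% from the Euler equation\\
ky = alpha*mk/r; \% capital-output ratio, from the rental rate equation\\
y\_l = ky\textasciicircum (alpha/(1-alpha)); \% output per unit of labor, from the production function\\
w = (1-alpha)*mk*y\_l; \% from the wage equation\\
cy = 1 - delta*ky; \% consumption-output ratio, since i = delta*k in steady state\\
l = w/(w + psi*cy*y\_l); \% from the labor supply condition psi*c/(1-l) = w\\
y = y\_l*l; k = ky*y; i = delta*k; c = cy*y;\\
\% stack the values in declaration order: y c k i l y\_l w r z\\
ys = [y; c; k; i; l; y\_l; w; r; z];}\\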
\subsection{Checking system stability}
\textsf{\textbf{TIP!}} A handy command that you can add after the \texttt{initval} or \texttt{endval} block (following the \texttt{steady} command if you decide to add one) is the \texttt{check} command. This \textbf{computes and displays the eigenvalues of your system} which are used in the solution method. As mentioned earlier, a necessary condition for the uniqueness of a stable equilibrium in the neighborhood of the steady state is that there are
as many eigenvalues larger than one in modulus as there are forward looking variables in the system. If this condition is not met, Dynare will tell you that the Blanchard-Kahn conditions are not satisfied (whether or not you insert the \texttt{check} command). \\
\section{Adding shocks}
\begin{comment}
\begin{figure} \label{fig:shockmodel}
\begin{center}
\includegraphics[width=1.0\textwidth]{P_ShockModel2}
\end{center}
\caption[Shock and model-type matrix]{Depending on the model type you're working with and the desired shocks, you will need to mix and match the various steady state and shock commands.}
\end{figure}
\end{comment}
\subsection{Deterministic models - temporary shocks}
When working with a deterministic model, you have the choice of introducing both temporary and permanent shocks. The distinction is that under a temporary shock, the model eventually comes back to steady state, while under a permanent shock, the model reaches a new steady state. In both cases, though, the shocks are entirely expected, as explained in our original discussion on stochastic and deterministic models. \\
To work with a \textbf{temporary shock}, you are free to set the duration and level of the shock. To specify a shock that lasts 9 periods on $z_t$, for instance, you would write:\\
\\
\texttt{shocks;\\
var z;\\
periods 1:9;\\
values 0.1;\\
end;}\\
Given the above instructions, Dynare would replace the value of $z_t$ specified in the \texttt{initval} block with the value of 0.1 entered above. If variables were in logs, this would have corresponded to a 10\% shock. Note that you can also use the \texttt{mshocks} command which multiplies the initial value of an exogenous variable by the \texttt{mshocks} value. Finally, note that we could have entered future periods in the shocks block, such as \texttt{periods 5:10}, in order to study the anticipatory behavior of agents in response to future shocks.\\
\subsection{Deterministic models - permanent shocks}
To study the effects of a \textbf{permanent shock} hitting the economy today, such as a structural change in your model, you would not specify actual ``shocks'', but would simply tell the system to which (steady state) values you would like it to move and let Dynare calculate the transition path. To do so, you would use the \texttt{endval} block following the usual \texttt{initval} block. For instance, you may specify all values to remain common between the two blocks, except for the value of technology which you may presume changes permanently. The corresponding instructions would be:\\
\\
\texttt{initval;\\
k = 9;\\
c = 0.7;\\
l = 0.3;\\
w = 2.0;\\
r = 0;\\
z = 0; \\
end;\\
steady;\\
\\
endval;\\
k = 9;\\
c = 0.7;\\
l = 0.3;\\
w = 2.0;\\
r = 0;\\
z = 0.1; \\
end;\\
steady;}\\
\\
where \texttt{steady} can also follow the \texttt{endval} block, and serves the same purpose as described earlier, namely telling Dynare to start and/or end at a steady state close to the values you entered. If you do not use \texttt{steady} after \texttt{endval}, and the latter does not list exact steady state values, you may impose on your system that it does not return to a steady state; this is unusual. In that case, your problem becomes a so-called two point boundary value problem whose solution requires the path of your endogenous variables to pass through the steady state closest to your \texttt{endval} values. In our example, we make use of the second \texttt{steady} since the actual terminal steady state values are bound to be somewhat different from those entered above, which are nothing but the initial values for all variables except technology.\\
In the above example, the value of technology would move to 0.1 in period 1 (tomorrow) and thereafter. But of course, the other variables - the endogenous variables - will take longer to reach their new steady state values. \textsf{\textbf{TIP!}} If you instead wanted to study the effects of a permanent but future shock (anticipated as usual), you would have to add a \texttt{shocks} block after the \texttt{endval} block to ``undo'' the first several periods of the permanent shock. For instance, suppose you wanted the value of technology to move to 0.1, but only in period 10. Then you would follow the above \texttt{endval} block with:\\
\\
\texttt{shocks;\\
var z;\\
periods 1:9;\\
values 0;\\
end;}\\
\subsection{Stochastic models}
Recall from our earlier description of stochastic models that shocks are only allowed to be temporary. A permanent shock cannot be accommodated due to the need to stationarize the model around a steady state. Furthermore, shocks can only hit the system today, as the expectation of future shocks must be zero. With that in mind, we can however make the effect of the shock propagate slowly throughout the economy by introducing a ``latent shock variable'' such as $e_t$ in our example, that affects the model's true exogenous variable, $z_t$ in our example, which is itself an $AR(1)$, exactly as in the model we introduced from the outset. In that case, though, we would declare $z_t$ as an endogenous variable and $e_t$ as an exogenous variable, as we did in the preamble of the .mod file in section \ref{sec:preamble}. Supposing we wanted to add a shock with variance $\sigma^2$, where $\sigma$ is determined in the preamble block, we would write: \\
\\
\texttt{shocks;\\
var e = sigma\textasciicircum 2;\\
end;}\\
\\
\textsf{\textbf{TIP!}} You can actually \textbf{mix in deterministic shocks} in stochastic models by using the commands \texttt{varexo\_det} and listing some shocks as lasting more than one period in the \texttt{shocks} block. For information on how to do so, please see the \href{http://www.dynare.org/documentation-and-support/manual}{Reference Manual}. This can be particularly useful if you're studying the effects of anticipated shocks in a stochastic model. For instance, you may be interested in what happens to your monetary model if agents began expecting higher inflation, or a depreciation of your currency. \\
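As a sketch of the syntax (the deterministic exogenous variable \texttt{tau} below is purely illustrative, and the exact behavior of \texttt{stoch\_simul} and \texttt{forecast} in the presence of \texttt{varexo\_det} should be checked in the Reference Manual), you would declare both types of exogenous variables and then list the anticipated path of the deterministic one in the \texttt{shocks} block:\\
\\
\texttt{varexo e;\\
varexo\_det tau;\\
...\\
shocks;\\
var e = sigma\textasciicircum 2;\\
var tau;\\
periods 1:9;\\
values 0.1;\\
end;}\\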
\section{Selecting a computation} \label{sec:compute}
So far, we have written an instructive .mod file, but what should Dynare do with it? What are we interested in? In most cases, we will be interested in the impulse response functions (IRFs) to the exogenous shocks. Let's see which commands are appropriate to give to Dynare. Again, we will distinguish between deterministic and stochastic models. \\
\subsection{For deterministic models}
In the deterministic case, all you need to do is add the command \texttt{simul} at the bottom of your .mod file. Note that the command takes the option \mbox{\texttt{(periods=INTEGER)}}. The command \texttt{simul} triggers the computation of a numerical simulation of the trajectory of the model's solution for the number of periods set in that option. To do so, it uses a Newton method to solve simultaneously all the equations for every period (see \citet{Juillard1996} for details). Note that unless you use the \texttt{endval} command, the algorithm makes the simplifying assumption that the system is back to equilibrium after the specified number of periods. Thus, you must specify a large enough number of periods such that increasing it further doesn't change the simulation for all practical purposes. In the case of a temporary shock, for instance, the trajectory will basically describe how the system gets back to equilibrium after being perturbed by the shocks you entered.\\
\subsection{For stochastic models}
In the more common case of stochastic models, the command \texttt{stoch\_simul} is appropriate. This command instructs Dynare to compute a Taylor approximation of the decision and transition functions for the model (the equations listing current values of the endogenous variables of the model as a function of the previous state of the model and current shocks), impulse response
functions and various descriptive statistics (moments, variance decomposition, correlation and autocorrelation coefficients).\footnote{For correlated shocks, the variance decomposition is computed as in the VAR literature through a Cholesky
decomposition of the covariance matrix of the exogenous variables. When the shocks are correlated, the variance
decomposition depends upon the order of the variables in the varexo command.}\\
Impulse response functions are the expected future path of the endogenous variables conditional on a shock of one standard deviation in period 1. \textsf{\textbf{TIP!}} If you linearize your model to first order, impulse response functions are simply the algebraic forward iteration of your model's policy or decision rule. If you instead linearize to second order, impulse response functions will be the result of actual Monte Carlo simulations of future shocks. This is because the second order approximation contains cross terms involving the shocks, so that the effects of the shocks depend on the state of the system when the shocks hit. Thus, it is impossible to work out algebraically the average impact of all future shocks. The technique is instead to draw future shocks from their distribution, see how they impact your system, and repeat this procedure a multitude of times in order to draw out an average response. That said, note that future shocks will not have a significant impact on your results, since they are averaged across Monte Carlo trials and, having mean zero, should in the limit sum to zero. Note that in the case of a second order approximation, Dynare will return the actual sample moments from the simulations. For first order linearizations, Dynare will instead report theoretical moments. In both cases, the return to steady state is asymptotic; \textsf{\textbf{TIP!}} thus, you should make sure to specify sufficient periods in your IRFs such that you actually see your graphs return to steady state. Details on implementing this appear below.\\
If you're interested to peer a little further into what exactly is going on behind the scenes of Dynare's computations, have a look at Chapter \ref{ch:solbeh}. Here instead, we focus on the application of the command and reproduce below the most common options that can be added to \texttt{stoch\_simul}. For a complete list of options, please see the \href{http://www.dynare.org/documentation-and-support/manual}{Reference Manual}. \\
\textbf{Options following the \texttt{stoch\_simul} command:}
\begin{itemize}
\item ar = INTEGER: Order of autocorrelation coefficients to compute and to print (default = 5).
\item dr\_algo = 0 or 1: specifies the algorithm used for computing the quadratic approximation of the decision rules: $0$ uses a pure perturbation approach as in \citet{SchmittGrohe2004} (default) and 1 moves the point around which the Taylor expansion is computed toward the means of the distribution as in \citet{CollardJuillard2001a}.
\item drop = INTEGER: number of points dropped at the beginning of simulation before computing the summary
statistics (default = 100).
\item hp\_filter = INTEGER: uses HP filter with lambda = INTEGER before computing moments (default: no filter).
\item hp\_ngrid = INTEGER: number of points in the grid for the discrete Inverse Fast Fourier Transform used in the HP filter computation. It may be necessary to increase it for highly autocorrelated processes (default = 512).
\item irf = INTEGER: number of periods on which to compute the IRFs (default = 40). Setting irf=0 suppresses the plotting of IRFs.
\item relative\_irf requests the computation of normalized IRFs in percentage of the standard error of each shock.
\item nocorr: doesn't print the correlation matrix (printing is the default).
\item nofunctions: doesn't print the coefficients of the approximated solution (printing is the default).
\item nomoments: doesn't print moments of the endogenous variables (printing them is the default).
\item noprint: cancels any printing; useful for loops.
\item order = 1 or 2 : order of Taylor approximation (default = 2), unless you're working with a linear model in which case the order is automatically set to 1.
\item periods = INTEGER: specifies the number of periods to use in simulations (default = 0). \textsf{\textbf{TIP!}} A simulation is similar to running impulse response functions with a model linearized to the second order, in that both sample shocks from their distribution to see how the system reacts, but a simulation only repeats the process once, whereas impulse response functions run a multitude of Monte Carlo trials in order to get an average response of your system.
\item qz\_criterium = INTEGER or DOUBLE: value used to split stable from unstable eigenvalues in reordering the
Generalized Schur decomposition used for solving 1st order problems (default = 1.000001).
\item replic = INTEGER: number of simulated series used to compute the IRFs (default = 1 if order = 1, and 50
otherwise).
\end{itemize}
Going back to our good old example, suppose we are interested in printing all the various measures of moments of our variables, want to see impulse response functions for all variables, are basically happy with all the default options and want to carry out simulations over a good number of periods. We would then end our .mod file with the following command:\\
\\
\texttt{stoch\_simul(periods=2100);}\\
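If, instead, we only cared about a few variables and wanted, say, HP-filtered theoretical moments at a first order approximation together with longer impulse response functions, a call along the following lines would do the trick (the option values are purely illustrative, and the list of variable names after the closing parenthesis restricts the output to those variables):\\
\\
\texttt{stoch\_simul(order=1, hp\_filter=1600, irf=60, nofunctions) y c i;}\\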
\section{The complete .mod file}
For the sake of completeness, and for the pleasure of seeing our work bear fruit, here are the complete .mod files corresponding to our example for the deterministic and stochastic cases. You can find the corresponding files in the \textsl{models} folder under \textsl{UserGuide} in your installation of Dynare. The files are called \texttt{RBC\_Monop\_JFV.mod} for the stochastic model and \texttt{RBC\_Monop\_Det.mod} for the deterministic model.
\subsection{The stochastic model}
\texttt{var y c k i l y\_l w r z;\\
varexo e;\\
parameters beta psi delta alpha rho gamma sigma epsilon;\\
\\
alpha = 0.33;\\
beta = 0.99;\\
delta = 0.023;\\
psi = 1.75;\\
rho = 0.95;\\
sigma = (0.007/(1-alpha));\\
epsilon = 10;\\
\\
model;\\
(1/c) = beta*(1/c(+1))*(1+r(+1)-delta);\\
psi*c/(1-l) = w;\\
c+i = y;\\
y = (k(-1)\textasciicircum alpha)*(exp(z)*l)\textasciicircum (1-alpha);\\
w = y*((epsilon-1)/epsilon)*(1-alpha)/l;\\
r = y*((epsilon-1)/epsilon)*alpha/k(-1);\\
i = k-(1-delta)*k(-1);\\
y\_l = y/l;\\
z = rho*z(-1)+e;\\
end;\\
\\
initval;\\
k = 9;\\
c = 0.76;\\
l = 0.3;\\
w = 2.07;\\
r = 0.03;\\
z = 0;\\
e = 0;\\
end;\\
\\
steady;\\
check;\\
\\
shocks;\\
var e = sigma\textasciicircum 2;\\
end;\\
\\
stoch\_simul(periods=2100);}
\subsection{The deterministic model (case of temporary shock)}
\texttt{var y c k i l y\_l w r ;\\
varexo z;\\
parameters beta psi delta alpha sigma epsilon;\\
alpha = 0.33;\\
beta = 0.99;\\
delta = 0.023;\\
psi = 1.75;\\
sigma = (0.007/(1-alpha));\\
epsilon = 10;\\
\\
model;\\
(1/c) = beta*(1/c(+1))*(1+r(+1)-delta);\\
psi*c/(1-l) = w;\\
c+i = y;\\
y = (k(-1)\textasciicircum alpha)*(exp(z)*l)\textasciicircum (1-alpha);\\
w = y*((epsilon-1)/epsilon)*(1-alpha)/l;\\
r = y*((epsilon-1)/epsilon)*alpha/k(-1);\\
i = k-(1-delta)*k(-1);\\
y\_l = y/l;\\
end;\\
\\
initval;\\
k = 9;\\
c = 0.7;\\
l = 0.3;\\
w = 2.0;\\
r = 0;\\
z = 0; \\
end;\\
\\
steady;\\
\\
check;\\
\\
shocks;\\
var z;\\
periods 1:9;\\
values 0.1;\\
end;\\
\\
simul(periods=2100);}\\
\section{File execution and results}
To see this all come to life, let's run our .mod file, which is conveniently installed by default in the Dynare ``examples'' directory (the .mod file corresponding to the stochastic model is called RBC\_Monop\_JFV.mod and that corresponding to the deterministic model is called RBC\_Monop\_Det.mod). (** note, this may not be the case when testing the beta version of Dynare version 4) \\
\textbf{To run a .mod file}, navigate within Matlab to the directory where the example .mod files are stored. You can do this by clicking in the ``current directory'' window of Matlab, or typing the path directly in the top white field of Matlab. Once there, all you need to do is place your cursor in the Matlab command window and type, for instance, \texttt{dynare RBC\_Monop\_JFV;} to execute your .mod file. \\
Running these .mod files should take at most 30 seconds. As a result, you should get two forms of output - tabular in the Matlab command window and graphical in one or more pop-up windows. Let's review these results.\\
\subsection{Results - stochastic models}
\textbf{The tabular results} can be summarized as follows:
\begin{enumerate}
\item \textbf{Model summary:} a count of the various variable types in your model (endogenous, jumpers, etc...).
\item \textbf{Eigenvalues} should be displayed, and you should see a confirmation of the Blanchard-Kahn conditions if you used the command \texttt{check} in your .mod file.
\item \textbf{Matrix of covariance of exogenous shocks:} this should square with the values of the shock variances and co-variances you provided in the .mod file.
\item \textbf{Policy and transition functions:} Solving the rational expectations model, $\mathbb{E}_t[f(y_{t+1},y_t,y_{t-1},u_t)]=0$, means finding an unknown function, $y_t = g(y_{t-1},u_t)$, that could be plugged into the original model and satisfy the implied restrictions (the first order conditions). A first order approximation of this function can be written as $y_t = \bar{y} + g_y \hat{y}_{t-1} + g_u u_t$, with $\hat{y}_t = y_t-\bar{y}$ and $\bar{y}$ being the steady state value of $y$, and where $g_x$ is the partial derivative of the $g$ function with respect to variable $x$. In other words, the function $g$ is a time recursive (approximated) representation of the model that can generate time series that will approximately satisfy the rational expectations hypothesis contained in the original model. In Dynare, the table ``Policy and Transition function'' contains the elements of $g_y$ and $g_u$. Details on the policy and transition function can be found in Chapter \ref{ch:estadv}; a short sketch of where Dynare stores these objects in Matlab follows this list.
\item \textbf{Moments of simulated variables:} up to the fourth moments.
\item \textbf{Correlation of simulated variables:} these are the contemporaneous correlations, presented in a table.
\item \textbf{Autocorrelation of simulated variables:} up to the fifth lag, as specified in the options of \texttt{stoch\_simul}.
\end{enumerate}
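As promised in the item on policy and transition functions above, here is a rough sketch of where these objects live in Matlab after \texttt{stoch\_simul} has run. The field names below are those used by Dynare version 4 and should be double-checked in the Reference Manual for your version:\\
\\
\texttt{Gy = oo\_.dr.ghx;  \% coefficients on the lagged state variables, i.e. g\_y\\
Gu = oo\_.dr.ghu;  \% coefficients on the current shocks, i.e. g\_u\\
ybar = oo\_.dr.ys;  \% steady state, in declaration order\\
\% rows of ghx and ghu are in decision-rule order;\\
\% oo\_.dr.order\_var maps them back to the declaration order of var}\\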
\textbf{The graphical results}, instead, show the actual impulse response functions for each of the endogenous variables, given that they actually moved. These can be especially useful in visualizing the shape of the transition functions and the extent to which each variable is affected. \textsf{\textbf{TIP!}} If some variables do not return to their steady state, either check that you have included enough periods in your simulations, or make sure that your model is stationary, i.e. that your steady state actually exists and is stable. If not, you should detrend your variables and rewrite your model in terms of those variables.
\subsection{Results - deterministic models}
Automatically displayed results are much scarcer in the case of deterministic models. If you entered \texttt{steady}, you will get a list of your steady state results. If you entered \texttt{check}, eigenvalues will also be displayed and you should receive a statement that the rank condition has been satisfied, if all goes well! Finally, you will see some intermediate output: the errors at each iteration of the Newton solver used to compute the solution of your model. \textsf{\textbf{TIP!}} You should see these errors decrease upon each iteration; if not, your model will probably not converge. If so, you may want to try to increase the number of periods for the transition to the new steady state (the number of simulation periods). But more often, it may be a good idea to revise your equations. Of course, the fact that Dynare does not display a rich set of statistics and graphs corresponding to the simulated output does not mean that you cannot create these by hand from Matlab. To do so, you should start by looking at section \ref{sec:FindOut} of chapter \ref{ch:soladv} on finding, saving and viewing your output.
\chapter{Solving DSGE models - Behind the scenes of Dynare} \label{ch:solbeh}
\section{Introduction}
The aim of this chapter is to peer behind the scenes of Dynare, or under its hood, to get an idea of the methodologies and algorithms used in its computations. Going into details would be beyond the scope of this User Guide, which will instead remain at a high level. What you will find below will either comfort you in realizing that Dynare does what you expected of it - and what you would have also done if you had had to code it all yourself (with a little extra time on your hands!), or will spur your curiosity to have a look at more detailed material. If so, you may want to go through Michel Juillard's presentation on solving DSGE models to a first and second order (available on Michel Juillard's \href{http://jourdan.ens.fr/~michel/}{website}), or read \citet{CollardJuillard2001b} or \citet{SchmittGrohe2004}, which give a good overview of the most recent solution techniques based on perturbation methods. Finally, note that in this chapter we will focus on stochastic models - which is where the major complication lies, as explained in section \ref{sec:detstoch} of chapter \ref{ch:solbase}. For more details on the Newton-Raphson algorithm used in Dynare to solve deterministic models, see \citet{Juillard1996}. \\
\section{What is the advantage of a second order approximation?}
As noted in chapter \ref{ch:solbase} and as will become clear in the section below, linearizing a system of equations to the first order raises the issue of certainty equivalence. This is because only the first moments of the shocks enter the linearized equations, and when expectations are taken, they disappear. Thus, unconditional expectations of the endogenous variables are equal to their non-stochastic steady state values. \\
This may be an acceptable simplification to make. But depending on the context, it may instead be quite misleading. For instance, when using second order welfare functions to compare policies, you also need second order approximations of the policy function. More strikingly, in the case of asset pricing models, linearizing to the second order enables you to take risk (or the variance of shocks) into consideration - a highly desirable modeling feature. It is therefore very convenient that Dynare allows you to choose between a first and a second order linearization of your model in the options of the \texttt{stoch\_simul} command. \\
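Concretely, the order is chosen directly in the \texttt{stoch\_simul} call at the bottom of your .mod file (pick one of the two lines; as noted in the list of options in the previous chapter, 2 is the default):\\
\\
\texttt{stoch\_simul(order=1); // first order approximation\\
stoch\_simul(order=2); // second order approximation (the default)}\\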
\section{How does Dynare solve stochastic DSGE models?}
In this section, we shall briefly overview the perturbation methods employed by Dynare to solve DSGE models to a first order approximation. The second order follows very much the same approach, although at a higher level of complexity. The summary below is taken mainly from Michel Juillard's presentation ``Computing first order approximations of DSGE models with Dynare'', which you should read if interested in particular details, especially regarding second order approximations (available on Michel Juillard's \href{http://jourdan.ens.fr/~michel/}{website}). \\
To summarize, a DSGE model is a collection of first order and equilibrium conditions that take the general form:
\[
\mathbb{E}_t\left\{f(y_{t+1},y_t,y_{t-1},u_t)\right\}=0
\]
\begin{eqnarray*}
\mathbb{E}(u_t) &=& 0\\
\mathbb{E}(u_t u_t') &=& \Sigma_u
\end{eqnarray*}
and where:
\begin{description}
\item[$y$]: vector of endogenous variables of any dimension
\item[$u$]: vector of exogenous stochastic shocks of any dimension
\end{description}
The solution to this system is a set of equations relating variables in the current period to the past state of the system and current shocks, that satisfy the original system. This is what we call the policy function. Sticking to the above notation, we can write this function as:
\[
y_t = g(y_{t-1},u_t)
\]
Then, it is straightforward to re-write $y_{t+1}$ as
\begin{eqnarray*}
y_{t+1} &=& g(y_t,u_{t+1})\\
&=& g(g(y_{t-1},u_t),u_{t+1})\\
\end{eqnarray*}
We can then define a new function $F$, such that:
\[
F(y_{t-1},u_t,u_{t+1}) =
f(g(g(y_{t-1},u_t),u_{t+1}),g(y_{t-1},u_t),y_{t-1},u_t)
\]
which enables us to rewrite our system in terms of past variables, and current and future shocks:
\[
\mathbb{E}_t\left[F(y_{t-1},u_t,u_{t+1})\right] = 0
\]
We then venture to linearize this model around a steady state defined as:
\[
f(\bar y, \bar y, \bar y, 0) = 0
\]
having the property that:
\[
\bar y = g(\bar y, 0)
\]
The first order Taylor expansion around $\bar y$ yields:
\begin{eqnarray*}
\lefteqn{\mathbb{E}_t\left\{F^{(1)}(y_{t-1},u_t,u_{t+1})\right\} =}\\
&& \mathbb{E}_t\Big[f(\bar y, \bar y, \bar y)+f_{y_+}\left(g_y\left(g_y\hat y+g_uu \right)+g_u u' \right)\\
&& + f_{y_0}\left(g_y\hat y+g_uu \right)+f_{y_-}\hat y+f_u u\Big]\\
&& = 0
\end{eqnarray*}
with $\hat y = y_{t-1} - \bar y$, $u=u_t$, $u'=u_{t+1}$, $f_{y_+}=\frac{\partial f}{\partial y_{t+1}}$, $f_{y_0}=\frac{\partial f}{\partial y_t}$, $f_{y_-}=\frac{\partial f}{\partial y_{t-1}}$, $f_{u}=\frac{\partial f}{\partial u_t}$, $g_y=\frac{\partial g}{\partial y_{t-1}}$, $g_u=\frac{\partial g}{\partial u_t}$.\\
Taking expectations (we're almost there!):
\begin{eqnarray*}
\lefteqn{\mathbb{E}_t\left\{F^{(1)}(y_{t-1},u_t, u_{t+1})\right\} =}\\
&& f(\bar y, \bar y, \bar y)+f_{y_+}\left(g_y\left(g_y\hat y+g_uu \right) \right)\\
&& + f_{y_0}\left(g_y\hat y+g_uu \right)+f_{y_-}\hat y+f_u u\\
&=& \left(f_{y_+}g_yg_y+f_{y_0}g_y+f_{y_-}\right)\hat y+\left(f_{y_+}g_yg_u+f_{y_0}g_u+f_{u}\right)u\\
&=& 0\\
\end{eqnarray*}
As you can see, since future shocks only enter with their first moments (which are zero in expectations), they drop out when taking expectations of the linearized equations. This is technically why certainty equivalence holds in a system linearized to its first order. The second thing to note is that we have two unknown variables in the above equation: $g_y$ and $g_u$, each of which will help us recover the policy function $g$. \\
Since the above equation holds for any $\hat y$ and any $u$, each parenthesis must be null and we can solve for each in turn. The first yields a quadratic equation in $g_y$, which we can solve with a series of algebraic tricks that are not all immediately apparent (but are detailed in Michel Juillard's presentation). Incidentally, one of the conditions that comes out of the solution of this equation is the Blanchard-Kahn condition: there must be as many roots larger than one in modulus as there are forward-looking variables in the model. Having recovered $g_y$, recovering $g_u$ is then straightforward from the second parenthesis. \\
Finally, notice that a first order linearization of the function $g$ yields:
\[
y_t = \bar y+g_y\hat y+g_u u
\]
And now that we have $g_y$ and $g_u$, we have solved for the (approximate) policy (or decision) function and have succeeded in solving our DSGE model. If we were interested in impulse response functions, for instance, we would simply iterate the policy function starting from an initial value given by the steady state. \\
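As a purely schematic illustration of this forward iteration, here is a small piece of Matlab code; the matrices \texttt{gy} and \texttt{gu} stand for $g_y$ and $g_u$, the numbers are made up, and nothing here is tied to the way Dynare stores its decision rules internally:\\
\\
\texttt{gy = [0.95 0; 0.10 0.90];  \% hypothetical g\_y for two variables\\
gu = [1; 0.5];  \% hypothetical g\_u for one shock\\
ybar = [1; 0.3];  \% hypothetical steady state\\
T = 40; yhat = zeros(2,1); u = 0.01; irfs = zeros(2,T);\\
for t = 1:T\\
yhat = gy*yhat + gu*u;  \% iterate the policy function\\
irfs(:,t) = ybar + yhat;\\
u = 0;  \% expected future shocks are zero at first order\\
end\\
plot(irfs');}\\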
The second order solution uses the same ``perturbation methods'' as above (the notion of starting from a function you can solve - like a steady state - and iterating forward), yet applies more complex algebraic techniques to recover the various partial derivatives of the policy function. But the general approach is perfectly isomorphic. Note that in the case of a second order approximation of a DSGE model, the variance of future shocks remains after taking expectations of the linearized equations and therefore affects the level of the resulting policy function.\\
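Schematically, and using the same notation as above, the resulting second order decision rule takes the form (this is the generic perturbation representation, not a description of Dynare's internal storage):
\[
y_t = \bar y + \frac{1}{2}\Delta^2 + g_y \hat y + g_u u + \frac{1}{2}\left[g_{yy}\left(\hat y\otimes\hat y\right) + g_{uu}\left(u\otimes u\right)\right] + g_{yu}\left(\hat y\otimes u\right)
\]
where $\Delta^2$ is the constant correction induced by the variance of future shocks, and $g_{yy}$, $g_{yu}$ and $g_{uu}$ collect the second order partial derivatives of $g$. The presence of the $\Delta^2$ term is precisely why certainty equivalence no longer holds at second order.\\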
\chapter{Troubleshooting} \label{ch:trouble}
To make sure this section is as user friendly as possible, the best is to compile what users have to say! Please let me know what your most common problem is with Dynare, how Dynare tells you about it and how you solve it. Thanks for your precious help!
// example 1 from Collard's guide to Dynare
var y, c, k, a, h, b;
varexo e, u;
parameters beta, rho, alpha, delta, theta, psi, tau, phi;
alpha = 0.36;
rho = 0.95;
tau = 0.025;
beta = 0.99;
delta = 0.025;
psi = 0;
theta = 2.95;
phi = 0.1;
model;
c*theta*h^(1+psi)=(1-alpha)*y;
k = beta*(((exp(b)*c)/(exp(b(+1))*c(+1)))
*(exp(b(+1))*alpha*y(+1)+(1-delta)*k));
y = exp(a)*(k(-1)^alpha)*(h^(1-alpha));
k = exp(b)*(y-c)+(1-delta)*k(-1);
a = rho*a(-1)+tau*b(-1) + e;
b = tau*a(-1)+rho*b(-1) + u;
end;
initval;
y = 1.08068253095672;
c = 0.80359242014163;
h = 0.29175631001732;
k = 5;
a = 0;
b = 0;
e = 0;
u = 0;
end;
shocks;
var e; stderr 0.009;
var u; stderr 0.009;
var e, u = phi*0.009*0.009;
end;
stoch_simul(periods=2100);
% Basic RBC Model with Monopolistic Competion.
%
% Jesus Fernandez-Villaverde
% Philadelphia, March 3, 2005
%----------------------------------------------------------------
% 0. Housekeeping
%----------------------------------------------------------------
close all
%----------------------------------------------------------------
% 1. Defining variables
%----------------------------------------------------------------
var y c k i l y_l w r z;
varexo e;
parameters beta psi delta alpha rho gamma sigma epsilon;
%----------------------------------------------------------------
% 2. Calibration
%----------------------------------------------------------------
alpha = 0.33;
beta = 0.99;
delta = 0.023;
psi = 1.75;
rho = 0.95;
sigma = (0.007/(1-alpha));
epsilon = 10;
%----------------------------------------------------------------
% 3. Model
%----------------------------------------------------------------
model;
(1/c) = beta*(1/c(+1))*(1+r(+1)-delta);
psi*c/(1-l) = w;
c+i = y;
y = (k(-1)^alpha)*(exp(z)*l)^(1-alpha);
w = y*((epsilon-1)/epsilon)*(1-alpha)/l;
r = y*((epsilon-1)/epsilon)*alpha/k(-1);
i = k-(1-delta)*k(-1);
y_l = y/l;
z = rho*z(-1)+e;
end;
%----------------------------------------------------------------
% 4. Computation
%----------------------------------------------------------------
initval;
k = 9;
c = 0.76;
l = 0.3;
w = 2.07;
r = 0.03;
z = 0;
e = 0;
end;
shocks;
var e = sigma^2;
end;
steady;
stoch_simul(periods=1000,irf=0);
datatomfile('simuldataRBC',[]);
return;
var y c k i l y_l w r z;
varexo e;
parameters beta psi delta alpha rho epsilon;
model;
(1/c) = beta*(1/c(+1))*(1+r(+1)-delta);
psi*c/(1-l) = w;
c+i = y;
y = (k(-1)^alpha)*(exp(z)*l)^(1-alpha);
w = y*((epsilon-1)/epsilon)*(1-alpha)/l;
r = y*((epsilon-1)/epsilon)*alpha/k(-1);
i = k-(1-delta)*k(-1);
y_l = y/l;
z = rho*z(-1)+e;
end;
varobs y;
initval;
k = 9;
c = 0.76;
l = 0.3;
w = 2.07;
r = 0.03;
z = 0;
e = 0;
end;
estimated_params;
alpha, beta_pdf, 0.35, 0.02;
beta, beta_pdf, 0.99, 0.002;
delta, beta_pdf, 0.025, 0.003;
psi, gamma_pdf, 1.75, 0.1;
rho, beta_pdf, 0.95, 0.05;
epsilon, gamma_pdf, 10, 0.5;
stderr e, inv_gamma_pdf, 0.01, inf;
end;
estimation(datafile=simuldataRBC,nobs=200,first_obs=500,mh_replic=2000,mh_nblocks=2,mh_drop=0.45,mh_jscale=0.8,mode_compute=4);
var y c k i l y_l w r ;
varexo z;
parameters beta psi delta alpha sigma epsilon;
alpha = 0.33;
beta = 0.99;
delta = 0.023;
psi = 1.75;
sigma = (0.007/(1-alpha));
epsilon = 10;
model;
(1/c) = beta*(1/c(+1))*(1+r(+1)-delta);
psi*c/(1-l) = w;
c+i = y;
y = (k(-1)^alpha)*(exp(z)*l)^(1-alpha);
w = y*((epsilon-1)/epsilon)*(1-alpha)/l;
r = y*((epsilon-1)/epsilon)*alpha/k(-1);
i = k-(1-delta)*k(-1);
y_l = y/l;
end;
initval;
k = 9;
c = 0.7;
l = 0.3;
w = 2.0;
r = 0;
z = 0;
end;
steady;
check;
shocks;
var z;
periods 1:9;
values 0.1;
end;
simul(periods=2100);
// Adapted from Jesus Fernandez-Villaverde, Basic RBC Model with Monopolistic Competion Philadelphia, March 3, 2005
var y c k i l y_l w r z;
varexo e;
parameters beta psi delta alpha rho gamma sigma epsilon;
alpha = 0.33;
beta = 0.99;
delta = 0.023;
psi = 1.75;
rho = 0.95;
sigma = (0.007/(1-alpha));
epsilon = 10;
model;
(1/c) = beta*(1/c(+1))*(1+r(+1)-delta);
psi*c/(1-l) = w;
c+i = y;
y = (k(-1)^alpha)*(exp(z)*l)^(1-alpha);
w = y*((epsilon-1)/epsilon)*(1-alpha)/l;
r = y*((epsilon-1)/epsilon)*alpha/k(-1);
i = k-(1-delta)*k(-1);
y_l = y/l;
z = rho*z(-1)+e;
end;
initval;
k = 9;
c = 0.76;
l = 0.3;
w = 2.07;
r = 0.03;
z = 0;
e = 0;
end;
steady;
check;
shocks;
var e = sigma^2;
end;
stoch_simul(periods=2100);
// This file replicates the estimation of the CIA model from
// Frank Schorfheide (2000) "Loss function-based evaluation of DSGE models"
// Journal of Applied Econometrics, 15, 645-670.
// the data are the ones provided on Schorfheide's web site with the programs.
// http://www.econ.upenn.edu/~schorf/programs/dsgesel.ZIP
// You need to have fsdat.m in the same directory as this file.
// This file replicates:
// -the posterior mode as computed by Frank's Gauss programs
// -the parameter mean posterior estimates reported in the paper
// -the model probability (harmonic mean) reported in the paper
// This file was tested with dyn_mat_test_0218.zip
// the smooth shocks are probably still buggy
//
// The equations are taken from J. Nason and T. Cogley (1994)
// "Testing the implications of long-run neutrality for monetary business
// cycle models" Journal of Applied Econometrics, 9, S37-S70.
// Note that there is an initial minus sign missing in equation (A1), p. S63.
//
// Michel Juillard, February 2004
var m P c e W R k d n l Y_obs P_obs y dA;
varexo e_a e_m;
parameters alp bet gam mst rho psi del;
model;
dA = exp(gam+e_a);
log(m) = (1-rho)*log(mst) + rho*log(m(-1))+e_m;
-P/(c(+1)*P(+1)*m)+bet*P(+1)*(alp*exp(-alp*(gam+log(e(+1))))*k^(alp-1)*n(+1)^(1-alp)+(1-del)*exp(-(gam+log(e(+1)))))/(c(+2)*P(+2)*m(+1))=0;
W = l/n;
-(psi/(1-psi))*(c*P/(1-n))+l/n = 0;
R = P*(1-alp)*exp(-alp*(gam+e_a))*k(-1)^alp*n^(-alp)/W;
1/(c*P)-bet*P*(1-alp)*exp(-alp*(gam+e_a))*k(-1)^alp*n^(1-alp)/(m*l*c(+1)*P(+1)) = 0;
c+k = exp(-alp*(gam+e_a))*k(-1)^alp*n^(1-alp)+(1-del)*exp(-(gam+e_a))*k(-1);
P*c = m;
m-1+d = l;
e = exp(e_a);
y = k(-1)^alp*n^(1-alp)*exp(-alp*(gam+e_a));
Y_obs/Y_obs(-1) = dA*y/y(-1);
P_obs/P_obs(-1) = (P/P(-1))*m(-1)/dA;
end;
varobs P_obs Y_obs;
observation_trends;
P_obs (log(mst)-gam);
Y_obs (gam);
end;
initval;
k = 6;
m = mst;
P = 2.25;
c = 0.45;
e = 1;
W = 4;
R = 1.02;
d = 0.85;
n = 0.19;
l = 0.86;
y = 0.6;
dA = exp(gam);
end;
// the above is really only useful if you want to do a stoch_simul
// of your model, since the estimation will use the Matlab
// steady state file also provided and discussed above.
estimated_params;
alp, beta_pdf, 0.356, 0.02;
bet, beta_pdf, 0.993, 0.002;
gam, normal_pdf, 0.0085, 0.003;
mst, normal_pdf, 1.0002, 0.007;
rho, beta_pdf, 0.129, 0.223;
psi, beta_pdf, 0.65, 0.05;
del, beta_pdf, 0.01, 0.005;
stderr e_a, inv_gamma_pdf, 0.035449, inf;
stderr e_m, inv_gamma_pdf, 0.008862, inf;
end;
estimation(datafile=fsdat,nobs=192,loglinear,mh_replic=2000,
mode_compute=4,mh_nblocks=2,mh_drop=0.45,mh_jscale=0.65,diffuse_filter);
% computes the steady state of fs2000 analytically
% largely inspired by the program of F. Schorfheide
function [ys,check] = fs2000ns_steadystate(ys,exe)
global M_
alp = M_.params(1);
bet = M_.params(2);
gam = M_.params(3);
mst = M_.params(4);
rho = M_.params(5);
psi = M_.params(6);
del = M_.params(7);
check = 0;
dA = exp(gam);
gst = 1/dA;
m = mst;
khst = ( (1-gst*bet*(1-del)) / (alp*gst^alp*bet) )^(1/(alp-1));
xist = ( ((khst*gst)^alp - (1-gst*(1-del))*khst)/mst )^(-1);
nust = psi*mst^2/( (1-alp)*(1-psi)*bet*gst^alp*khst^alp );
n = xist/(nust+xist);
P = xist + nust;
k = khst*n;
l = psi*mst*n/( (1-psi)*(1-n) );
c = mst/P;
d = l - mst + 1;
y = k^alp*n^(1-alp)*gst^alp;
R = mst/bet;
W = l/n;
ist = y-c;
q = 1 - d;
e = 1;
P_obs = 1;
Y_obs = 1;
ys =[
m
P
c
e
W
R
k
d
n
l
Y_obs
P_obs
y
dA ];
data_q = [
18.02 1474.5 150.2
17.94 1538.2 150.9
18.01 1584.5 151.4
18.42 1644.1 152
18.73 1678.6 152.7
19.46 1693.1 153.3
19.55 1724 153.9
19.56 1758.2 154.7
19.79 1760.6 155.4
19.77 1779.2 156
19.82 1778.8 156.6
20.03 1790.9 157.3
20.12 1846 158
20.1 1882.6 158.6
20.14 1897.3 159.2
20.22 1887.4 160
20.27 1858.2 160.7
20.34 1849.9 161.4
20.39 1848.5 162
20.42 1868.9 162.8
20.47 1905.6 163.6
20.56 1959.6 164.3
20.62 1994.4 164.9
20.78 2020.1 165.7
21 2030.5 166.5
21.2 2023.6 167.2
21.33 2037.7 167.9
21.62 2033.4 168.7
21.71 2066.2 169.5
22.01 2077.5 170.2
22.15 2071.9 170.9
22.27 2094 171.7
22.29 2070.8 172.5
22.56 2012.6 173.1
22.64 2024.7 173.8
22.77 2072.3 174.5
22.88 2120.6 175.3
22.92 2165 176.045
22.91 2223.3 176.727
22.94 2221.4 177.481
23.03 2230.95 178.268
23.13 2279.22 179.694
23.22 2265.48 180.335
23.32 2268.29 181.094
23.4 2238.57 181.915
23.45 2251.68 182.634
23.51 2292.02 183.337
23.56 2332.61 184.103
23.63 2381.01 184.894
23.75 2422.59 185.553
23.81 2448.01 186.203
23.87 2471.86 186.926
23.94 2476.67 187.68
24 2508.7 188.299
24.07 2538.05 188.906
24.12 2586.26 189.631
24.29 2604.62 190.362
24.35 2666.69 190.954
24.41 2697.54 191.56
24.52 2729.63 192.256
24.64 2739.75 192.938
24.77 2808.88 193.467
24.88 2846.34 193.994
25.01 2898.79 194.647
25.17 2970.48 195.279
25.32 3042.35 195.763
25.53 3055.53 196.277
25.79 3076.51 196.877
26.02 3102.36 197.481
26.14 3127.15 197.967
26.31 3129.53 198.455
26.6 3154.19 199.012
26.9 3177.98 199.572
27.21 3236.18 199.995
27.49 3292.07 200.452
27.75 3316.11 200.997
28.12 3331.22 201.538
28.39 3381.86 201.955
28.73 3390.23 202.419
29.14 3409.65 202.986
29.51 3392.6 203.584
29.94 3386.49 204.086
30.36 3391.61 204.721
30.61 3422.95 205.419
31.02 3389.36 206.13
31.5 3481.4 206.763
31.93 3500.95 207.362
32.27 3523.8 208
32.54 3533.79 208.642
33.02 3604.73 209.142
33.2 3687.9 209.637
33.49 3726.18 210.181
33.95 3790.44 210.737
34.36 3892.22 211.192
34.94 3919.01 211.663
35.61 3907.08 212.191
36.29 3947.11 212.708
37.01 3908.15 213.144
37.79 3922.57 213.602
38.96 3879.98 214.147
40.13 3854.13 214.7
41.05 3800.93 215.135
41.66 3835.21 215.652
42.41 3907.02 216.289
43.19 3952.48 216.848
43.69 4044.59 217.314
44.15 4072.19 217.776
44.77 4088.49 218.338
45.57 4126.39 218.917
46.32 4176.28 219.427
47.07 4260.08 219.956
47.66 4329.46 220.573
48.63 4328.33 221.201
49.42 4345.51 221.719
50.41 4510.73 222.281
51.27 4552.14 222.933
52.35 4603.65 223.583
53.51 4605.65 224.152
54.65 4615.64 224.737
55.82 4644.93 225.418
56.92 4656.23 226.117
58.18 4678.96 226.754
59.55 4566.62 227.389
61.01 4562.25 228.07
62.59 4651.86 228.689
64.15 4739.16 229.155
65.37 4696.82 229.674
66.65 4753.02 230.301
67.87 4693.76 230.903
68.86 4615.89 231.395
69.72 4634.88 231.906
70.66 4612.08 232.498
71.44 4618.26 233.074
72.08 4662.97 233.546
72.83 4763.57 234.028
73.48 4849 234.603
74.19 4939.23 235.153
75.02 5053.56 235.605
75.58 5132.87 236.082
76.25 5170.34 236.657
76.81 5203.68 237.232
77.63 5257.26 237.673
78.25 5283.73 238.176
78.76 5359.6 238.789
79.45 5393.57 239.387
79.81 5460.83 239.861
80.22 5466.95 240.368
80.84 5496.29 240.962
81.45 5526.77 241.539
82.09 5561.8 242.009
82.68 5618 242.52
83.33 5667.39 243.12
84.09 5750.57 243.721
84.67 5785.29 244.208
85.56 5844.05 244.716
86.66 5878.7 245.354
87.44 5952.83 245.966
88.45 6010.96 246.46
89.39 6055.61 247.017
90.13 6087.96 247.698
90.88 6093.51 248.374
92 6152.59 248.928
93.18 6171.57 249.564
94.14 6142.1 250.299
95.11 6078.96 251.031
96.27 6047.49 251.65
97 6074.66 252.295
97.7 6090.14 253.033
98.31 6105.25 253.743
99.13 6175.69 254.338
99.79 6214.22 255.032
100.17 6260.74 255.815
100.88 6327.12 256.543
101.84 6327.93 257.151
102.35 6359.9 257.785
102.83 6393.5 258.516
103.51 6476.86 259.191
104.13 6524.5 259.738
104.71 6600.31 260.351
105.39 6629.47 261.04
106.09 6688.61 261.692
106.75 6717.46 262.236
107.24 6724.2 262.847
107.75 6779.53 263.527
108.29 6825.8 264.169
108.91 6882 264.681
109.24 6983.91 265.258
109.74 7020 265.887
110.23 7093.12 266.491
111 7166.68 266.987
111.43 7236.5 267.545
111.76 7311.24 268.171
112.08 7364.63 268.815
];
%GDPD GDPQ GPOP
series = zeros(193,2);
series(:,2) = data_q(:,1);
series(:,1) = 1000*data_q(:,2)./data_q(:,3);
Y_obs = series(:,1);
P_obs = series(:,2);
series = series(2:193,:)./series(1:192,:);
gy_obs = series(:,1);
gp_obs = series(:,2);
ti = [1950:0.25:1997.75];
SUBDIRS = sylv parser/cc tl doc utils/cc integ kord src
EXTRA_DIST = change_log.html c++lib.w tests extern
@q This file defines standard C++ namespaces and classes @>
@q Please send corrections to saroj-tamasa@@worldnet.att.net @>
@s std int
@s rel_ops int
@s bitset int
@s char_traits int
@s deque int
@s list int
@s map int
@s multimap int
@s multiset int
@s pair int
@s set int
@s stack int
@s exception int
@s logic_error int
@s runtime_error int
@s domain_error int
@s invalid_argument int
@s length_error int
@s out_of_range int
@s range_error int
@s overflow_error int
@s underflow_error int
@s back_insert_iterator int
@s front_insert_iterator int
@s insert_iterator int
@s reverse_iterator int
@s istream_iterator int
@s ostream_iterator int
@s istreambuf_iterator int
@s ostreambuf_iterator int
@s iterator_traits int
@s queue int
@s vector int
@s basic_string int
@s string int
@s auto_ptr int
@s valarray int
@s ios_base int
@s basic_ios int
@s basic_streambuf int
@s basic_istream int
@s basic_ostream int
@s basic_iostream int
@s basic_stringbuf int
@s basic_istringstream int
@s basic_ostringstream int
@s basic_stringstream int
@s basic_filebuf int
@s basic_ifstream int
@s basic_ofstream int
@s basic_fstream int
@s ctype int
@s collate int
@s collate_byname int
@s streambuf int
@s istream int
@s ostream int
@s iostream int
@s stringbuf int
@s istringstream int
@s ostringstream int
@s stringstream int
@s filebuf int
@s ifstream int
@s ofstream int
@s fstream int
@s wstreambuf int
@s wistream int
@s wostream int
@s wiostram int
@s wstringbuf int
@s wistringstream int
@s wostringstream int
@s wstringstream int
@s wfilebuf int
@s wifstream int
@s wofstream int
@s wfstream int
@s streamoff int
@s streamsize int
@s fpos int
@s streampos int
@s wstreampos int
<HTML>
<TITLE>
Dynare++ Change Log
</TITLE>
<!-- $Header$ -->
<BODY>
<TABLE CELLSPACING=2 ALIGN="CENTER" BORDER=1>
<TR>
<TD BGCOLOR="#d0d0d0" WIDTH="85"> <b>Revision</b> </TD>
<TD BGCOLOR="#d0d0d0" WIDTH="85"> <b>Version</b></TD>
<TD BGCOLOR="#d0d0d0" WIDTH="80"> <b>Date</b> </TD>
<TD BGCOLOR="#d0d0d0" WIDTH="600"> <b>Description of changes</b></TD>
</TR>
<TR>
<TD>
<TD>1.3.7
<TD>2008/01/15
<TD>
<TR><TD><TD><TD> <TD> Corrected a serious bug in centralizing a
decision rule. This bug implies that all results based on simulations
of the decision rule were wrong. However results based on stochastic
fix points were correct. Thanks to Wouter J. den Haan and Joris de Wind!
<TR><TD><TD><TD> <TD> Added options --centralize and --no-centralize.
<TR><TD><TD><TD> <TD> Corrected an error of a wrong
variance-covariance matrix in real-time simulations (thanks to Pawel
Zabzcyk).
<TR><TD><TD><TD> <TD> Corrected a bug of integer overflow in refined
faa Di Bruno formula if one of refinements is empty. This bug appeared
when solving models without forward looking variables.
<TR><TD><TD><TD> <TD> Corrected a bug in the Sylvester equation
formerly working only for models with forward looking variables.
<TR><TD><TD><TD> <TD> Corrected a bug in global check printout.
<TR><TD><TD><TD> <TD> Added generating a dump file.
<TR><TD><TD><TD> <TD> Fixed a bug of forgetting repeated assignments
(for example in parameter settings and initval).
<TR><TD><TD><TD> <TD> Added a diff operator to the parser.
<TR>
<TD>1539
<TD>1.3.6
<TD>2008/01/03
<TD>
<TR><TD><TD><TD> <TD> Corrected a bug of segmentation faults for long
names and path names.
<TR><TD><TD><TD> <TD> Changed a way how random numbers are
generated. Dynare++ uses a separate instance of Mersenne twister for
each simulation, this corrects a flaw of additional randomness caused
by operating system scheduler. This also corrects a strange behaviour
of random generator on Windows, where each simulation was getting the
same sequence of random numbers.
<TR><TD><TD><TD> <TD> Added calculation of conditional distributions
controlled by --condper and --condsim.
<TR><TD><TD><TD> <TD> Dropped creating the unfolded version of the decision
rule at the end. This might consume a lot of memory. However,
simulations might be slower for some models.
<TR>
<TD>1368
<TD>1.3.5
<TD>2007/07/11
<TD>
<TR><TD><TD><TD> <TD> Corrected a bug of useless storing all derivative
indices in a parser. This consumed a lot of memory for large models.
<TR><TD><TD><TD> <TD> Added an option <tt>--ss-tol</tt> controlling a
tolerance used for convergence of a non-linear solver.
<TR><TD><TD><TD> <TD> Corrected buggy interaction of optimal policy
and forward looking variables with more than one period.
<TR><TD><TD><TD> <TD> Variance matrices can be positive
semidefinite. This corrects a bug of throwing an error if estimating
approximation errors on ellipse of the state space with a
deterministic variable.
<TR><TD><TD><TD> <TD> Implemented simulations with statistics
calculated in real-time. Options <tt>--rtsim</tt> and <tt>--rtper</tt>.
<TR>
<TD>1282
<TD>1.3.4
<TD>2007/05/15
<TD>
<TR><TD><TD><TD> <TD>Corrected a bug of wrong representation of NaN in generated M-files.
<TR><TD><TD><TD> <TD>Corrected a bug of occasionally wrong evaluation of higher order derivatives of integer powers.
<TR><TD><TD><TD> <TD>Implemented automatic handling of terms involving multiple leads.
<TR><TD><TD><TD> <TD>Corrected a bug in the numerical integration, i.e. checking of the precision of the solution.
<TR>
<TD>1090
<TD>1.3.3
<TD>2006/11/20
<TD>
<TR><TD><TD><TD> <TD>Corrected a bug of non-registering an auxiliary variable in initval assignments.
<TR>
<TD>988
<TD>1.3.2
<TD>2006/10/11
<TD>
<TR><TD><TD><TD> <TD>Corrected a few not-serious bugs: segfault on
some exception, error in parsing large files, error in parsing
matrices with comments, a bug in dynare_simul.m
<TR><TD><TD><TD> <TD>Added the possibility to specify a list of shocks for
which IRFs are calculated
<TR><TD><TD><TD> <TD>Added --order command line switch
<TR><TD><TD><TD> <TD>Added writing two Matlab files for steady state
calcs
<TR><TD><TD><TD> <TD>Implemented optimal policy using keyword
planner_objective and planner_discount
<TR><TD><TD><TD> <TD>Implemented an R interface to Dynare++ algorithms
(Tamas Papp)
<TR><TD><TD><TD> <TD>Highlevel code reengineered to allow for
different model inputs
<TR>
<TD>799
<TD>1.3.1
<TD>2006/06/13
<TD>
<TR><TD><TD><TD> <TD>Corrected a few bugs: in error functions, in the linear algebra module.
<TR><TD><TD><TD> <TD>Updated dynare_simul.
<TR><TD><TD><TD> <TD>Updated the tutorial.
<TR><TD><TD><TD> <TD>Corrected an error in summing up tensors when
setting up the decision rule derivatives. Thanks to Michel
Juillard. The previous version was making deterministic effects of
future volatility smaller than they should be.
<TR>
<TD>766
<TD>1.3.0
<TD>2006/05/22
<TD>
<TR><TD><TD><TD> <TD>The non-linear solver replaced with a new one.
<TR><TD><TD><TD> <TD>The parser and derivator replaced with a new
code. Now it is possible to put expressions in parameters and initval
sections.
<TR>
<TD>752
<TD>1.2.2
<TD>2006/05/22
<TD>
<TR><TD><TD><TD> <TD>Added an option triggering/suppressing IRF calcs.
<TR><TD><TD><TD> <TD>Newton algorithm is now used for fix-point calculations.
<TR><TD><TD><TD> <TD> Vertical narrowing of tensors in Faa Di Bruno
formula to avoid multiplication with zeros.
<TR>
<TD>436
<TD>1.2.1
<TD>2005/08/17
<TD>
<TR><TD><TD><TD> <TD>Faa Di Bruno for sparse matrices optimized. The
implementation now accommodates vertical refinement of function stack
in order to fit a corresponding slice to available memory. In
addition, zero slices are identified. For some problems, this implies
significant speedup.
<TR><TD><TD><TD> <TD>Analytic derivator speedup.
<TR><TD><TD><TD> <TD>Corrected a bug in the threading code. The bug
stayed concealed in Linux 2.4.* kernels, and exhibited in Linux 2.6.*,
which has a different scheduling. This correction also allows using
detached threads on Windows.
<TR>
<TD>410
<TD>1.2
<TD>2005/07/29
<TD>
<TR><TD><TD><TD> <TD>Added Dynare++ tutorial.
<TR><TD><TD><TD> <TD>Changed and enriched contents of MAT-4 output
file.
<TR><TD><TD><TD> <TD>Corrected a bug of wrong variable indexation
resulting in an exception. The error occurred if a variable appeared
at time t-1 or t+1 and not at t.
<TR><TD><TD><TD> <TD>Added Matlab interface, which allows simulation
of a decision rule in Matlab.
<TR><TD><TD><TD> <TD>Got rid of Matrix Template Library.
<TR><TD><TD><TD> <TD>Added checking of model residuals by the
numerical integration. Three methods: checking along simulation path,
checking along shocks, and on ellipse of states.
<TR><TD><TD><TD> <TD>Corrected a bug in calculation of higher moments
of Normal dist.
<TR><TD><TD><TD> <TD>Corrected a bug of wrong drawing from Normal dist
with non-zero covariances.
<TR><TD><TD><TD>
<TD>Added numerical integration module. Product and Smolyak
quadratures over Gauss-Hermite and Gauss-Legendre, and quasi Monte
Carlo.
<TR>
<TD>152
<TD>1.1
<TD>2005/04/22
<TD>
<TR><TD><TD><TD>
<TD>Added a calculation of approximation at a stochastic steady state
(still experimental).
<TR><TD><TD><TD>
<TD>Corrected a bug in Cholesky decomposition of variance-covariance
matrix with off-diagonal elements.
<TR>
<TD>89
<TD>1.01
<TD>2005/02/23
<TD>
<TR><TD><TD><TD>
<TD>Added version printout.
<TR><TD><TD><TD>
<TD>Corrected the bug of multithreading support for P4 HT processors running on Win32.
<TR><TD><TD><TD>
<TD>Enhanced Kronecker product code resulting in approx. 20% speedup.
<TR><TD><TD><TD>
<TD>Implemented vertical stack container refinement, and another
method for sparse folded Faa Di Bruno (both not used yet).
<TR>
<TD>5
<TD>1.0
<TD>2005/02/23
<TD>The first released version.
</TABLE>
</BODY>
</HTML>
EXTRA_DIST = dynare++-ramsey.tex dynare++-tutorial.tex
if HAVE_PDFLATEX
pdf-local: dynare++-ramsey.pdf dynare++-tutorial.pdf
endif
%.pdf: %.tex
$(PDFLATEX) $<
$(PDFLATEX) $<
$(PDFLATEX) $<
CLEANFILES = *.pdf *.log *.aux *.out *.toc
\documentclass[10pt]{article}
\usepackage{array,natbib,times}
\usepackage{amsmath, amsthm, amssymb}
%\usepackage[pdftex,colorlinks]{hyperref}
\begin{document}
\title{Implementation of Ramsey Optimal Policy in Dynare++, Timeless Perspective}
\author{Ondra Kamen\'\i k}
\date{June 2006}
\maketitle
\textbf{Abstract:} This document provides a derivation of Ramsey
optimal policy from timeless perspective and describes its
implementation in Dynare++.
\section{Derivation of the First Order Conditions}
Let us start with an economy populated by agents who take a number of
variables exogenously, or given. These may include taxes or interest
rates for example. These variables can be understood as decision (or control)
variables of the timeless Ramsey policy (or social planner). The agent's
information set at time $t$ includes mass-point distributions of these
variables for all times after $t$. If $i_t$ denotes an interest rate
for example, then the information set $I_t$ includes
$i_{t|t},i_{t+1|t},\ldots,i_{t+k|t},\ldots$ as numbers. In addition
the information set includes all realizations of past exogenous
innovations $u_\tau$ for $\tau=t,t-1,\ldots$ and distributions
$u_\tau\sim N(0,\Sigma)$ for $\tau=t+1,\ldots$. These information sets will be denoted $I_t$.
An information set including only the information on past realizations
of $u_\tau$ and future distributions $u_\tau\sim N(0,\Sigma)$ will
be denoted $J_t$. We will use the following notation for expectations
through these sets:
\begin{eqnarray*}
E^I_t[X] &=& E(X|I_t)\\
E^J_t[X] &=& E(X|J_t)
\end{eqnarray*}
The agents optimize taking the decision variables of the social
planner at $t$ and future as given. This means that all expectations
they form are conditioned on the set $I_t$. Let $y_t$ denote a vector
of all endogenous variables including the planner's decision
variables. Let the number of endogenous variables be $n$. The economy
can be described by $m$ equations including the first order conditions
and transition equations:
\begin{equation}\label{constr}
E_t^I\left[f(y_{t-1},y_t,y_{t+1},u_t)\right] = 0.
\end{equation}
This leaves $n-m$ variables to be determined by the planner; these are
the planner's control variables. The solution of this problem is a
decision rule of the form:
\begin{equation}\label{agent_dr}
y_t=g(y_{t-1},u_t,c_{t|t},c_{t+1|t},\ldots,c_{t+k|t},\ldots),
\end{equation}
where $c$ is a vector of planner's control variables.
Each period the social planner chooses the vector $c_t$ to maximize
his objective such that \eqref{agent_dr} holds for all times following
$t$. This would lead to $n-m$ first order conditions with respect to
$c_t$. These first order conditions would contain unknown derivatives
of endogenous variables with respect to $c$, which would have to be
retrieved from the implicit constraints \eqref{constr} since the
explicit form \eqref{agent_dr} is not known.
The other way to proceed is to assume that the planner is so dumb that
he is not sure which variables are his control variables. So he optimizes with
respect to all $y_t$ given the constraints \eqref{constr}. If the
planner's objective is $b(y_{t-1},y_t,y_{t+1},u_t)$ with a discount rate
$\beta$, then the optimization problem looks as follows:
\begin{align}
\max_{\left\{y_\tau\right\}^\infty_t}&E_t^J
\left[\sum_{\tau=t}^\infty\beta^{\tau-t}b(y_{\tau-1},y_\tau,y_{\tau+1},u_\tau)\right]\notag\\
&\rm{s.t.}\label{planner_optim}\\
&\hskip1cm E^I_\tau\left[f(y_{\tau-1},y_\tau,y_{\tau+1},u_\tau)\right]=0\quad\rm{for\ }
\tau=\ldots,t-1,t,t+1,\ldots\notag
\end{align}
Note two things: First, each constraint \eqref{constr} in
\eqref{planner_optim} is conditioned on $I_\tau$ not $I_t$. This is
very important, since the behaviour of agents at period $\tau=t+k$ is
governed by the constraint using expectations conditioned on $t+k$,
not $t$. The social planner knows that at $t+k$ the agents will use
all information available at $t+k$. Second, the constraints for the
planner's decision made at $t$ include also constraints for agent's
behaviour prior to $t$. This is because the agent's decision rules are
given in the implicit form \eqref{constr} and not in the explicit form
\eqref{agent_dr}.
Using Lagrange multipliers, this can be rewritten as
\begin{align}
\max_{y_t}E_t^J&\left[\sum_{\tau=t}^\infty\beta^{\tau-t}b(y_{\tau-1},y_\tau,y_{\tau+1},u_\tau)\right.\notag\\
&\left.+\sum_{\tau=-\infty}^{\infty}\beta^{\tau-t}\lambda^T_\tau E_\tau^I\left[f(y_{\tau-1},y_\tau,y_{\tau+1},u_\tau)\right]\right],
\label{planner_optim_l}
\end{align}
where $\lambda_t$ is a vector of Lagrange multipliers corresponding to
constraints \eqref{constr}. Note that the multipliers are multiplied
by powers of $\beta$ in order to make them stationary. Taking a
derivative wrt $y_t$ and putting it to zero yields the first order
conditions of the planner's problem:
\begin{align}
E^J_t\left[\vphantom{\frac{\int^(_)}{\int^(\_)}}\right.&\frac{\partial}{\partial y_t}b(y_{t-1},y_t,y_{t+1},u_t)+
\beta L^{+1}\frac{\partial}{\partial y_{t-1}}b(y_{t-1},y_t,y_{t+1},u_t)\notag\\
&+\beta^{-1}\lambda_{t-1}^TE^I_{t-1}\left[L^{-1}\frac{\partial}{\partial y_{t+1}}f(y_{t-1},y_t,y_{t+1},u_t)\right]\notag\\
&+\lambda_t^TE^I_t\left[\frac{\partial}{\partial y_{t}}f(y_{t-1},y_t,y_{t+1},u_t)\right]\notag\\
&+\beta\lambda_{t+1}^TE^I_{t+1}\left[L^{+1}\frac{\partial}{\partial y_{t-1}}f(y_{t-1},y_t,y_{t+1},u_t)\right]
\left.\vphantom{\frac{\int^(_)}{\int^(\_)}}\right]
= 0,\label{planner_optim_foc}
\end{align}
where $L^{+1}$ and $L^{-1}$ are one period lead and lag operators respectively.
Now we have to make a few assertions concerning expectations
conditioned on the different information sets to simplify
\eqref{planner_optim_foc}. Recall the formula for integration through
information on which another expectation is conditioned; this is:
$$E\left[E\left[u|v\right]\right] = E[u],$$
where the outer expectation integrates through $v$. Since $J_t\subset
I_t$, by easy application of the above formula we obtain
\begin{eqnarray}
E^J_t\left[E^I_t\left[X\right]\right] &=& E^J_t\left[X\right]\quad\rm{and}\notag\\
E^J_t\left[E^I_{t-1}\left[X\right]\right] &=& E^J_t\left[X\right]\label{e_iden}\\
E^J_t\left[E^I_{t+1}\left[X\right]\right] &=& E^J_{t+1}\left[X\right]\notag
\end{eqnarray}
Now, the last term of \eqref{planner_optim_foc} needs special
attention. It is equal to
$E^J_t\left[\beta\lambda^T_{t+1}E^I_{t+1}[X]\right]$. If we assume
that the problem \eqref{planner_optim} has a solution, then there is a
deterministic function from $J_{t+1}$ to $\lambda_{t+1}$ and so
$\lambda_{t+1}\in J_{t+1}\subset I_{t+1}$. And the last term is equal
to $E^J_{t}\left[E^I_{t+1}[\beta\lambda^T_{t+1}X]\right]$, which is
$E^J_{t+1}\left[\beta\lambda^T_{t+1}X\right]$. This term can be
equivalently written as
$E^J_{t}\left[\beta\lambda^T_{t+1}E^J_{t+1}[X]\right]$. The reason why
we write the term in this way will be clear later. All in all, we have
\begin{align}
E^J_t\left[\vphantom{\frac{\int^(_)}{\int^(\_)}}\right.&\frac{\partial}{\partial y_t}b(y_{t-1},y_t,y_{t+1},u_t)+
\beta L^{+1}\frac{\partial}{\partial y_{t-1}}b(y_{t-1},y_t,y_{t+1},u_t)\notag\\
&+\beta^{-1}\lambda_{t-1}^TL^{-1}\frac{\partial}{\partial y_{t+1}}f(y_{t-1},y_t,y_{t+1},u_t)\notag\\
&+\lambda_t^T\frac{\partial}{\partial y_{t}}f(y_{t-1},y_t,y_{t+1},u_t)\notag\\
&+\beta\lambda_{t+1}^TE^J_{t+1}\left[L^{+1}\frac{\partial}{\partial y_{t-1}}f(y_{t-1},y_t,y_{t+1},u_t)\right]
\left.\vphantom{\frac{\int^(_)}{\int^(\_)}}\right]
= 0.\label{planner_optim_foc2}
\end{align}
Note that we have not proved that \eqref{planner_optim_foc} and
\eqref{planner_optim_foc2} are equivalent. We proved only that if
\eqref{planner_optim_foc} has a solution, then
\eqref{planner_optim_foc2} is equivalent (and has the same solution).
%%- \section{Implementation}
%%-
%%- The user inputs $b(y_{t-1},y_t,y_{t+1},u_t)$, $\beta$, and agent's
%%- first order conditions \eqref{constr}. The algorithm has to produce
%%- \eqref{planner_optim_foc2}.
%%-
\end{document}
\documentclass[10pt]{article}
\usepackage{array,natbib}
\usepackage{amsmath, amsthm, amssymb}
\usepackage[pdftex,colorlinks]{hyperref}
\begin{document}
\title{DSGE Models with Dynare++. A Tutorial.}
\author{Ondra Kamen\'\i k}
\date{February 2011}
\maketitle
\tableofcontents
\section{Setup}
The Dynare++ setup procedure is pretty straightforward as Dynare++ is included in the Dynare installation
packages which can be downloaded from \url{http://www.dynare.org}. Take the following steps:
\begin{enumerate}
\item Add the {\tt dynare++} subdirectory of the root Dynare installation directory to your
operating system path. This ensures that your OS will find the {\tt dynare++} executable (a quick way to check this is sketched just after this list).
\item If you have MATLAB and want to run custom simulations (see \ref{custom}),
then you need to add to your MATLAB path the {\tt dynare++} subdirectory of
the root Dynare installation directory, and also the directory containing the
\texttt{dynare\_simul\_} MEX file (note the trailing underscore). The easiest
way to add the latter is to run Dynare once in your MATLAB session (even
without giving it any MOD file).
\end{enumerate}
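For instance, on Windows with Dynare installed under a hypothetical {\tt c:\textbackslash dynare} root directory, the MATLAB side of step 2 might look like this (the paths are illustrative only; adjust them to your installation):
{\small
\begin{verbatim}
% hypothetical installation root; adjust to your setup
addpath('c:\dynare\matlab');    % makes the "dynare" command available; running it
                                % once also adds the directory with dynare_simul_
addpath('c:\dynare\dynare++');  % directory with dynare_simul.m
\end{verbatim}
}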
\section{Sample Session}
As an example, let us take a simple DSGE model whose dynamic
equilibrium is described by the following first order conditions:
\begin{align*}
&c_t\theta h_t^{1+\psi} = (1-\alpha)y_t\cr
&\beta E_t\left[\frac{\exp(b_t)c_t}{\exp(b_{t+1})c_{t+1}}
\left(\exp(b_{t+1})\alpha\frac{y_{t+1}}{k_{t+1}}+1-\delta\right)\right]=1\cr
&y_t=\exp(a_t)k_t^\alpha h_t^{1-\alpha}\cr
&k_{t}=\exp(b_{t-1})(y_{t-1}-c_{t-1})+(1-\delta)k_{t-1}\cr
&a_t=\rho a_{t-1}+\tau b_{t-1}+\epsilon_t\cr
&b_t=\tau a_{t-1}+\rho b_{t-1}+\nu_t
\end{align*}
\label{timing}
The timing of this model is that the exogenous shocks $\epsilon_t$,
and $\nu_t$ are observed by agents in the beginning of period $t$ and
before the end of period $t$ all endogenous variables with index $t$
are decided. The expectation operator $E_t$ works over the information
accumulated just before the end of the period $t$ (this includes
$\epsilon_t$, $\nu_t$ and all endogenous variables with index $t$).
The exogenous shocks $\epsilon_t$ and $\nu_t$ are supposed to be
serially uncorrelated with zero means and time-invariant
variance-covariance matrix. In Dynare++, these variables are called
exogenous; all other variables are endogenous. Now we are prepared to
start writing a model file for Dynare++, which is an ordinary text
file and can be created with any text editor.
The model file starts with a preamble declaring endogenous and
exogenous variables, parameters, and setting the values of the
parameters. Note that one can put expressions on the right-hand sides. The
preamble follows:
{\small
\begin{verbatim}
var Y, C, K, A, H, B;
varexo EPS, NU;
parameters beta, rho, alpha, delta, theta, psi, tau;
alpha = 0.36;
rho = 0.95;
tau = 0.025;
beta = 1/(1.03^0.25);
delta = 0.025;
psi = 0;
theta = 2.95;
\end{verbatim}
}
The section setting the values of the parameters is terminated by the
beginning of the {\tt model} section, which states all the dynamic
equations. The timing convention of a Dynare++ model is the same as the
timing of our example model, so we may proceed with writing the model
equations. The time indexes of $c_{t-1}$, $c_t$, and $c_{t+1}$ are
written as {\tt C(-1)}, {\tt C}, and {\tt C(1)} respectively. The {\tt model}
section looks as follows:
{\small
\begin{verbatim}
model;
C*theta*H^(1+psi) = (1-alpha)*Y;
beta*exp(B)*C/exp(B(1))/C(1)*
(exp(B(1))*alpha*Y(1)/K(1)+1-delta) = 1;
Y = exp(A)*K^alpha*H^(1-alpha);
K = exp(B(-1))*(Y(-1)-C(-1)) + (1-delta)*K(-1);
A = rho*A(-1) + tau*B(-1) + EPS;
B = tau*A(-1) + rho*B(-1) + NU;
end;
\end{verbatim}
}
At this point, almost all information that Dynare++ needs has been
provided. Only three things remain to be specified: initial values of
the endogenous variables for the non-linear solver, the variance-covariance
matrix of the exogenous shocks, and the order of the Taylor approximation. Since
the model is very simple, there is a closed form solution for the
deterministic steady state. We use it as initial values for the
non-linear solver. Note that the expressions on the right-hand sides in
the {\tt initval} section can reference values previously calculated. The
remaining portion of the model file looks as follows:
{\small
\begin{verbatim}
initval;
A = 0;
B = 0;
H = ((1-alpha)/(theta*(1-(delta*alpha)
/(1/beta-1+delta))))^(1/(1+psi));
Y = (alpha/(1/beta-1+delta))^(alpha/(1-alpha))*H;
K = alpha/(1/beta-1+delta)*Y;
C = Y - delta*K;
end;
vcov = [
0.0002 0.00005;
0.00005 0.0001
];
order = 7;
\end{verbatim}
}
Note that the order of rows/columns of the variance-covariance matrix
corresponds to the ordering of exogenous variables in the {\tt varexo}
declaration. Since the {\tt EPS} was declared first, its variance is
$0.0002$, and the variance of {\tt NU} is $0.0001$.
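The off-diagonal entry implies a correlation between the two shocks of $0.00005/\sqrt{0.0002\cdot 0.0001}\approx 0.35$.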
Let the model file be saved as {\tt example1.mod}. Now we are prepared
to solve the model. At the operating system command
prompt\footnote{Under Windows it is a {\tt cmd} program, under Unix it
is any shell} we issue a command:
{\small
\begin{verbatim}
dynare++ example1.mod
\end{verbatim}
}
When the program finishes, it produces two output files: a journal
file {\tt example1.jnl} and a Matlab MAT-4 file {\tt example1.mat}. The
journal file contains information about the time, memory and processor
resources needed for all steps of the solution. The output file is more
interesting. It contains various simulation results. It can be loaded
into Matlab or Scilab and examined.%
\footnote{For Matlab {\tt load example1.mat}, for Scilab {\tt
mtlb\_load example1.mat}} The following examples are done in Matlab;
everything would be very similar in Scilab.
Let us first examine the contents of the MAT file:
{\small
\begin{verbatim}
>> load example1.mat
>> who
Your variables are:
dyn_g_1 dyn_i_Y dyn_npred
dyn_g_2 dyn_irfm_EPS_mean dyn_nstat
dyn_g_3 dyn_irfm_EPS_var dyn_shocks
dyn_g_4 dyn_irfm_NU_mean dyn_ss
dyn_g_5 dyn_irfm_NU_var dyn_state_vars
dyn_i_A dyn_irfp_EPS_mean dyn_steady_states
dyn_i_B dyn_irfp_EPS_var dyn_vars
dyn_i_C dyn_irfp_NU_mean dyn_vcov
dyn_i_EPS dyn_irfp_NU_var dyn_vcov_exo
dyn_i_H dyn_mean
dyn_i_K dyn_nboth
dyn_i_NU dyn_nforw
\end{verbatim}
}
All the variables coming from one MAT file share a common prefix. In
this case it is {\tt dyn}, which is the Dynare++ default. The prefix can
be changed, so that multiple results can be loaded into one Matlab
session.
In the default setup, Dynare++ solves the Taylor approximation to the
decision rule and calculates unconditional mean and covariance of the
endogenous variables, and generates impulse response functions. The
mean and covariance are stored in {\tt dyn\_mean} and {\tt
dyn\_vcov}. The ordering of the endogenous variables is given by {\tt
dyn\_vars}.
In our example, the ordering is
{\small
\begin{verbatim}
>> dyn_vars
dyn_vars =
H
A
Y
C
K
B
\end{verbatim}
}
and unconditional mean and covariance are
{\small
\begin{verbatim}
>> dyn_mean
dyn_mean =
0.2924
0.0019
1.0930
0.8095
11.2549
0.0011
>> dyn_vcov
dyn_vcov =
0.0003 0.0006 0.0016 0.0004 0.0060 0.0004
0.0006 0.0024 0.0059 0.0026 0.0504 0.0012
0.0016 0.0059 0.0155 0.0069 0.1438 0.0037
0.0004 0.0026 0.0069 0.0040 0.0896 0.0016
0.0060 0.0504 0.1438 0.0896 2.1209 0.0405
0.0004 0.0012 0.0037 0.0016 0.0405 0.0014
\end{verbatim}
}
The ordering of the variables is also given by indexes starting with
{\tt dyn\_i\_}. Thus the mean of capital can be retrieved as
{\small
\begin{verbatim}
>> dyn_mean(dyn_i_K)
ans =
11.2549
\end{verbatim}
}
\noindent and covariance of labor and capital by
{\small
\begin{verbatim}
>> dyn_vcov(dyn_i_K,dyn_i_H)
ans =
0.0060
\end{verbatim}
}
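Derived statistics can be computed from these objects in the usual way. For instance, the correlation between capital and labor can be obtained as follows (a small illustration, assuming {\tt example1.mat} is still loaded):
{\small
\begin{verbatim}
>> dyn_vcov(dyn_i_K,dyn_i_H) / ...
   sqrt(dyn_vcov(dyn_i_K,dyn_i_K)*dyn_vcov(dyn_i_H,dyn_i_H))
\end{verbatim}
}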
The impulse response functions are stored in matrices as follows
\begin{center}
\begin{tabular}{|l|l|}
\hline
matrix& response to\\
\hline
{\tt dyn\_irfp\_EPS\_mean}& positive impulse to {\tt EPS}\\
{\tt dyn\_irfm\_EPS\_mean}& negative impulse to {\tt EPS}\\
{\tt dyn\_irfp\_NU\_mean}& positive impulse to {\tt NU}\\
{\tt dyn\_irfm\_NU\_mean}& negative impulse to {\tt NU}\\
\hline
\end{tabular}
\end{center}
All shock sizes are one standard error. Rows of the matrices
correspond to endogenous variables, columns correspond to
periods. Thus the capital response to a positive shock to {\tt EPS} can be
plotted as
{\small
\begin{verbatim}
plot(dyn_irfp_EPS_mean(dyn_i_K,:));
\end{verbatim}
}
The data is in units of the respective variables, so in order to plot
the capital response in percentage changes from the decision rule's
fix point (which is a vector {\tt dyn\_ss}), one has to issue the
commands:
{\small
\begin{verbatim}
Kss=dyn_ss(dyn_i_K);
plot(100*dyn_irfp_EPS_mean(dyn_i_K,:)/Kss);
\end{verbatim}
}
The plotted impulse response shows that the model is quite persistent
and that the Dynare++ default for the number of simulated periods is not
sufficient. In addition, the model persistence also calls the default
number of simulations into question. The Dynare++ defaults can be changed when
calling Dynare++; at the operating system's command prompt, we issue the
command:
{\small
\begin{verbatim}
dynare++ --per 300 --sim 150 example1.mod
\end{verbatim}
}
\noindent This sets the number of simulations to $150$ and the number
of periods to $300$ for each simulation giving $45000$ total simulated
periods.
\section{Sample Optimal Policy Session}
\label{optim_tut}
Suppose that one wants to solve the following optimal policy problem
with a timeless perspective.\footnote{See \ref{ramsey} on how to solve the
Ramsey optimality problem within this framework.} The
problem is how to choose capital taxes financing a public
good so as to maximize the agent's utility from the consumption good and the
public good. The problem takes the form:
\begin{align*}
\max_{\{\tau_t\}_{t_0}^\infty}
E_{t_0}\sum_{t=t_0}^\infty &\beta^{t-t_0}\left(u(c_t)+av(g_t)\right)\\
\hbox{subject\ to}&\\
u'(c_t) &=
\beta E_t\left[u'(c_{t+1})\left(1-\delta+f'(k_{t+1})(1-\alpha\tau_{t+1})\right)\right]\\
K_t &= (1-\delta)K_{t-1} + (f(K_{t-1}) - c_{t-1} - g_{t-1})\\
g_t &= \tau_t\alpha f(K_t),\\
\hbox{where\ } t & = \ldots,t_0-1,t_0,t_0+1,\ldots
\end{align*}
$u(c_t)$ is utility from consuming the consumption good, $v(g_t)$ is
utility from consuming the public good, and $f(K_t)$ is a production
function $f(K_t) = Z_tK_t^\alpha$. $Z_t$ is a technology shock modeled
as an AR(1) process. The three constraints come from the first order
conditions of a representative agent, who pursues a
different objective, namely lifetime utility involving only
consumption $c_t$. The representative agent chooses between
consumption and investment. It rents the capital to firms and supplies a
constant amount of labour. All output is paid back to the consumer in the form
of wages and capital rent. Only the latter is taxed. We suppose that
the optimal choice has been taking place since the infinite past and will
be taking place forever. Further, we suppose the same about the
constraints.
Let us choose the following functional forms:
\begin{eqnarray*}
u(c_t) &=& \frac{c_t^{1-\eta}}{1-\eta}\\
v(g_t) &=& \frac{g_t^{1-\phi}}{1-\phi}\\
f(K_t) &=& K_t^\alpha
\end{eqnarray*}
Then the problem can be coded into Dynare++ as follows. We start with
a preamble which states all the variables, shocks and parameters:
{\small
\begin{verbatim}
var C G K TAU Z;
varexo EPS;
parameters eta beta alpha delta phi a rho;
eta = 2;
beta = 0.99;
alpha = 0.3;
delta = 0.10;
phi = 2.5;
a = 0.1;
rho = 0.7;
\end{verbatim}
}
Then we specify the planner's objective and the discount factor in the
objective. The objective is an expression (possibly including also
variable leads and lags), and the discount factor must be one single
declared parameter:
{\small
\begin{verbatim}
planner_objective C^(1-eta)/(1-eta) + a*G^(1-phi)/(1-phi);
planner_discount beta;
\end{verbatim}
}
The model section will contain only the constraints of the social
planner. These are capital accumulation, identity for the public
product, AR(1) process for $Z_t$ and the first order condition of the
representative agent (with different objective).
{\small
\begin{verbatim}
model;
K = (1-delta)*K(-1) + (exp(Z(-1))*K(-1)^alpha - C(-1) - G(-1));
G = TAU*alpha*K^alpha;
Z = rho*Z(-1) + EPS;
C^(-eta) = beta*C(+1)^(-eta)*(1-delta +
exp(Z(+1))*alpha*K(+1)^(alpha-1)*(1-alpha*TAU(+1)));
end;
\end{verbatim}
}
Now we have to provide a good guess for the non-linear solver calculating
the deterministic steady state. The model's steady state has a closed
form solution if the taxes are known. So we provide a guess for the
taxation {\tt TAU} and then use the closed form solution for capital,
public good and consumption:\footnote{The initial guess for the Lagrange
multipliers and some auxiliary variables is calculated automatically. See
\ref{opt_init} for more details.}
{\small
\begin{verbatim}
initval;
TAU = 0.70;
K = ((delta+1/beta-1)/(alpha*(1-alpha*TAU)))^(1/(alpha-1));
G = TAU*alpha*K^alpha;
C = K^alpha - delta*K - G;
Z = 0;
\end{verbatim}
}
Finally, we have to provide the order of approximation, and the
variance-covariance matrix of the shocks (in our case we have only one
shock):
{\small
\begin{verbatim}
order = 4;
vcov = [
0.01
];
\end{verbatim}
}
After this model file has been run, we can load the resulting MAT-file
into the Matlab (or Scilab) and examine its contents:
{\small
\begin{verbatim}
>> load kp1980_2.mat
>> who
Your variables are:
dyn_g_1 dyn_i_MULT1 dyn_nforw
dyn_g_2 dyn_i_MULT2 dyn_npred
dyn_g_3 dyn_i_MULT3 dyn_nstat
dyn_g_4 dyn_i_TAU dyn_shocks
dyn_i_AUX_3_0_1 dyn_i_Z dyn_ss
dyn_i_AUX_4_0_1 dyn_irfm_EPS_mean dyn_state_vars
dyn_i_C dyn_irfm_EPS_var dyn_steady_states
dyn_i_EPS dyn_irfp_EPS_mean dyn_vars
dyn_i_G dyn_irfp_EPS_var dyn_vcov
dyn_i_K dyn_mean dyn_vcov_exo
dyn_i_MULT0 dyn_nboth
\end{verbatim}
}
The data dumped into the MAT-file have the same structure as in the
previous example of this tutorial. The only difference is that
Dynare++ added a few more variables. Indeed:
{\small
\begin{verbatim}
>> dyn_vars
dyn_vars =
MULT1
G
MULT3
C
K
Z
TAU
AUX_3_0_1
AUX_4_0_1
MULT0
MULT2
\end{verbatim}
}
Besides the five variables declared in the model ({\tt C}, {\tt G},
{\tt K}, {\tt TAU}, and {\tt Z}), Dy\-na\-re++ added six more: four Lagrange
multipliers of the four constraints and two auxiliary variables for
shifting in time. See \ref{aux_var} for more details.
The structure and the logic of the MAT-file are the same as if these new six
variables had been declared in the model file, and the file is examined in
the same way.
For instance, let us examine the Lagrange multiplier of the optimal
policy associated with the consumption first order condition. Recall
that the consumers' objective is different from the policy
objective. Therefore, the constraint will be binding and the
multiplier will be non-zero. Indeed, its deterministic steady state,
fix point and mean are as follows:
{\small
\begin{verbatim}
>> dyn_steady_states(dyn_i_MULT3,1)
ans =
-1.3400
>> dyn_ss(dyn_i_MULT3)
ans =
-1.3035
>> dyn_mean(dyn_i_MULT3)
ans =
-1.3422
\end{verbatim}
}
\section{What Dynare++ Calculates}
\label{dynpp_calc}
Dynare++ solves first order conditions of a DSGE model in the recursive form:
\begin{equation}\label{focs}
E_t[f(y^{**}_{t+1},y_t,y^*_{t-1},u_t)]=0,
\end{equation}
where $y$ is a vector of endogenous variables, and $u$ a vector of
exogenous variables. Some of the elements of $y$ can occur at time $t+1$;
these are denoted $y^{**}$. Elements of $y$ occurring at time $t-1$ are denoted
$y^*$. The exogenous shocks are supposed to be serially independent
and normally distributed, $u_t\sim N(0,\Sigma)$.
The solution of this dynamic system is a decision rule
\[
y_t=g(y^*_{t-1},u_t)
\]
Dynare++ calculates a Taylor approximation of this decision rule of a
given order. The approximation takes into account the deterministic
effects of future volatility, so the point about which the Taylor
approximation is taken will be different from the fix point $y$ of the rule,
which satisfies $y=g(y^*,0)$.
The fix point of a rule corresponding to a model with $\Sigma=0$ is
called the {\it deterministic steady state}, denoted $\bar y$. In
contrast to the deterministic steady state, there is no consensus in
the literature on what to call the fix point of the rule corresponding to a
model with non-zero $\Sigma$. I am tempted to call it the {\it stochastic
steady state}; however, it might be confused with the unconditional mean
or with the stationary distribution. So I will use the term {\it fix point} to
avoid confusion.
By default, Dynare++ solves the Taylor approximation about the
deterministic steady state. Alternatively, Dynare++ can split the
uncertainty into a few steps and take smaller steps when calculating the
fix points. This is controlled by the option {\tt --steps}. For a
brief description of the second method, see \ref{multistep_alg}.
\subsection{Decision Rule Form}
\label{dr_form}
In the case of the default solution algorithm (approximation about the
deterministic steady state $\bar y$), Dynare++ calculates the higher
order derivatives of the equilibrium rule to get a decision rule of
the following form. In Einstein notation, it is:
\[
y_t-\bar y = \sum_{i=0}^k\frac{1}{i!}\left[g_{(y^*u)^i}\right]
_{\alpha_1\ldots\alpha_i}
\prod_{j=1}^i\left[\begin{array}{c} y^*_{t-1}-\bar y^*\\ u_t \end{array}\right]
^{\alpha_j}
\]
Note that the ergodic mean will be different from the deterministic
steady state $\bar y$ and thus the deviations $y^*_{t-1}-\bar y^*$ will
not be zero on average. This implies that on average we will commit
larger round-off errors than if we used the decision rule expressed in
deviations from a point closer to the ergodic mean. Therefore, by
default, Dynare++ recalculates this rule and expresses it in
deviations from the stochastic fix point $y$.
\[
y_t-y = \sum_{i=1}^k\frac{1}{i!}\left[\tilde g_{(y^*u)^i}\right]
_{\alpha_1\ldots\alpha_i}
\prod_{j=1}^i\left[\begin{array}{c} y^*_{t-1}-y^*\\ u_t \end{array}\right]
^{\alpha_j}
\]
Note that since the rule is centralized around its fix point, the
first term (for $i=0$) drops out.
Also note that this rule is mathematically equivalent to the rule
expressed in deviations from the deterministic steady state, and still
it is an approximation about the deterministic steady state. The fact
that it is expressed in deviations from a different point should not
be confused with the algorithm in \ref{multistep_alg}.
This centralization can be avoided by invoking the {\tt --no-centralize}
command line option.
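To make the formula concrete, the first-order term of the centralized rule can be evaluated by hand from the objects stored in the MAT file (see \ref{matfile}). The following Matlab sketch is an illustration only: it keeps just the first-order term and assumes that the columns of {\tt dyn\_g\_1} follow {\tt dyn\_state\_vars}, i.e. the {\tt dyn\_npred}+{\tt dyn\_nboth} endogenous states first and then the shocks in the {\tt dyn\_shocks} ordering. It is not a replacement for {\tt dynare\_simul.m}.
{\small
\begin{verbatim}
load example1.mat
nys    = dyn_npred + dyn_nboth;   % number of endogenous state variables
nu     = size(dyn_shocks,1);      % number of exogenous shocks
dstate = zeros(nys+nu,1);         % deviations of [states; shocks] from the fix point
dstate(nys+dyn_i_EPS) = 0.01;     % a one-off technology shock of size 0.01
y1 = dyn_ss + dyn_g_1*dstate;     % first-order step of the centralized rule
y1(dyn_i_Y)                       % output in the period the shock hits
\end{verbatim}
}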
\subsection{Taking Steps in Volatility Dimension}
\label{multistep_alg}
For models where the volatility of the exogenous shocks plays a big
role, the approximation about the deterministic steady state can be poor,
since the equilibrium dynamics can be very different from the dynamics
in the vicinity of perfect foresight (the deterministic steady state).
Therefore, Dynare++ has an option {\tt --steps} triggering a multistep
algorithm. The algorithm splits the volatility into a given number of
steps. Dynare++ attempts to calculate approximations about fix points
corresponding to these levels of volatility. The problem is that if we
want to calculate higher order approximations about fix points
corresponding to volatilities different from zero (just as we do about
the deterministic steady state), then the derivatives of lower orders
depend on derivatives of higher orders with respect to forward looking
variables. The multistep algorithm in each step approximates the
missing higher order derivatives with extrapolations based on the
previous step.
In this way, the approximation of the stochastic fix point and the
derivatives about this fix point are obtained. It is difficult to decide a
priori whether this algorithm yields a better decision
rule. Nothing is guaranteed, and the resulting decision rule should be
checked with numerical integration. See \ref{checks}.
\subsection{Simulating the Decision Rule}
After some form of the decision rule is calculated, it is simulated to
obtain draws from the ergodic (unconditional) distribution of the endogenous
variables. The mean and the covariance are reported. There are two
ways to calculate the mean and the covariance. The first one is to
store all simulated samples and calculate the sample mean and
covariance. The second one is to calculate the mean and the covariance in
real time without storing the simulated samples. The latter case is
described below (see \ref{rt_simul}).
The stored simulated samples are then used for impulse response
function calculations. For each shock, the realized shocks in these
simulated samples (control simulations) are taken, an impulse is
added, and the new realization of shocks is simulated. Then the control
simulation is subtracted from the simulation with the impulse. This is
done for all control simulations and the results are averaged. As a
result, we get the expectation of the difference between paths with the
impulse and without the impulse. In addition, the sample variances are
reported. They might be useful for confidence interval calculations.
For each shock, Dynare++ calculates IRFs for two impulses, positive and
negative. The size of an impulse is one standard error of the respective
shock.
The rest of this subsection is divided into three parts covering
real-time simulations, conditional simulations, and the way
random numbers are generated, respectively.
\subsubsection{Simulations With Real-Time Statistics}
\label{rt_simul}
When one needs to simulate large samples to get a good estimate of the
unconditional mean, simulating the decision rule with statistics
calculated in real time comes in handy. The main reason is that
storing all simulated samples may not fit into the available
memory.
The real-time statistics proceed as follows: We model the ergodic
distribution as a normal distribution $y\sim N(\mu,\Sigma)$. Further,
the parameters $\mu$ and $\Sigma$ are modelled as:
\begin{eqnarray*}
\Sigma &\sim& {\rm InvWishart}_\nu(\Lambda)\\
\mu|\Sigma &\sim& N(\bar\mu,\Sigma/\kappa)
\end{eqnarray*}
This model of $p(\mu,\Sigma)$ has the advantage of conjugacy, i.e. the
prior distribution has the same form as the posterior. This property is
used in the calculation of real-time estimates of $\mu$ and $\Sigma$,
since it suffices to maintain only the parameters of $p(\mu,\Sigma)$
conditional on the draws observed so far. The parameters are: $\nu$,
$\Lambda$, $\kappa$, and $\bar\mu$.
The mean of $\mu,\Sigma|Y$, where $Y$ denotes all the draws (simulated
periods), is reported.
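The exact recursion implemented inside Dynare++ is not exposed to the user. As an illustration of how such real-time updating works, the textbook conjugate update of these parameters for one additional draw $y$ can be sketched in Matlab as follows (illustrative only, not Dynare++ code):
{\small
\begin{verbatim}
function [nu, Lambda, kappa, mubar] = niw_update(nu, Lambda, kappa, mubar, y)
% Textbook normal-inverse-Wishart conjugate update for one new draw y
% (a column vector); illustrative only, not the Dynare++ implementation.
d      = y - mubar;
Lambda = Lambda + (kappa/(kappa+1))*(d*d');   % update the scale matrix first
mubar  = (kappa*mubar + y)/(kappa+1);         % then the location
kappa  = kappa + 1;
nu     = nu + 1;
end
\end{verbatim}
}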
\subsubsection{Conditional Distributions}
\label{cond_dist}
Starting with version 1.3.6, Dynare++ calculates the distributions of
$y_t$ conditional on $y_0=\bar y$, where $\bar y$ is the
deterministic steady state. If triggered, Dynare++ simulates a given
number of samples with a given number of periods, all starting at
the deterministic steady state. Then for each time $t$, the mean
$E[y_t|y_0=\bar y]$ and the variances $E[(y_t-E[y_t|y_0=\bar
y])(y_t-E[y_t|y_0=\bar y])^T|y_0=\bar y]$ are reported.
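For instance, if the model was run with the {\tt --condsim} and {\tt --condper} options, so that {\tt dyn\_cond\_mean} and {\tt dyn\_cond\_variance} (see \ref{matfile}) are present in the MAT file, the conditional mean path of capital with a two standard deviation band can be plotted as follows (a small sketch):
{\small
\begin{verbatim}
cm = dyn_cond_mean(dyn_i_K,:);
cs = sqrt(dyn_cond_variance(dyn_i_K,:));
plot(1:length(cm), cm, 1:length(cm), cm+2*cs, 1:length(cm), cm-2*cs);
\end{verbatim}
}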
\subsubsection{Random Numbers}
\label{random_numbers}
For generating pseudo random numbers, Dynare++ uses the Mersenne
twister by Makoto Matsumoto and Takuji Nishimura. Because of the
parallel nature of Dynare++ simulations, each simulated sample gets
its own instance of the twister. Each such instance is seeded before
the simulations are started. This is to prevent additional randomness
implied by the operating system's thread scheduler from interfering with
the pseudo random numbers.
For seeding the individual instances of the Mersenne twister assigned
to each simulated sample, the system (C library) random generator is
used. These random generators do not usually have very good
properties, but we use them only to seed the Mersenne twister
instances. The user can set the initial seed of the system random
generator and in this way deterministically choose the seeds of all
instances of the Mersenne twister.
In this way, it is guaranteed that two runs of Dynare++
with the same seed will yield the same results regardless of the
operating system's scheduler. The only difference may be caused by
different round-off errors committed when the same set of samples is
summed in a different order (due to the operating system's scheduler).
\subsection{Numerical Approximation Checks}
\label{checks}
Optionally, Dynare++ can run three kinds of checks for Taylor
approximation errors. All three methods numerically calculate
the residual of the DSGE equations
\[
E[f(g^{**}(g^*(y^*,u),u'),g(y^*,u),y^*,u)|y^*,u]
\]
which ideally must be zero for all $y^*$ and $u$. This integral is
evaluated by either a product or a Smolyak rule applied to one-dimensional
Gauss--Hermite quadrature. The user does not need to care about the
choice. The rule yielding the higher quadrature level, with the
number of evaluations below a user given maximum, is selected.
The three methods differ only in the set of $y^*$ and $u$ at which the
residuals are evaluated. These are:
\begin{itemize}
\item The first method calculates the residuals along the shocks for
fixed $y^*$ equal to the fix point. We fix all elements of $u$ at
$0$ except one element, which varies from $-\mu\sigma$ to
$\mu\sigma$, where $\sigma$ is the standard error of the element and
$\mu$ is a user given multiplier. In this way we can see how the
approximation error grows if the fix point is disturbed by a shock of
varying size.
\item The second method calculates the residuals along a simulation
path. A random simulation is run, and at each point the residuals are
reported.
\item The third method calculates the errors on an ellipse of the
state variables $y^*$. The shocks $u$ are always zero. The ellipse is
defined as
\[\{Ax|\; \Vert x\Vert_2=\mu\},\]
where $\mu$ is a user given multiplier, and $AA^T=V$ for $V$ being the
covariance of the endogenous variables based on the first order
approximation. The method calculates the residuals at a low-discrepancy
sequence of points on the ellipse. Both the residuals and the points
are reported.
\end{itemize}
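The results of these checks are written to the MAT file (see \ref{matfile}). For example, assuming the model was solved with {\tt --check S}, the worst equation residual for each checked size of the shock {\tt EPS} can be inspected as follows (a small sketch):
{\small
\begin{verbatim}
>> load example1.mat
>> plot(max(abs(dyn_shock_EPS_errors)));
\end{verbatim}
}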
\section{Optimal Policy with Dynare++}
\label{optim}
Starting with version 1.3.2, Dynare++ is able to automatically
generate and then solve the first order conditions for a given
objective and (possibly) forward looking constraints. Since the
constraints can be forward looking, the use of this feature will
mainly be in optimal policy or control.
The only extra thing which needs to be added to the model file is a
specification of the policy's objective. This is done by two keywords,
which must not be placed before the parameter settings. If the objective is to maximize
$$E_{t_0}\sum_{t=t_0}^\infty\beta^{t-t_0}\left[\frac{c_t^{1-\eta}}{1-\eta}+
a\frac{g_t^{1-\phi}}{1-\phi}\right],$$
then the keywords will be:
{\small
\begin{verbatim}
planner_objective C^(1-eta)/(1-eta) + a*G^(1-phi)/(1-phi);
planner_discount beta;
\end{verbatim}
}
Dynare++ parses the file and, if the two keywords are present, it
automatically derives the first order conditions for the problem. The
first order conditions are put into the form \eqref{focs} and solved. In
this case, the equations in the {\tt model} section are understood as
the constraints (they might come as the first order conditions from the
optimizations of other agents) and their number must be less than the
number of endogenous variables.
This section further describes what the optimal policy first order
conditions look like, then discusses some issues with the initial
guess for the deterministic steady state, and finally describes how to
simulate Ramsey policy within this framework.
\subsection{First Order Conditions}
Mathematically, the optimization problem looks as follows:
\begin{align}
\max_{\left\{y_\tau\right\}^\infty_t}&E_t
\left[\sum_{\tau=t}^\infty\beta^{\tau-t}b(y_{\tau-1},y_\tau,y_{\tau+1},u_\tau)\right]\notag\\
&\rm{s.t.}\label{planner_optim}\\
&\hskip1cm E^I_\tau\left[f(y_{\tau-1},y_\tau,y_{\tau+1},u_\tau)\right]=0\quad\rm{for\ }
\tau=\ldots,t-1,t,t+1,\ldots\notag
\end{align}
where $E^I$ is an expectation operator over an information set including,
besides all the past, all future realizations of the policy's control
variables and the distributions of future shocks $u_t\sim
N(0,\Sigma)$. The expectation operator $E$ integrates over an
information set including only the distributions of $u_t$ (besides the past).
Note that the constraints $f$ hold at all times, and they are
conditioned on the running $\tau$ since the policy maker knows that the
agents at time $\tau$ will use all the information available at
$\tau$.
The maximization problem can be rewritten using Lagrange multipliers as:
\begin{align}
\max_{y_t}E_t&\left[\sum_{\tau=t}^\infty\beta^{\tau-t}b(y_{\tau-1},y_\tau,y_{\tau+1},u_\tau)\right.\notag\\
&\left.+\sum_{\tau=-\infty}^{\infty}\beta^{\tau-t}\lambda^T_\tau E_\tau^I\left[f(y_{\tau-1},y_\tau,y_{\tau+1},u_\tau)\right]\right],
\label{planner_optim_l}
\end{align}
where $\lambda_t$ is a column vector of Lagrange multipliers.
After some manipulations with compounded expectations over different
information sets, one gets the following first order conditions:
\begin{align}
E_t\left[\vphantom{\frac{\int^(_)}{\int^(\_)}}\right.&\frac{\partial}{\partial y_t}b(y_{t-1},y_t,y_{t+1},u_t)+
\beta L^{+1}\frac{\partial}{\partial y_{t-1}}b(y_{t-1},y_t,y_{t+1},u_t)\notag\\
&+\beta^{-1}\lambda_{t-1}^TL^{-1}\frac{\partial}{\partial y_{t+1}}f(y_{t-1},y_t,y_{t+1},u_t)\notag\\
&+\lambda_t^T\frac{\partial}{\partial y_{t}}f(y_{t-1},y_t,y_{t+1},u_t)\notag\\
&+\beta\lambda_{t+1}^TE_{t+1}\left[L^{+1}\frac{\partial}{\partial y_{t-1}}f(y_{t-1},y_t,y_{t+1},u_t)\right]
\left.\vphantom{\frac{\int^(_)}{\int^(\_)}}\right]
= 0,\label{planner_optim_foc2}
\end{align}
where $L^{+1}$ is one period lead operator, and $L^{-1}$ is one period lag operator.
Dynare++ takes input corresponding to \eqref{planner_optim},
introduces the Lagrange multipliers according to
\eqref{planner_optim_l}, and using its symbolic derivator it compiles
\eqref{planner_optim_foc2}. The system \eqref{planner_optim_foc2} with
the constraints from \eqref{planner_optim_l} is then solved in the
same way as the normal input \eqref{focs}.
\subsection{Initial Guess for Deterministic Steady State}
\label{opt_init}
Solving for the deterministic steady state of non-linear dynamic systems is
not trivial, and the first order conditions for optimal policy add
significant complexity. The {\tt initval} section allows the user to input the
initial guess for the non-linear solver. It requires that all user
declared endogenous variables be initialized. However, in most cases,
we have no idea what good initial guesses for the Lagrange
multipliers are.
For this reason, Dynare++ calculates an initial guess of the Lagrange
multipliers using the user provided initial guesses of all other
endogenous variables. It uses the linearity of the Lagrange
multipliers in \eqref{planner_optim_foc2}. In its static form,
\eqref{planner_optim_foc2} looks as follows:
\begin{align}
&\frac{\partial}{\partial y_t}b(y,y,y,0)+
\beta\frac{\partial}{\partial y_{t-1}}b(y,y,y,0)\notag\\
&+\lambda^T\left[\beta^{-1}\frac{\partial}{\partial y_{t+1}}f(y,y,y,0)
+\frac{\partial}{\partial y_{t}}f(y,y,y,0)
+\beta\frac{\partial}{\partial y_{t-1}}f(y,y,y,0)\right]
= 0\label{planner_optim_static}
\end{align}
The user is required to provide an initial guess of all declared
variables (all $y$). Then \eqref{planner_optim_static} becomes an
overdetermined linear system in $\lambda$, which is solved by means of
least squares. The closer the initial guess of $y$ is to the exact
solution, the closer are the Lagrange multipliers $\lambda$.
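Schematically, writing the bracketed Jacobian term in \eqref{planner_optim_static} as an $m\times n$ matrix $F$ (constraints by variables) and the two objective gradient terms as a column vector $c$, the least squares step solves the overdetermined system $F^T\lambda=-c$. In Matlab this amounts to the following (a sketch with hypothetical matrices {\tt F} and {\tt c}; Dynare++ performs this step internally):
{\small
\begin{verbatim}
% F : m-by-n Jacobian term in the square brackets of the static FOCs
% c : n-by-1 gradient of the objective terms
% Backslash on an overdetermined system returns the least squares solution.
lambda = F'\(-c);
\end{verbatim}
}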
The Lagrange multipliers calculated by least squares are not used
if they are set in the {\tt initval} section. In other words, if a
multiplier has been given a value in the {\tt initval} section, then
that value is used, otherwise the calculated value is taken.
For even more difficult problems, Dynare++ generates two Matlab files
calculating the residual of the static system and its derivative. These
can be used with Matlab's {\tt fsolve} or another algorithm to get an
exact solution of the deterministic steady state. See
\ref{output_matlab_scripts} for more details.
Finally, Dynare++ might generate a few auxiliary variables. These are
simple transformations of other variables. They are initialized
automatically and the user usually does not need to care about them.
\subsection{Optimal Ramsey Policy}
\label{ramsey}
Dynare++ solves the optimal policy problem with a timeless
perspective. This means that it assumes that the constraints in
\eqref{planner_optim} are valid from the infinite past to the infinite
future. The Dynare++ calculation of the ergodic distribution then assumes that
the policy has been in place since the infinite past.
If some constraints in \eqref{planner_optim} are forward looking, this
will result in some backward looking Lagrange multipliers. Such
multipliers imply a possibly time inconsistent policy in the states of
the ``original'' economy, since these backward looking multipliers add
new states to the ``optimized'' economy. In this respect, the timeless
perspective means that there is no fixed initial distribution of such
multipliers; instead, their ergodic distribution is taken.
In contrast, Ramsey optimal policy is started at $t=0$. This means
that the first order conditions at $t=0$ are different from the first
order conditions at $t\geq 1$, which are
\eqref{planner_optim_foc2}. However, it is not difficult to verify
that the first order conditions at $t=0$ are in the form of
\eqref{planner_optim_foc2} if all the backward looking Lagrange
multipliers are set to zero at period $-1$, i.e. $\lambda_{-1}=0$.
All in all, the solution of \eqref{planner_optim_foc2} calculated by
Dynare++ can be used as a Ramsey optimal policy solution provided that
all the backward looking Lagrange multipliers are set to zero prior
to the first simulation period. This can be done by setting the
initial state of a simulation path in {\tt dynare\_simul.m}. If this
is applied to the example from \ref{optim_tut}, then we may do the
following at the Matlab prompt:
{\small
\begin{verbatim}
>> load kp1980_2.mat
>> shocks = zeros(1,100);
>> ystart = dyn_ss;
>> ystart(dyn_i_MULT3) = 0;
>> r=dynare_simul('kp1980_2.mat',shocks,ystart);
\end{verbatim}
}
This will simulate the economy as if the policy was introduced at the
beginning and no shocks occurred.
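The returned matrix {\tt r} contains the simulated path. Assuming its rows follow the {\tt dyn\_vars} ordering (so that the {\tt dyn\_i\_} indexes apply), the implied transition of the tax rate can then be plotted as:
{\small
\begin{verbatim}
>> plot(r(dyn_i_TAU,:));
\end{verbatim}
}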
More information on custom simulations can be obtained by typing:
{\small
\begin{verbatim}
help dynare_simul
\end{verbatim}
}
\section{Running Dynare++}
This section deals with Dynare++ input. The first subsection
\ref{dynpp_opts} provides a list of command line options, the next
subsection \ref{dynpp_mod} deals with the format of the Dynare++ model file,
and the last subsection discusses incompatibilities between Matlab Dynare
and Dynare++.
\subsection{Command Line Options}
\label{dynpp_opts}
The calling syntax of the Dynare++ is
{\small
\begin{verbatim}
dynare++ [--help] [--version] [options] <model file>
\end{verbatim}
}
\noindent where the model file must be given as the last token and
must include its extension. The model file may include a path; in this
case, the path is taken relative to the current directory. Note that
the current directory can be different from the location of the {\tt
dynare++} binary.
The options are as follows:
\def\desc#1{\rlap{#1}\kern4cm}
\begin{description}
\item[\desc{\tt --help}] This prints a help message and exits.
\item[\desc{\tt --version}] This prints version information and
exits.
\item[\desc{\tt --per \it num}] This sets the number of simulated
periods to {\it num} in addition to the burn-in periods. This number
is used when calculating the unconditional mean and covariance and for
the IRFs. Default is 100.
\item[\desc{\tt --burn \it num}] This sets the number of initial periods
which should be excluded from the statistics. The burn-in periods are
used to eliminate the influence of the starting point when
calculating ergodic distributions and/or impulse response
functions. The number of simulated periods given by the {\tt --per \it
num} option does not include the number of burn-in
periods. Default is 0.
\item[\desc{\tt --sim \it num}] This sets the number of stochastic
simulations. This number is used when calculating the unconditional mean
and covariance and for the IRFs. The total sample size for the unconditional
mean and covariance is the number of periods times the number of
successful simulations. Note that if a simulation results in {\tt NaN}
or {\tt +-Inf}, then it is thrown away and is not considered for the
mean nor the variance. The same applies to the IRFs. Default is 80.
\item[\desc{\tt --rtsim \it num}] This sets the number of stochastic
simulations whose statistics are calculated in real time. This
number excludes the burn-in periods set by the {\tt --burn \it num}
option. See \ref{rt_simul} for more details. Default is 0, no
simulations.
\item[\desc{\tt --rtper \it num}] This sets a number of simulated
periods per one simulation with real-time statistics to {\it num}. See
\ref{rt_simul} for more details. Default is 0, no simulations.
\item[\desc{\tt --condsim \it num}] This sets a number of stochastic
conditional simulations. See \ref{cond_dist} for more details. Default
is 0, no simulations.
\item[\desc{\tt --condper \it num}] This sets a number of simulated
periods per one conditional simulation. See \ref{cond_dist} for more
details. Default is 0, no simulations.
\item[\desc{\tt --steps \it num}] If the number {\it num} is greater
than 0, this option invokes a multi-step algorithm (see section
\ref{dynpp_calc}), which in the given number of steps calculates fix
points and approximations of the decision rule for increasing
uncertainty. Default is 0, which invokes a standard algorithm for
approximation about deterministic steady state. For more details,
see \ref{multistep_alg}.
\item[\desc{\tt --centralize}] This option causes the resulting
decision rule to be centralized about (in other words: expressed in
deviations from) the stochastic fix point. The centralized decision
rule is mathematically equivalent but has the advantage of yielding
smaller numerical errors on average than the non-centralized decision
rule. By default, the rule is centralized. For more details, see
\ref{dr_form}.
\item[\desc{\tt --no-centralize}] This option causes the
resulting decision rule not to be centralized about (in other words:
expressed in deviations from) the stochastic fix point. By
default, the rule is centralized. For more details, see
\ref{dr_form}.
This option has no effect if the number of steps given by {\tt
--steps} is greater than 0. In this case, the rule is always
centralized.
\item[\desc{\tt --prefix \it string}] This sets a common prefix of
variables in the output MAT file. Default is {\tt dyn}.
\item[\desc{\tt --seed \it num}] This sets an initial seed for the
random generator providing seed to generators for each sample. See
\ref{random_numbers} for more details. Default is 934098.
\item[\desc{\tt --order \it num}] This sets the order of approximation
and overrides the {\tt order} statement in the model file. There is no
default.
\item[\desc{\tt --threads \it num}] This sets the number of parallel
threads. Complex evaluations of the Faa Di Bruno formulas, simulations and
numerical integration can be parallelized, and Dynare++ exploits this
advantage. You have to have hardware support for this, otherwise
there is no gain from the parallelization. As a rule of thumb, set the
number of threads to the number of processors. An exception is a
machine with a Pentium 4 with Hyper Threading (abbreviated HT). This
processor can run two threads concurrently. The same applies to
Dual-Core processors. Since these processors are present in most new
PC desktops/laptops, the default is 2.
\item[\desc{\tt --ss-tol \it float}] This sets the tolerance of the
non-linear solver of deterministic steady state to {\it float}. It is
in $\Vert\cdot\Vert_\infty$ norm, i.e. the algorithm is considered as
converged when a maximum absolute residual is less than the
tolerance. Default is $10^{-13}$.
\item[\desc{\tt --check \it pPeEsS}] This selects types of residual
checking to be performed. See section \ref{checks} for details. The
string consisting of the letters ``pPeEsS'' governs the selection. The
upper-case letters switch a check on, the lower-case letters
off. ``P'' stands for checking along a simulation path, ``E'' stands
for checking on ellipse, and finally ``S'' stands for checking along
the shocks. It is possible to choose more than one type of check. The
default behavior is that no checking is performed.
\item[\desc{\tt --check-evals \it num}] This sets the maximum number of
evaluations per residual. The actual value depends on the selected
algorithm for the integral evaluation. The algorithm can be either
product or Smolyak quadrature and is chosen so that the quadrature
level is maximal while the actual number of evaluations stays below
the given maximum. Default is 1000.
\item[\desc{\tt --check-num \it num}] This sets a number of checked
points in a residual check. One input value $num$ is used for all
three types of checks in the following way:
\begin{itemize}
\item For checks along the simulation, the number of simulated periods
is $10\cdot num$
\item For checks on ellipse, the number of points on ellipse is $10\cdot num$
\item For checks along the shocks, the number of checked points
corresponding to shocks from $0$ to $\mu\sigma$ (see \ref{checks}) is
$num$.
\end{itemize}
Default is 10.
\item[\desc{\tt --check-scale \it float}] This sets the scaling factor
$\mu$ for checking on ellipse to $0.5\cdot float$ and scaling factor
$\mu$ for checking along shocks to $float$. See section
\ref{checks}. Default is 2.0.
\item[\desc{\tt --no-irfs}] This suppresses IRF calculations. Default
is to calculate IRFs for all shocks.
\item[\desc{\tt --irfs}] This triggers IRF calculations. If there are
no shock names following the {\tt --irfs} option, then IRFs for all
shocks are calculated, otherwise see below. Default is to calculate
IRFs for all shocks.
\item[\desc{\tt --irfs \it shocklist}] This triggers IRF calculations
only for the listed shocks. The {\it shocklist} is a space separated
list of exogenous variables for which the IRFs will be
calculated. Default is to calculate IRFs for all shocks.
\end{description}
The following are a few examples:
{\small
\begin{verbatim}
dynare++ --sim 300 --per 50 blah.mod
dynare++ --check PE --check-num 15 --check-evals 500 blah.dyn
dynare++ --steps 5 --check S --check-scale 3 blahblah.mod
\end{verbatim}
}
The first one sets the number of periods for the IRFs to 50, and sets the sample
size for the unconditional mean and covariance calculations to 15000
(300 simulations of 50 periods each). The
second one checks the decision rule along a simulation path having 150
periods and on an ellipse at 150 points, performing at most 500 evaluations
per residual. The third one solves the model in five steps and
checks the rule along all the shocks from $-3\sigma$ to $3\sigma$ in
$2*10+1$ steps (10 for negative, 10 for positive and 1 at zero).
\subsection{Dynare++ Model File}
\label{dynpp_mod}
In its strictest form, Dynare++ solves the following mathematical problem:
\begin{equation}\label{basic_form}
E_t[f(y^{**}_{t+1},y_t,y^*_{t-1},u_t)]=0
\end{equation}
This problem is input either directly, or it is the output of the Dynare++
routines calculating the first order conditions of the optimal policy
problem. In either case, Dynare++ performs the necessary and
mathematically correct substitutions to put the user specified problem
into the form \eqref{basic_form}, which is passed to the Dynare++ solver. The
following discusses a few timing issues:
\begin{itemize}
\item Endogenous variables can occur, starting from version 1.3.4, at
times after $t+1$. If so, an equation containing such an occurrence is
broken into non-linear parts, and new equations and new auxiliary
variables are automatically generated only for the non-linear terms
containing the occurrence. Note that shifting such terms to time $t+1$
may add occurrences of some other variables (involved in the terms) at
times before $t-1$, implying the addition of auxiliary variables to bring
those variables to $t-1$.
\item Variables declared as shocks may also occur at arbitrary
times. If before $t$, additional endogenous variables are used to
bring them to time $t$. If after $t$, then a similar method is used as
for endogenous variables occurring after $t+1$.
\item There is no constraint on variables occurring at both times
$t+1$ (or later) and $t-1$ (or earlier). Virtually all variables can
occur at arbitrary times.
\item Endogenous variables can occur at times before $t-1$. If so,
additional endogenous variables are added for all lags between the
variable and $t-1$.
\item Dynare++ applies the operator $E_t$ to all occurrences at time
$t+1$. The realization of $u_t$ is included in the information set of
$E_t$. See an explanation of Dynare++ timing on page \pageref{timing}.
\end{itemize}
The model equations are formulated in the same way as in Matlab
Dynare. Time indexes different from $t$ are put in round
parentheses in this way: {\tt C(-1)}, {\tt C}, {\tt C(+1)}.
The mathematical expressions can use the following functions and operators:
\begin{itemize}
\item binary {\tt + - * / \verb|^|}
\item unary plus and minus as in {\tt a = -3;} and {\tt a = +3;} respectively
\item unary mathematical functions: {\tt log exp sin cos tan
sqrt}, whe\-re the logarithm has a natural base
\item symbolic differentiation operator {\tt diff(expr,symbol)}, where
{\tt expr} is a mathematical expression and {\tt symbol} is a unary
symbol (a variable or a parameter); for example {\tt
diff(A*K(-1)\verb|^|alpha*L\verb|^|(1-alpha),K(-1))} is internally expanded as
{\tt A*alpha*K(-1)\verb|^|(alpha-1)*L\verb|^|(1-alpha)}
\item unary error function and complementary error function: {\tt erf}
and {\tt erfc} defined as
\begin{eqnarray*}
erf(x) &=& \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}{\rm d}t\\
erfc(x)&=& \frac{2}{\sqrt{\pi}}\int_x^\infty e^{-t^2}{\rm d}t
\end{eqnarray*}
\end{itemize}
The model file can contain user comments. Their usage can be
understood from the following piece of the model file:
{\small
\begin{verbatim}
P*C^(-gamma) = // line continues until semicolon
beta*C(+1)^(-gamma)*(P(+1)+Y(+1)); // asset price
// choose dividend process: (un)comment what you want
Y/Y_SS = (Y(-1)/Y_SS)^rho*exp(EPS);
/*
Y-Y_SS = rho*(Y(-1)-Y_SS)+EPS;
*/
\end{verbatim}
}
\subsection{Incompatibilities with Matlab Dynare}
This section provides a list of incompatibilities between a model file
for Dy\-na\-re++ and Matlab Dynare. These must be considered when a model
file for Matlab Dynare is being migrated to Dynare++. The list is the
following:
\begin{itemize}
\item There is no {\tt periods} keyword.
\item The parameters cannot be lagged or leaded. I think that Matlab
Dynare allows it, but the semantics are the same (a parameter is a
constant).
\item There are no commands like {\tt steady}, {\tt check}, {\tt
simul}, {\tt stoch\_simul}, etc.
\item There are no sections like {\tt estimated\_params}, {\tt
var\_obs}, etc.
\item The variance-covariance matrix of the exogenous shocks is given by
the {\tt vcov} matrix in Dynare++. An example follows. Starting from
version 1.3.5, it is possible for {\tt vcov} to be a positive semi-definite
matrix.
{\small
\begin{verbatim}
vcov = [
0.05 0 0 0;
0 0.025 0 0;
0 0 0.05 0;
0 0 0 0.025
];
\end{verbatim}
}
\end{itemize}
\section{Dynare++ Output}
There are three output files: a data file in MAT-4 format containing
the output data (\ref{matfile}), a journal text file containing
information about the Dynare++ run (\ref{journalfile}), and a dump
file (\ref{dumpfile}). Further, Dynare++ generates two Matlab script
files, which calculate the residual and the first derivative of the
residual of the static system (\ref{output_matlab_scripts}). These are
useful when calculating the deterministic steady state outside
Dynare++.
Note that all output files are created in the current directory of
the Dynare++ process. This can be different from the directory where
the Dynare++ binary is located and different from the directory where
the model file is located.
First of all, we need to understand which variables are automatically
generated by Dynare++.
\subsection{Auxiliary Variables}
\label{aux_var}
Besides the endogenous variables declared in {\tt var} section,
Dynare++ might automatically add the following endogenous variables:
\halign{\vrule width0pt height14pt{\tt #}\hfil & \kern 3mm%
\vtop{\rightskip=0pt plus 5mm\noindent\hsize=9.5cm #}\cr
MULT{\it n}& A Lagrange multiplier of the optimal policy problem
associated with a constraint number {\it n} starting from zero.\cr
AUX\_{\it n1}\_{\it n2}\_{\it n3}& An auxiliary variable associated
with the last term in equation \eqref{planner_optim_foc2}. Since the
term is under $E_{t+k}$, we need the auxiliary variable to be put back
in time. {\it n1} is the variable number, starting from 0 in the declared
order, with respect to which the term was differentiated, {\it n2} is the
number of the constraint starting from 0, and finally {\it n3} is $k$
(the time shift of the term).\cr
{\it endovar}\_p{\it K}& An auxiliary variable for bringing an
endogenous variable {\it endovar} back in time by $K$ periods. The
semantics of this variable is {\tt {\it endovar}\_p{\it K} = {\it
endovar}(+{\it K})}.\cr
{\it endovar}\_m{\it K}& An auxiliary variable for bringing an
endogenous variable {\it endovar} forward in time by $K$ periods. The
semantics of this variable is {\tt {\it endovar}\_m{\it K} = {\it
endovar}(-{\it K})}.\cr
{\it exovar}\_e& An auxiliary endogenous variable made equal to the
exogenous variable to allow for a semantical occurrence of the
exogenous variable at a time other than $t$. The semantics of this
variable is {\tt {\it exovar}\_e = {\it exovar}}.\cr
AUXLD\_{\it n1}\_{\it n2}\_{\it n3}& An auxiliary variable for
bringing a non-linear term containing an occurrence of a variable
after $t+1$ to time $t+1$. {\it n1} is an equation number starting
from 0, {\it n2} is the non-linear sub-term number in the equation
starting from 0. {\it n3} is a time shift. For example, if the first
equation is the following:
\begin{verbatim}
X - Y*W(+1) + W(+2)*Z(+4) = 0;
\end{verbatim}
then it will be expanded as:
\begin{verbatim}
X - Y*W(+1) + AUXLD_0_2_3(+1) = 0;
AUXLD_0_2_1 = W(-1)*Z(+1);
AUXLD_0_2_2 = AUXLD_0_2_1(+1);
AUXLD_0_2_3 = AUXLD_0_2_2(+1);
\end{verbatim}
\cr
}
\subsection{MAT File}
\label{matfile}
The contents of the data file is depicted below. We
assume that the prefix is {\tt dyn}.
\halign{\vrule width0pt height14pt{\tt #}\hfil & \kern 3mm%
\vtop{\rightskip=0pt plus 5mm\noindent\hsize=7.5cm #}\cr
dyn\_nstat& Scalar. A number of static variables
(those occurring only at time $t$).\cr
dyn\_npred & Scalar. A number of variables occurring
at time $t-1$ and not at $t+1$.\cr
dyn\_nboth & Scalar. A number of variables occurring
at $t+1$ and $t-1$.\cr
dyn\_nforw & Scalar. A number of variables occurring
at $t+1$ and not at $t-1$.\cr
dyn\_vars & Column vector of endogenous variable
names in Dy\-na\-re++ internal ordering.\cr
dyn\_i\_{\it endovar} & Scalar. Index of a variable
named {\it endovar} in the {\tt dyn\_vars}.\cr
dyn\_shocks & Column vector of exogenous variable
names.\cr
dyn\_i\_{\it exovar} & Scalar. Index of a shock
named {\it exovar} in the {\tt dyn\_shocks}.\cr
dyn\_state\_vars & Column vector of state variables,
these are stacked variables counted by {\tt dyn\_\-npred}, {\tt
dyn\_\-nboth} and shocks.\cr
dyn\_vcov\_exo & Matrix $nexo\times nexo$. The
variance-covariance matrix of exogenous shocks as input in the model
file. The ordering is given by {\tt dyn\_shocks}.\cr
dyn\_mean & Column vector $nendo\times 1$. The
unconditional mean of endogenous variables. The ordering is given by
{\tt dyn\_vars}.\cr
dyn\_vcov & Matrix $nendo\times nendo$. The
unconditional covariance of endogenous variables. The ordering is given
by {\tt dyn\_vars}.\cr
dyn\_rt\_mean & Column vector $nendo\times 1$. The unconditional mean
of endogenous variables estimated in real-time. See
\ref{rt_simul}. The ordering is given by {\tt dyn\_vars}.\cr
dyn\_rt\_vcov & Matrix $nendo\times nendo$. The unconditional
covariance of endogenous variables estimated in real-time. See \ref{rt_simul}. The
ordering is given by {\tt dyn\_vars}.\cr
dyn\_cond\_mean & Matrix $nendo\times nper$. The rows correspond to
endogenous variables in the ordering of {\tt dyn\_vars}, the columns
to periods. If $t$ is a period (starting with 1), then $t$-th column
is $E[y_t|y_0=\bar y]$. See \ref{cond_dist}.\cr
dyn\_cond\_variance & Matrix $nendo\times nper$. The rows correspond
to endogenous variables in the ordering of {\tt dyn\_vars}, the
columns to periods. If $t$ is a period (starting with 1), then $t$-th
column are the variances of $y_t|y_0=\bar y$. See \ref{cond_dist}.\cr
dyn\_ss & Column vector $nendo\times 1$. The fix
point of the resulting approximation of the decision rule.\cr
dyn\_g\_{\it order} & Matrix $nendo\times ?$. A
derivative of the decision rule of the {\it order} multiplied by
$1/order!$. The rows correspond to endogenous variables in the
ordering of {\tt dyn\_vars}. The columns correspond to a
multidimensional index going through {\tt dyn\_state\_vars}. The data
is folded (all symmetrical derivatives are stored only once).\cr
dyn\_steady\_states & Matrix $nendo\times
nsteps+1$. A list of fix points at which the multi-step algorithm
calculated approximations. The rows correspond to endogenous variables
and are ordered by {\tt dyn\_vars}, the columns correspond to the
steps. The first column is always the deterministic steady state.\cr
dyn\_irfp\_{\it exovar}\_mean & Matrix
$nendo\times nper$. Positive impulse response to a shock named {\it
exovar}. The row ordering is given by {\tt dyn\_vars}. The columns
correspond to periods.\cr
dyn\_irfp\_{\it exovar}\_var & Matrix
$nendo\times nper$. The variances of positive impulse response
functions.\cr
dyn\_irfm\_{\it exovar}\_mean & Same as {\tt
dyn\_irfp\_}{\it exovar}{\tt \_mean} but for a negative impulse.\cr
dyn\_irfm\_{\it exovar}\_var & Same as {\tt
dyn\_irfp\_}{\it exovar}{\tt \_var} but for a negative impulse.\cr
dyn\_simul\_points & A simulation path along which the check was
done. Rows correspond to endogenous variables, columns to
periods. Appears only if {\tt --check P}.\cr
dyn\_simul\_errors & Errors along {\tt
dyn\_simul\_points}. The rows correspond to equations as stated in the
model file, the columns to the periods. Appears only if {\tt --check
P}\cr
dyn\_ellipse\_points & A set of points on the ellipse at which the
approximation was checked. Rows correspond to state endogenous
variables (the upper part of {\tt dyn\_state\_vars}, this means
without shocks), and columns correspond to periods. Appears only if
{\tt --check E}.\cr
dyn\_ellipse\_errors & Errors on the ellipse points {\tt
dyn\_ellipse\_points}. The rows correspond to the equations as stated
in the model file, columns to periods. Appears only if {\tt --check
E}.\cr
dyn\_shock\_{\it exovar}\_errors& Errors along a shock named {\it
exovar}. The rows correspond to the equations as stated in the model
file. There are $2m+1$ columns, the middle column is the error at zero
shock. The columns to the left correspond to negative values, columns
to the right to positive. Appears only if {\tt --check S}.\cr
}
\subsection{Journal File}
\label{journalfile}
The journal file provides information on resource usage during the
run and gives some informative messages. The journal file is a text
file organized in single-line records. The format of the records is
documented in the header of the journal file.
The journal file should be consulted in the following circumstances:
\begin{itemize}
\item Something goes wrong. For example, if a model is not
Blanchard--Kahn stable, then the eigenvalues are dumped to the journal
file.
If the unconditional covariance matrix {\tt dyn\_vcov} is NaN, then
from the journal file you will know that all the simulations had to be
thrown away due to the occurrence of NaN or Inf. This is caused by
non-stationarity of the resulting decision rule.
If Dynare++ crashes, the journal file can be helpful for guessing the
point where it crashed.
\item You are impatient. You might be looking at the journal file
during the run in order to have a better estimate of when
the calculations will finish. In Unix, I use the command {\tt tail -f
blah.jnl}.\footnote{This helps to develop one of the three
programmer's virtues: {\it impatience}. The other two are {\it
laziness} and {\it hubris}; according to Larry Wall.}
\item Heavy swapping. If the physical memory is not
sufficient, the operating system starts swapping memory pages to
disk. If this is the case, the journal file can be consulted for
information on memory consumption and swapping activity.
\item Not sure what Dynare++ is doing. If so, read the journal file,
which contains a detailed record on what was calculated, simulated
etc.
\end{itemize}
\subsection{Dump File}
\label{dumpfile}
The dump file is always created with the suffix {\tt .dump}. It is a
text file which takes the form of a model file. It sets the parameter
values which were used, it has an {\tt initval} section setting the values
which were finally used, and most importantly it has a {\tt model} section with
all equations after all substitutions, together with the first order
conditions of the planner.
The dump file serves debugging purposes, since it contains the
mathematical problem which is being solved by Dynare++.
\subsection{Matlab Scripts for Steady State Calculations}
\label{output_matlab_scripts}
This section describes two Matlab scripts, which are useful when
calculating the deterministic steady state outside Dynare++. The
scripts are created by Dynare++ as soon as an input file is parsed,
that is before any calculations.
The first Matlab script, named {\tt {\it modname}\_f.m}, calculates, for
given parameter values and a given vector of all endogenous variables $y$,
the residual of the static system. Supposing the model is in
the form of \eqref{focs}, the script calculates the vector:
\[
f(y,y,y,0)
\]
The second script, named {\tt {\it modname}\_ff.m}, calculates the matrix:
\[
\frac{\partial}{\partial y}f(y,y,y,0)
\]
Both scripts take two arguments. The first is a vector of parameter
values, ordered as the parameters are declared in the model file. The
second is a vector of all endogenous variables at which the evaluation
is performed. These endogenous variables also include auxiliary
variables automatically added by Dynare++ and Lagrange multipliers if
an optimal policy problem is solved. If no endogenous variable has
been added by Dynare++, then the ordering is the same as the order of
declaration in the model file. If some endogenous variables have been
added, then the ordering can be read from the comments close to the
top of either of the two files.
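For instance, once these two vectors have been constructed, a
candidate steady state can be checked directly. The following is only
a sketch: {\tt modname} stands for the actual model name and the
variable names are arbitrary.
{\small
\begin{verbatim}
% p      : parameter values, ordered as declared in the model file
% y_cand : candidate values for all endogenous variables
r = modname_f(p, y_cand);   % residual of the static system f(y,y,y,0)
J = modname_ff(p, y_cand);  % Jacobian of the static system
max(abs(r))                 % close to zero at a deterministic steady state
\end{verbatim}
}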
For example, if we want to calculate the deterministic steady state of
the {\tt kp1980.dyn} model, we need to do the following:
\begin{enumerate}
\item Run Dynare++ on {\tt kp1980.dyn}. It does not matter whether the
calculation finishes; the important output is the two Matlab scripts
created right at the beginning.
\item Consult the file {\tt kp1980\_f.m}\ to get the ordering of the
parameters and of all endogenous variables.
\item Create a vector {\tt p} with the parameter values in that ordering.
\item Create a vector {\tt init\_y} with the initial guess for the
Matlab solver {\tt fsolve}.
\item Create a simple Matlab function called {\tt kp1980\_fsolve.m}\
returning the residual and the Jacobian:
{\small
\begin{verbatim}
function [r, J] = kp1980_fsolve(p, y)
r = kp1980_f(p, y);
J = kp1980_ff(p, y);
\end{verbatim}
}
\item At the Matlab prompt, run the following (a convergence check is
sketched just after this list):
{\small
\begin{verbatim}
opt=optimset('Jacobian','on','Display','iter');
y=fsolve(@(y) kp1980_fsolve(p,y), init_y, opt);
\end{verbatim}
}
\end{enumerate}
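After {\tt fsolve} returns, it is worth verifying that a steady state
has really been found. A minimal check (the tolerance below is
arbitrary) could be:
{\small
\begin{verbatim}
[y, fval, exitflag] = fsolve(@(y) kp1980_fsolve(p, y), init_y, opt);
if exitflag <= 0 || max(abs(fval)) > 1e-8
    error('fsolve did not converge to a steady state');
end
\end{verbatim}
}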
\subsection{Custom Simulations}
\label{custom}
When a Dynare++ run is finished, it dumps the derivatives of the
calculated decision rule to the MAT file. The derivatives can be used
to reconstruct the decision rule, and custom simulations can then be
run. This is done by the {\tt dynare\_simul.m} M-file in Matlab, which
reads the derivatives and simulates the decision rule with a provided
realization of the shocks.
All the necessary documentation can be viewed by the command:
{\small
\begin{verbatim}
help dynare_simul
\end{verbatim}
}
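As an illustration, a custom simulation could look as follows. This is
only a sketch: the MAT file name, the number of exogenous shocks and
the exact argument list of {\tt dynare\_simul} are assumptions which
should be checked against {\tt help dynare\_simul}.
{\small
\begin{verbatim}
nper   = 100;                % number of simulated periods
nexog  = 2;                  % number of exogenous shocks (model specific)
shocks = randn(nexog, nper); % unit-variance shocks; scale by the model
                             % vcov if needed
% NOTE: argument list assumed here -- check 'help dynare_simul'
sim = dynare_simul('kp1980.mat', shocks);
\end{verbatim}
}
The orientation and ordering of the returned matrix are described in
the same help text.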
\end{document}
RINTERNALS=/usr/share/R/include/
sylvcppsource := $(wildcard ../../sylv/cc/*.cpp)
sylvhsource := $(wildcard ../../sylv/cc/*.h)
sylvobjects := $(patsubst %.cpp, %.o, $(sylvcppsource))
tlcwebsource := $(wildcard ../../tl/cc/*.cweb)
tlcppsource := $(patsubst %.cweb,%.cpp,$(tlcwebsource))
tlhwebsource := $(wildcard ../../tl/cc/*.hweb)
tlhsource := $(patsubst %.hweb,%.h,$(tlhwebsource))
tlobjects := $(patsubst %.cweb,%.o,$(tlcwebsource))
kordcwebsource := $(wildcard ../../kord/*.cweb)
kordcppsource := $(patsubst %.cweb,%.cpp,$(kordcwebsource))
kordhwebsource := $(wildcard ../../kord/*.hweb)
kordhsource := $(patsubst %.hweb,%.h,$(kordhwebsource))
kordobjects := $(patsubst %.cweb,%.o,$(kordcwebsource))
integcwebsource := $(wildcard ../../integ/cc/*.cweb)
integcppsource := $(patsubst %.cweb,%.cpp,$(integcwebsource))
integhwebsource := $(wildcard ../../integ/cc/*.hweb)
integhsource := $(patsubst %.hweb,%.h,$(integhwebsource))
integobjects := $(patsubst %.cweb,%.o,$(integcwebsource))
parserhsource := $(wildcard ../../parser/cc/*.h)
parsercppsource := $(wildcard ../../parser/cc/*.cpp)
utilshsource := $(wildcard ../../utils/cc/*.h)
utilscppsource := $(wildcard ../../utils/cc/*.cpp)
srccpp := dynare3.cpp dynare_model.cpp planner_builder.cpp dynare_atoms.cpp dynare_params.cpp nlsolve.cpp
objects := $(patsubst %.cpp,../../src/%.o,$(srccpp)) \
$(patsubst %.y,%_ll.o,$(wildcard ../../src/*.y)) \
$(patsubst %.lex,%_tab.o,$(wildcard ../../src/*.lex))
PKG_CPPFLAGS= -I../../tl/cc -I../../sylv/cc -I../../kord -I../../src -I../.. -I$(RINTERNALS)
PKG_LIBS= ${LAPACK_LIBS} ${BLAS_LIBS} $(objects) $(kordobjects) $(integobjects) $(tlobjects) ../../parser/cc/parser.a ../../utils/cc/utils.a $(sylvobjects) -lpthread -llapack -lcblas -lf77blas -latlas -lg2c -lstdc++
ifneq ($(LD_LIBRARY_PATH),) # use LD_LIBRARY_PATH from environment
PKG_LIBS := -Wl,--library-path $(LD_LIBRARY_PATH) $(PKG_LIBS)
endif
dynareR.so: dynareR.o
	g++ -shared -o dynareR.so dynareR.o -L/usr/lib/R/lib -lR $(PKG_LIBS)

dynareR.o: dynareR.cpp
	g++ -I$(RINTERNALS) $(PKG_CPPFLAGS) \
		-fpic -g -O2 -c dynareR.cpp -o dynareR.o -DDEBUG

test: test.cpp dynareR.cpp
	g++ -O0 -g -o test test.cpp -DDEBUG $(PKG_LIBS) $(PKG_CPPFLAGS)

test-debug:
	valgrind --leak-check=yes ./test
COMPILING
The makefile for this interface is still preliminary; I will write a decent
one when I have the time. It needs all the compiled files from dynare++,
but it does not know how to build them, so you first need to run make in the
src/ directory and then run make in extern/R.
You need Rinternals.h to build this interface. If you are not using a
prepackaged R on Unix/Linux, you need to adjust the variable RINTERNALS
in the Makefile accordingly.
To compile dynare++, read doc/compiling-notes.txt.
INSTALLATION
Copy the dynareR.r and dynareR.so files to your working directory so that R
can find them.