dynare issues
https://git.dynare.org/Dynare/dynare/-/issues

https://git.dynare.org/Dynare/dynare/-/issues/1389 - Check detrending engine
Johannes Pfeifer, 2019-11-14T17:38:13Z

The mod-file
```
//-----------------------------------------------------------------------//
//---------------------- Declaring parameters ---------------------------//
//-----------------------------------------------------------------------//
parameters delta //depreciation
sigma //intertemporal elasticity
beta //discount factor
alpha //production function parameter
mu //utility parameter
theta //Calvo parameter
epsilon //elasticity
chi //indexation parameter (unused for now)
;
alpha = 0.667;
delta = 0.1;
sigma = 0.25;
beta = 0.96;
mu = 0.2;
theta = 0.5;
epsilon = 15;
chi = 0;
//-----------------------------------------------------------------------//
//----------------------- Declaring variables ---------------------------//
//-----------------------------------------------------------------------//
varexo omega //probability of remaining a worker in the next period
gamma //probability of dying (once retired)
n //population growth
x //rate of technological change
M_d //exogenous money supply
;
var lambda //asset distribution in the economy
pi //}these define the marginal propensity of consumption
eps //}by both retirees and workers (I'm using Gertler's notation)
OMEGA //higher case omega
R //gross interest rate
PSI //auxiliary variable
mc //marginal cost
Pi //inflation
Df //nominal dividends
Pf //nominal firm share price
price_disp //price dispersion index
;
//declaring nonstationary variables
trend_var(growth_factor= (1+x)*(1+n)/(1+n)) X; //technological progress
trend_var(growth_factor= (1+x)*(1+n)/(1+x)) N; //population
var(deflator = X*N)
Y //product
C //consumption
K //financial capital
H //non-financial capital
A //assets
;
var(deflator = X) W; //real wage
var(deflator = 1/(X*N)) P PStar; //price level and optimal price set
var(deflator = (X*N)^(1-epsilon)) g1; //auxiliary Calvo variable
var(deflator = (X*N)^(2-epsilon)) g2; //auxiliary Calvo variable
predetermined_variables K; //timing convention
//-----------------------------------------------------------------------//
//------------------------------- Model ---------------------------------//
//-----------------------------------------------------------------------//
model;
// Consumer side
//1
K(+1) = Y - C + (1 - delta)* K;
//2
(lambda - (1 - omega(+1)))*A = omega(+1)*(1-eps*pi)*lambda(-1)*R*A(-1);
//3
pi = 1 - PSI(+1) * (R(+1) * OMEGA(+1))^(sigma - 1) * beta^sigma * pi/pi(+1);
//4
eps * pi = 1 - PSI(+1) * ((R(+1))^(sigma-1)*beta^sigma*gamma(+1))*(eps*pi)/(eps(+1)*pi(+1));
//5
OMEGA = omega + (1-omega)*eps^(1/(1-sigma));
//6
H = N * W + H(+1)/((1+n(+1))*(1+x(+1))*R(+1)*OMEGA(+1)/omega(+1));
//7
C * (1 + mu^sigma * (R(+1)*Pi(+1)/(R(+1)*Pi(+1)-1))^(sigma-1)) = pi * ((1 + (eps/gamma-1) * lambda(-1)) * R * A(-1) + H(-1));
//8
A = K + 1/R * M_d(+1)/P + Pf/P;
//9
PSI = (1 + ((R*Pi-1)/(R*Pi))^(sigma-1) * mu^sigma)^(-1) / (1 + (((R(+1)*Pi(+1))-1)/(R(+1)*Pi(+1)))^(sigma-1) * mu^sigma)^(-1);
// Firm side
//10
Y = (X * N)^alpha * (K)^(1-alpha)/price_disp;
//11
W = alpha * Y / N * mc;
//12
R = (1 - alpha) * Y / K * mc + 1 - delta;
//13
mc = (1/(1-alpha))^(1-alpha)*(1/alpha)^alpha*(W/X)^(1-alpha)*(R-(1-delta))^alpha;
// Calvo pricing
//14
PStar = epsilon/(epsilon-1) * g1/g2;
//15
g1 = P^epsilon * Y * mc + theta*beta * g1(+1);
//16
g2 = P^(epsilon-1) * Y + theta*beta * g2(+1);
//17
P = (theta * P(-1)^(1-epsilon) + (1-theta) * PStar^(1-epsilon))^(1/(1-epsilon));
//18
price_disp = theta*(PStar/P)^(-epsilon)*(P/P(-1))^(epsilon) + (1-theta)*(P/P(-1))^(epsilon)*price_disp(-1);
//19
Pi = P/P(-1);
// Dividends and share prices
Df = P * Y *(1-mc);
Pf(+1) + Df(+1) = R(+1) * Pf;
end;
//-----------------------------------------------------------------------//
//--------------------------- Initial Values ----------------------------//
//-----------------------------------------------------------------------//
initval;
M_d = 1; P =1;
x = 0.01;
n = 0.01;
omega = 0.97;
gamma = 0.9;
lambda =0.3878;
pi =0.2394;
eps =1.2832;
OMEGA =1.0124;
R =1.3968;
PSI =0.9988;
mc =0.9064;
Y =0.7407;
C =0.6857;
K =0.459;
H =1.4279;
A =1.365;
P =0.972;
PStar =0.9364;
W =0.448;
Pi =0.98;
Df =0.0681;
Pf =0.1732;
price_disp=1.0336;
g1 =0.601;
g2 =0.6877;
end;
model_diagnostics;
steady;
endval;
gamma = 0.94;
end;
steady;
simul(periods=300);
```
from http://www.dynare.org/phpBB3/viewtopic.php?f=1&t=14206 does not run, failing with
```
ERROR: the second-order cross partial of equation 14 w.r.t. trend variable X and endogenous variable PStar is not null.
```
but the relevant equation
```
PStar = epsilon/(epsilon-1) * g1/g2;
```
should have the trends specified (as far as I can see): `g1/g2 = ((X*N)^(1-epsilon))/((X*N)^(2-epsilon)) = (X*N)^(-1)`, which is exactly the trend declared for `PStar`.
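The deflator algebra can be checked numerically; a minimal sketch with arbitrary test values for `X`, `N`, and `epsilon` (the values are not taken from the model, only the exponents matter):

```python
# Numerical check of the trend algebra behind equation 14:
# g1 grows like (X*N)^(1-epsilon), g2 like (X*N)^(2-epsilon),
# so g1/g2 should carry the trend (X*N)^(-1) = 1/(X*N),
# which is the deflator declared for PStar.
X, N, epsilon = 1.7, 1.3, 15  # arbitrary test values

g1_trend = (X * N) ** (1 - epsilon)
g2_trend = (X * N) ** (2 - epsilon)
pstar_trend = 1 / (X * N)     # deflator of PStar: 1/(X*N)

assert abs(g1_trend / g2_trend - pstar_trend) < 1e-12
print("g1/g2 trend matches the PStar deflator")
```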
Milestone: 4.6 (Sébastien Villemot)

https://git.dynare.org/Dynare/dynare/-/issues/1377 - Decide on treatment of qz_criterium in estimation with particle filter
Johannes Pfeifer, 2019-06-19T15:37:45Z

Currently, if a unit root is present, estimation with `order=2` will result in an error. Using `diffuse_filter` will disable the check, but obviously makes no sense for particle filtering. See http://www.dynare.org/phpBB3/viewtopic.php?f=1&t=13126

Milestone: 4.6

https://git.dynare.org/Dynare/dynare/-/issues/1093 - Discuss changing options_.hp_ngrid
Johannes Pfeifer, 2020-01-20T17:41:57Z

The option `options_.hp_ngrid` is now badly named after the introduction of the `bandpass_filter` option. It actually governs the number of points used in the inverse Fourier transform for any filter (see #1011).

I would suggest creating a new option `ifft_points` in the options structure and the preprocessor. For backward compatibility, we should keep `hp_ngrid` in the preprocessor, but have it now map into `options_.ifft_points`.
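The suggested backward-compatibility mapping could look like this sketch, with a Python dict standing in for the `options_` structure (the helper name `set_filter_options` is hypothetical; `hp_ngrid` and `ifft_points` are the names proposed above):

```python
# Sketch of the proposed deprecation mapping: the legacy hp_ngrid
# option is still accepted, but is stored under the new name
# ifft_points, so downstream code only ever reads ifft_points.
def set_filter_options(options, **user_opts):
    for name, value in user_opts.items():
        if name == "hp_ngrid":           # legacy spelling, kept for compatibility
            options["ifft_points"] = value
        else:
            options[name] = value
    return options

options_ = {"ifft_points": 512}          # hypothetical default
set_filter_options(options_, hp_ngrid=2048)
print(options_["ifft_points"])           # old name transparently mapped
```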
Milestone: 4.6 (Sébastien Villemot)

https://git.dynare.org/Dynare/dynare/-/issues/1775 - Document behavior of smoother2histval (or change it)
Johannes Pfeifer, 2021-02-18T10:08:24Z

From what I can see, the `smoother` will only store the `M_.orig_endo_nbr` variables for Bayesian estimation. So calling `smoother2histval` will "forget" about the auxiliary variables we introduce. Thus, by convention, they will be initialized at their steady state, which differs from the description in the manual that it
>will use these values to construct initial conditions (as if they had been manually entered through histval).
In case of `histval`, the non-mentioned variables would be set to 0.
In contrast, for ML it will work.

Milestone: 4.7

https://git.dynare.org/Dynare/dynare/-/issues/1770 - Document optimizers allowing analytic_derivation and fix bugs
Johannes Pfeifer, 2021-02-18T15:39:35Z

- [x] The manual does not state which optimizers employ the analytic gradient. It should be at least `fmincon` (1), `csminwel` (4), `newrat` (5).
- [x] `csminwel` and `newrat` are the two optimizers not shipped with e.g. Matlab. They rely on calls to
```[~,cost_flag,g1] = penalty_objective_function(x1,fcn,penalty,varargin{:});```
where the third output argument is the gradient. Within `penalty_objective_function` we then have
```
[fval, info, exit_flag, arg1, arg2] = fcn(x, varargin{:});
```
where the gradient `arg1` is the fourth and the Hessian `arg2` the fifth output argument of the underlying objective function. For example
```
function [fval,info,exit_flag,DLIK,Hess,SteadyState,trend_coeff,Model,DynareOptions,BayesInfo,DynareResults] = dsge_likelihood(xparam1,DynareDataset,DatasetInfo,DynareOptions,Model,EstimatedParameters,BayesInfo,BoundsInfo,DynareResults,derivatives_info)
```
This interface creates a problem for Matlab optimizers like `fmincon`, which expect the Jacobian as second and the Hessian as the third function output. Neither the underlying objective nor `penalty_objective_function` conform to this convention. The question is how to address this issue. There are two ways:
1. Add a wrapper function along the line of `penalty_objective_function`, which introduces another layer but would be quite easy to implement.
2. Change the output order of the objective function. This would be cleaner and more efficient, but would require quite massive changes in the code-base
- [x] The current implementation of `analytic_derivation` for `fmincon` in `dynare_minimize_objective` is buggy as it considers the second output `info` to be the gradient.
- [ ] It's not clear whether the treatment of the analytic Jacobian in case of the penalty approach is correct, as the Jacobian does not take the penalty into account. We need to check whether these cases are filtered out via `cost_flag`.
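Option 1 above (a wrapper layer) can be sketched as follows. This is an illustrative Python stand-in, not the Dynare code: the objective mimics the `dsge_likelihood` output order (gradient 4th, Hessian 5th), and the wrapper reorders outputs to the `(fval, gradient, hessian)` convention that `fmincon`-style callers expect.

```python
# Sketch of option 1: wrap an objective that returns the gradient as
# its 4th output so that a caller expecting (fval, gradient, hessian)
# sees the conventional ordering.
def dsge_like_objective(x):
    # stand-in objective: (fval, info, exit_flag, gradient, hessian)
    fval = x[0] ** 2 + x[1] ** 2
    grad = [2 * x[0], 2 * x[1]]
    hess = [[2, 0], [0, 2]]
    return fval, 0, 1, grad, hess

def fmincon_style_wrapper(x):
    """Reorder outputs to the (fval, gradient, hessian) convention."""
    fval, _info, _exit_flag, grad, hess = dsge_like_objective(x)
    return fval, grad, hess

fval, grad, hess = fmincon_style_wrapper([1.0, 2.0])
print(fval, grad)  # 5.0 [2.0, 4.0]
```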
Milestone: 4.7

https://git.dynare.org/Dynare/dynare/-/issues/1713 - Decide minimal MATLAB version requirement for Dynare 4.7
Sébastien Villemot, 2020-09-03T14:45:14Z

We need to decide what the minimal version of MATLAB required to run Dynare 4.7 will be.
For Dynare 4.6, our policy has been to support MATLAB versions that are at most 10 years old.
Assuming that Dynare 4.7 is released in 2021, keeping the same policy would imply raising the minimal MATLAB version to R2011a or R2011b (depending on the exact release date).
But we could also decide to change our policy and support fewer versions. For example, we could go for a 5-year window, which would imply R2016a or R2016b. Or we could choose something between 5 and 10 years.
Relevant to this discussion is the list of [MATLAB incompatibilities across versions](https://git.dynare.org/Dynare/dynare/-/wikis/MATLAB-Versions). Here are the main benefits that certain requirements would bring:
- *R2013a*: we could get rid of the hack needed to support `intersect(…, 'stable')`
- *R2014a*: we could easily install the minimal required MATLAB version on modern GNU/Linux systems (currently we need a hack to create an artificial `eth0` device with the correct MAC address)
- *R2016a*: we could drop the support for 32-bit versions, which would halve the size of the Windows installer and simplify the build process
- *R2016b*: we could use automatic broadcasting in many places, instead of the obscure `bsxfun` syntax
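For illustration, the difference is between explicitly expanding a singleton dimension (what `bsxfun` does) and letting the language do it implicitly (what `X - mean(X)` does in MATLAB R2016b and later). A pure-Python sketch of the operation behind many `bsxfun(@minus, X, mean(X))` calls, demeaning each column of a matrix:

```python
# Demean each column of a matrix: bsxfun-era MATLAB code must expand
# the row of column means to the full matrix shape by hand; implicit
# broadcasting does that expansion automatically. Here the expansion
# is spelled out with zip, mirroring what bsxfun does internally.
X = [[1.0, 10.0], [3.0, 30.0], [5.0, 20.0]]
col_means = [sum(col) / len(col) for col in zip(*X)]  # [3.0, 20.0]

# "bsxfun" spelling: virtual expansion of the means row, elementwise minus
demeaned = [[x - m for x, m in zip(row, col_means)] for row in X]

print(demeaned)  # [[-2.0, -10.0], [0.0, 10.0], [2.0, 0.0]]
```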
Milestone: 4.7 (Sébastien Villemot)

https://git.dynare.org/Dynare/dynare/-/issues/1332 - Decide on how to deal with mh_recover on Octave
Johannes Pfeifer, 2019-06-19T15:37:45Z

Various unit tests fail on Octave because the `mh_recover` option does not work properly there: there are differences in setting the random number generator. We can either
- disable the check in the unit test and accept that the behavior of `mh_recover` is different under Octave and Matlab (and then document this)
- or provide an error under Octave when someone tries to use this option.

Milestone: 4.5

https://git.dynare.org/Dynare/dynare/-/issues/670 - Fix handling of prefiltering and trends in non_linear_dsge_likelihood
Johannes Pfeifer, 2019-06-19T15:38:08Z

`non_linear_dsge_likelihood` uses
`Y = transpose(DynareDataset.rawdata);`
By accessing `rawdata` instead of `data`, prefiltering is ignored. Moreover, deterministic trends are not subtracted. I am not sure this is on purpose.
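To see why this matters: prefiltering means demeaning the observables before the likelihood is computed, so the raw and the prefiltered series differ by the sample mean. A toy sketch with made-up data (the variable names only mimic the dataset fields):

```python
# Toy illustration of the rawdata-vs-data discrepancy: with
# prefiltering, the likelihood should see demeaned observables, but
# reading rawdata hands the filter the untransformed series.
rawdata = [2.5, 1.5, 3.0, 1.0]                # made-up observable
mean = sum(rawdata) / len(rawdata)            # 2.0
data = [y - mean for y in rawdata]            # what prefiltering produces

print(rawdata)  # the series the code currently uses
print(data)     # the series it should use when prefiltering is on
```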
Milestone: 4.5

https://git.dynare.org/Dynare/dynare/-/issues/667 - rename hessian.m in dyn_hessian.m
MichelJuillard, 2019-06-19T15:38:08Z

In order to avoid a name collision with other Matlab programs/toolboxes.

Milestone: 4.5

https://git.dynare.org/Dynare/dynare/-/issues/573 - Fix bug in interaction of diffuse filter and stoch_simul
Johannes Pfeifer, 2019-06-19T15:38:12Z

See http://www.dynare.org/phpBB3/viewtopic.php?f=1&t=5248

Milestone: 4.5 (MichelJuillard)

https://git.dynare.org/Dynare/dynare/-/issues/534 - Discuss allowed use of endogenous variables outside of model-block
Johannes Pfeifer, 2019-06-19T15:38:14Z

Consider the mod-file
```
var y, c, k, a, h, b;
varexo e, u;
parameters beta, rho, alpha, delta, theta, psi, tau, test;
alpha = 0.36;
rho = 0.95;
tau = 0.025;
beta = 0.99;
delta = 0.025;
psi = 0;
theta = 2.95;
phi = 0.1;
test=y*beta;
model;
c*theta*h^(1+psi)=(1-alpha)*y;
k = beta*(((exp(b)*c)/(exp(b(+1))*c(+1)))
*(exp(b(+1))*alpha*y(+1)+(1-delta)*k));
y = exp(a)*(k(-1)^alpha)*(h^(1-alpha));
k = exp(b)*(y-c)+(1-delta)*k(-1);
a = rho*a(-1)+tau*b(-1) + e;
b = tau*a(-1)+rho*b(-1) + u;
end;
initval;
y = 1.08068253095672;
c = 0.80359242014163;
h = 0.29175631001732;
k = 11.08360443260358;
a = 0;
b = 0;
e = 0;
u = 0;
end;
shocks;
var e; stderr 0.009;
var u; stderr 0.009;
var e, u = phi*0.009*0.009;
end;
stoch_simul;
```
The definition
`test=y*beta;`
results in the preprocessor substituting the yet-uncomputed steady state of the endogenous variable `y`, i.e. 0. Users can thus create a circular problem, with the steady state of `y` depending on `test` and `test` depending on `y`, and would never notice it, because the definition does not result in an error. Would it be possible to block this behavior? If we want to allow users to access the steady-state value of endogenous variables outside of the model block, it should be through the `steady_state` operator.
Or do I miss something that makes this behavior desirable?
Milestone: 4.5

https://git.dynare.org/Dynare/dynare/-/issues/504 - Discuss and document the load_mh_file option
Johannes Pfeifer, 2019-06-19T15:38:16Z

The current behavior of the `load_mh_file` option, which recomputes the mode, the Hessian, and the scale factor by default, seems counterintuitive. Given the fixed seed, one should get the same results, but there is the risk of them changing between the reloaded chain and the new elements to be added, and thus of having a chain with differing proposal densities. We should at least document this behavior and potentially reload the mode-file by default. See also http://www.dynare.org/phpBB3/viewtopic.php?f=1&t=5051

Milestone: 4.5

https://git.dynare.org/Dynare/dynare/-/issues/332 - Add functionality to compute function on posterior
Johannes Pfeifer, 2015-10-12T05:18:33Z

Currently, it is very hard to compute a user-defined function on the posterior draws. A user asked whether we could add a functionality where the user specifies a function name that is executed on each posterior draw when we compute posterior moments. For such a function call, we would need to clearly specify the interface. We might want to provide `M_`, `oo_`, `bayestopt_`, `estim_params_`, and the current parameter draw as generic inputs (and nothing else). The tricky part consists of specifying the output arguments and making sure we can save them. One way would be to accept only one output argument and save it to the ii-th element of a cell array. By putting multiple outputs into a structure, the user could still effectively save multiple outputs while keeping a clearly specified interface for us.
Upon going over all draws, we would just save the cell array to the disk. It would be up to the user to make sense of the saved stuff.
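The proposed interface could be sketched like this (hypothetical names throughout; a Python list stands in for the cell array, `None` for the Dynare structures):

```python
# Sketch of the proposed feature: run a user-supplied function on each
# posterior draw and store its single output per draw. Returning a
# structure-like dict lets the user bundle several results at once.
def user_function(M_, oo_, bayestopt_, estim_params_, draw):
    # user code; returns exactly one output
    return {"sum": sum(draw), "max": max(draw)}

posterior_draws = [[1.0, 2.0], [2.0, 3.0], [4.0, 5.0]]  # made-up draws
results = []  # stands in for the cell array saved to disk
for draw in posterior_draws:
    results.append(user_function(None, None, None, None, draw))

print(results[2]["sum"])  # 9.0
```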
Milestone: 4.5

https://git.dynare.org/Dynare/dynare/-/issues/314 - Should correlation field of oo_.PosteriorTheoreticalMoments be renamed to autocorrelation?
Sébastien Villemot, 2013-03-27T13:13:12Z

The current naming is confusing…

Milestone: 4.4

https://git.dynare.org/Dynare/dynare/-/issues/1730 - Consider saving results using -v7.3 flag
Johannes Pfeifer, 2020-11-13T13:08:07Z

We may want to consider saving the results of a Dynare run using the `-v7.3` flag, which is available since Matlab R2006b. That would help get around occasional issues with the 2GB file limit otherwise present.

https://git.dynare.org/Dynare/dynare/-/issues/1710 - num_procs doesn't exist in Matlab R2019b anymore
MichelJuillard, 2020-02-22T17:49:35Z

There is no function ``num_procs`` in Matlab release 2019b.
``num_procs`` is used in ``default_option_values.m`` to initialize number of threads for ``kronecker.sparse_hessian_times_B_kronecker_C``, ``perfect_foresight_problem`` and ``k_order_perturbation``
A possible alternative would be an undocumented Matlab feature: ``feature('numcores')``
Undocumented Matlab features are discussed in this old document: http://undocumentedmatlab.com/articles/undocumented-feature-function/, but the feature is still working.
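The general shape of such a replacement is a guarded capability probe with a safe fallback; sketched here in Python rather than MATLAB (`os.cpu_count` is the stdlib analogue of the core-count query, while the MATLAB side would try `feature('numcores')`; the function name is hypothetical):

```python
# Sketch of a guarded capability probe: ask the platform for the core
# count, fall back to a safe default of 1 when the probe fails or
# returns nothing, mirroring what a feature('numcores')-based fallback
# would have to do around an undocumented call.
import os

def default_thread_count():
    try:
        n = os.cpu_count()  # may return None on exotic platforms
    except Exception:
        n = None
    return n if n else 1

print(default_thread_count())
```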
https://git.dynare.org/Dynare/dynare/-/issues/1414 - command options should be made local, and a new syntax should provide persistent options
Houtan Bastani, 2020-05-07T17:45:59Z

Allow users the possibility to bypass the current situation, where an option set in one command is perpetuated into other commands when the user doesn't explicitly pass the option again. E.g., in the following case, the second call to `command` will have options 1, 2, and 3 set even though only 1 and 3 were passed:
```
command(option1, option2);
command(option1, option3);
```
Introduce a new syntax such as
```
command(option1, option2);
command!(option1, option3);
```
which would tell the preprocessor to reset all command-specific options to their defaults before writing output. To do this, every command's options must be local to a substructure of `options_` (i.e. `options_.command.option1`, `options_.command.option2`, etc.).
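The proposed semantics can be sketched as follows (hypothetical Python stand-in; the `command!(...)` variant is represented by a `reset` flag):

```python
# Sketch of the proposed semantics: by default, options set in one call
# persist into the next (current behavior); a command! variant would
# first reset the command's options to their defaults.
DEFAULTS = {"option1": False, "option2": False, "option3": False}
state = dict(DEFAULTS)

def command(reset=False, **opts):
    global state
    if reset:                    # the proposed command!(...) behavior
        state = dict(DEFAULTS)
    state.update(opts)
    return dict(state)

first = command(option1=True, option2=True)
sticky = command(option1=True, option3=True)          # option2 leaks in
clean = command(reset=True, option1=True, option3=True)

print(sticky["option2"], clean["option2"])  # True False
```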
https://git.dynare.org/Dynare/dynare/-/issues/1373 - Decide on whether evaluate_smoother (and others) should set M_.params
Johannes Pfeifer, 2019-06-19T15:37:45Z

This picks up the discussion in https://github.com/DynareTeam/dynare/pull/1372#issuecomment-271336355
I agree with @MichelJuillard that changing `M_.params` in various functions is a bad idea, as it makes it hard to trace what was going on in the mod-file. At the same time, I agree with @rattoma that keeping `M_.params` unchanged when calling `evaluate_smoother` would imply a break with previous versions and therefore a loss of backward compatibility. I could live with that, but usually @MichelJuillard prefers to be cautious in that regard. If we want to preserve backward compatibility, we need to make sure that `shock_decomposition` returns `M_` from `evaluate_smoother` to the base workspace, which was broken by https://github.com/DynareTeam/dynare/commit/2f717b5adc5a87f663c5c080f2963d1f65d1933e

https://git.dynare.org/Dynare/dynare/-/issues/1366 - Decide on how to deal with mistake in manual regarding updated variables
Johannes Pfeifer, 2019-06-19T15:37:45Z

According to the manual, `smoother` triggers the computation of `oo_.UpdatedVariables`. But one actually needs `filtered_vars`. We can either
1. Correct the description in the manual to reflect the code behavior
2. Flag this as a bug, as the code deviates from the manual, and output `oo_.UpdatedVariables` when only the `smoother` option is specified.

https://git.dynare.org/Dynare/dynare/-/issues/1296 - Decide on whether to save intermediate draws
Johannes Pfeifer, 2019-06-19T15:37:47Z

With the move to `posterior_sampling_core` we now by default save the MCMC draws every 50 draws into a temporary file, because we by default set
`posterior_sampler_options.save_tmp_file=1;`
I think this is very inefficient and therefore should be 0 by default (same behavior as before the move), with an interface provided to change the option.
@rattoma You added this behavior. Was the reason that `slice` should be treated differently?