# Dynare issues
https://git.dynare.org/Dynare/dynare/issues

## Decide on treatment of qz_criterium in estimation with particle filter
https://git.dynare.org/Dynare/dynare/issues/1377 (Johannes Pfeifer, 2019-06-19)

Currently, if a unit root is present, estimation with `order=2` will result in an error. Using `diffuse_filter` will disable the check, but obviously makes no sense for particle filtering. See http://www.dynare.org/phpBB3/viewtopic.php?f=1&t=13126

Milestone: 4.6

## Decide on whether evaluate_smoother (and others) should set M_.params
https://git.dynare.org/Dynare/dynare/issues/1373 (Johannes Pfeifer, 2019-06-19)

This picks up the discussion in https://github.com/DynareTeam/dynare/pull/1372#issuecomment-271336355
I agree with @MichelJuillard that changing `M_.params` in various functions is a bad idea, as it makes it hard to trace what is going on in the mod-file. At the same time, I agree with @rattoma that keeping `M_.params` unchanged when calling `evaluate_smoother` would imply a break with previous versions and therefore a loss of backward compatibility. I could live with that, but @MichelJuillard usually prefers to be cautious in this regard. If we want to preserve backward compatibility, we need to make sure that `shock_decomposition` returns `M_` from `evaluate_smoother` to the base workspace, which was broken by https://github.com/DynareTeam/dynare/commit/2f717b5adc5a87f663c5c080f2963d1f65d1933e
## Decide on how to deal with mistake in manual regarding updated variables
https://git.dynare.org/Dynare/dynare/issues/1366 (Johannes Pfeifer, 2019-06-19)

According to the manual, `smoother` triggers the computation of `oo_.UpdatedVariables`. But one actually needs `filtered_vars`. We can either
1. Correct the description in the manual to reflect the code behavior
2. Flag this as a bug as the code deviates from the manual, and output `oo_.UpdatedVariables` when only the `smoother` option is specified
## Decide on how to deal with mh_recover on Octave
https://git.dynare.org/Dynare/dynare/issues/1332 (Johannes Pfeifer, 2019-06-19)

Various unit tests fail on Octave because the `mh_recover` option does not work properly there: there are differences in setting the random number generator. We can either
- disable the check in the unit test and accept that the behavior of `mh_recover` is different under Octave and Matlab (and then document this)
- or provide an error under Octave when someone tries to use this option
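For context, the recovery logic is fragile only because it relies on re-seeding and replaying the random number stream. A minimal Python sketch (illustrative only, not Dynare code; all names are hypothetical) of an alternative: if the sampler stores the generator state alongside each saved chunk of draws, an interrupted chain can be resumed deterministically regardless of how the platform seeds its generator.

```python
import numpy as np

def run_chain(n_draws, state=None, seed=0):
    """Draw n_draws pseudo-random numbers (stand-in for MCMC draws),
    optionally resuming from a previously saved generator state."""
    rng = np.random.default_rng(seed)
    if state is not None:
        rng.bit_generator.state = state  # resume exactly where we stopped
    draws = rng.standard_normal(n_draws)
    return draws, rng.bit_generator.state

# uninterrupted reference run
full, _ = run_chain(100)

# interrupted run: 60 draws, crash, then recover the remaining 40
part1, saved_state = run_chain(60)
part2, _ = run_chain(40, state=saved_state)
recovered = np.concatenate([part1, part2])

assert np.allclose(full, recovered)  # recovery reproduces the full chain
```

Saving the state itself, rather than re-seeding and fast-forwarding, would sidestep the Octave/Matlab seeding differences entirely.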
Milestone: 4.5

## Decide on whether to save intermediate draws
https://git.dynare.org/Dynare/dynare/issues/1296 (Johannes Pfeifer, 2019-06-19)

With the move to `posterior_sampling_core` we now by default save the MCMC draws every 50 draws into a temporary file, because we by default set
`posterior_sampler_options.save_tmp_file=1;`
I think this is very inefficient and therefore should be 0 by default (same behavior as before the move), with an interface provided to change the option.
@rattoma You added this behavior. Was the reason that `slice` should be treated differently?
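To make the trade-off concrete, here is a minimal Python sketch (illustrative only; the file name and function are hypothetical, not Dynare's actual implementation) of the pattern in question: checkpointing the draws to a temporary file every 50 iterations buys crash recovery at the price of one extra disk write per 50 draws.

```python
import os
import tempfile

import numpy as np

def sample(n_draws, save_every=50, save_tmp_file=True):
    """Toy MCMC loop that optionally checkpoints every `save_every` draws."""
    rng = np.random.default_rng(1)
    draws = np.empty(n_draws)
    tmp_path = os.path.join(tempfile.gettempdir(), "mh_tmp_blck1.npy")
    for i in range(n_draws):
        draws[i] = rng.standard_normal()  # stand-in for one Metropolis step
        if save_tmp_file and (i + 1) % save_every == 0:
            np.save(tmp_path, draws[: i + 1])  # the contested extra I/O
    return draws, tmp_path

draws, tmp_path = sample(120)
checkpoint = np.load(tmp_path)  # last checkpoint holds the first 100 draws
assert checkpoint.shape == (100,)
```

With `save_tmp_file=False` the loop does no I/O at all, which is the pre-move behavior being argued for as the default.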
## Do we need to remove dynamic exception specifications?
https://git.dynare.org/Dynare/dynare/issues/1252 (MichelJuillard, 2019-08-14)

Dynamic exception specifications are now deprecated in C++.
In `mexFunction`, when an unknown exception occurs, Matlab or Octave may crash.
Maybe we should remove them from the `mexFunction` code; I do not see what they contribute.
## External functions and third order perturbation
https://git.dynare.org/Dynare/dynare/issues/1229 (Stéphane Adjemian, 2019-06-19)

It appears that we did not implement the possibility of using external functions when solving DSGE models at third order. I thought that when the derivatives are not provided by the user, Dynare computes them numerically. It seems that this mechanism is not triggered at third order. Is this intentional?
## Allow string arrays to be passed to options
https://git.dynare.org/Dynare/dynare/issues/1210 (Houtan Bastani, 2018-11-09)

Following the discussion in #1199, allow the preprocessor to accept string array values: `['a' 'b' 'c']`
Assignee: Houtan Bastani

## Make output of dsge_var_likelihood accessible
https://git.dynare.org/Dynare/dynare/issues/1063 (Johannes Pfeifer, 2019-06-19)

See the discussions in http://www.dynare.org/phpBB3/viewtopic.php?f=1&t=7315 and http://www.dynare.org/phpBB3/viewtopic.php?f=1&t=2920
Milestone: 5.0

## Save posterior moments even when moments are constant
https://git.dynare.org/Dynare/dynare/issues/1024 (Johannes Pfeifer, 2019-06-19)

In `covariance_mc_analysis.m` and the like, we test whether the moments are constant and, if so, store NaN. I don't see the logic of this: all moments are still well defined (although identical). I propose to get rid of this check and always store the computed moments.
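For reference, the questioned check amounts to the following (hypothetical Python sketch, not the actual `covariance_mc_analysis.m` logic; the function name is made up):

```python
import numpy as np

def store_moment(moment_draws, drop_constant=True):
    """Return NaN when the moment is identical across posterior draws
    (current behavior), or always return the posterior mean (proposal)."""
    m = np.asarray(moment_draws, dtype=float)
    if drop_constant and np.all(m == m[0]):
        return np.nan  # current behavior: constant moment => NaN
    return m.mean()    # proposed behavior: store the moment regardless

assert np.isnan(store_moment([0.3, 0.3, 0.3]))                    # current
assert store_moment([0.3, 0.3, 0.3], drop_constant=False) == 0.3  # proposed
```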
## Add one-sided HP filter
https://git.dynare.org/Dynare/dynare/issues/1012 (Johannes Pfeifer, 2019-06-19)

The interface was already implemented in #1011. Before adding the function, we need to decide whether to add it as a function that operates on regular double data, like `sample_hp_filter`, or as a function on dseries objects, like the `baxter_king_filter.m` of the dseries submodule.
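Whatever interface is chosen, the algorithm itself is simple. A minimal Python sketch (illustrative, not the eventual implementation; it assumes the standard recursive definition of the one-sided filter): for each t, run the two-sided HP filter on the sample up to t only and keep the last trend point, so no future observations enter the estimate at t.

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Two-sided HP filter: the trend solves (I + lam * K'K) tau = y,
    with K the (T-2) x T second-difference matrix."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    K = np.zeros((T - 2, T))
    for i in range(T - 2):
        K[i, i], K[i, i + 1], K[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(T) + lam * K.T @ K, y)
    return trend, y - trend

def one_sided_hp_filter(y, lam=1600.0, min_obs=4):
    """One-sided variant: filter y[0..t] only and keep the last trend
    point, so the estimate at t uses no future data."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    trend = np.full(T, np.nan)
    for t in range(min_obs - 1, T):
        trend[t] = hp_filter(y[: t + 1], lam)[0][-1]
    return trend, y - trend

# sanity check: on a pure linear trend the filter returns a zero cycle
y = np.linspace(0.0, 1.0, 40)
trend, cycle = one_sided_hp_filter(y)
assert np.nanmax(np.abs(cycle)) < 1e-8
```

The sketch works on plain double arrays; wrapping it for dseries objects would only change the interface, not the recursion.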
## Decide upon what to do with stale mode_compute options
https://git.dynare.org/Dynare/dynare/issues/814 (Johannes Pfeifer, 2019-06-19)

In the process of factorizing the optimization calls (see #800), we should clean up. Currently, `mode_compute==2,101,102` are broken/not supported anymore. 2 and 102 are non-existing simulated annealings. I would propose to drop 102 and replace 2 by Matlab's
`simulannealbnd` from the Global Optimization Toolbox. That way we would have at least one simulated annealing available.
I have no clue why we have the undocumented 101, which is `solveopt`, and would suggest dropping it.
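For reference, a bounded simulated annealing of the kind `simulannealbnd` provides can be sketched in a few lines (Python, illustrative only; this is a generic textbook scheme, not Matlab's implementation): Gaussian proposals clipped to the bounds, Metropolis acceptance, geometric cooling.

```python
import numpy as np

def simulated_annealing(f, x0, lb, ub, n_iter=5000, t0=1.0, seed=0):
    """Generic bounded simulated annealing: proposals clipped to [lb, ub],
    Metropolis acceptance rule, geometric cooling schedule."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    best_x, best_f = x.copy(), fx
    temp = t0
    for _ in range(n_iter):
        cand = np.clip(x + temp * rng.standard_normal(len(x)), lb, ub)
        fc = f(cand)
        # accept downhill moves always, uphill moves with Metropolis probability
        if fc < fx or rng.random() < np.exp(-(fc - fx) / max(temp, 1e-300)):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        temp *= 0.995  # geometric cooling
    return best_x, best_f

# minimize a shifted quadratic on [-5, 5]^2; the minimum is at (1, -2)
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
x_opt, f_opt = simulated_annealing(f, [0.0, 0.0], -5.0, 5.0)
assert f_opt < 0.1
```

Replacing the broken `mode_compute=2` with a maintained implementation of this scheme (Matlab's or otherwise) would keep one simulated annealing in the lineup.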
## Fix handling of prefiltering and trends in non_linear_dsge_likelihood
https://git.dynare.org/Dynare/dynare/issues/670 (Johannes Pfeifer, 2019-06-19)

`non_linear_dsge_likelihood` uses
`Y = transpose(DynareDataset.rawdata);`
By accessing `rawdata` instead of `data`, prefiltering is ignored. Moreover, deterministic trends are not subtracted. I am not sure this is on purpose.
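For concreteness, a small Python sketch (hypothetical, not Dynare's dataset object) of what is lost when the raw data are used: prefiltering removes the sample mean, and the deterministic trend must be subtracted before the data enter the likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
trend = 0.01 * np.arange(T)                    # deterministic trend
rawdata = 1.5 + trend + rng.standard_normal(T)

detrended = rawdata - trend                    # subtract the deterministic trend
data = detrended - detrended.mean()            # prefilter: remove the sample mean

# using rawdata instead of data leaves both the mean and the trend in Y
assert abs(data.mean()) < 1e-12
assert abs(rawdata.mean()) > 1.0
```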
Milestone: 4.5

## Rename hessian.m to dyn_hessian.m
https://git.dynare.org/Dynare/dynare/issues/667 (MichelJuillard, 2019-06-19)

In order to avoid name collisions with other Matlab programs/toolboxes.
Milestone: 4.5

## Fix bug in interaction of diffuse filter and stoch_simul
https://git.dynare.org/Dynare/dynare/issues/573 (Johannes Pfeifer, 2019-06-19)

See http://www.dynare.org/phpBB3/viewtopic.php?f=1&t=5248
Milestone: 4.5, assignee: MichelJuillard

## Discuss allowed use of endogenous variables outside of the model block
https://git.dynare.org/Dynare/dynare/issues/534 (Johannes Pfeifer, 2019-06-19)

Consider the mod-file
```
var y, c, k, a, h, b;
varexo e, u;
parameters beta, rho, alpha, delta, theta, psi, tau test;
alpha = 0.36;
rho = 0.95;
tau = 0.025;
beta = 0.99;
delta = 0.025;
psi = 0;
theta = 2.95;
phi = 0.1;
test=y*beta;
model;
c*theta*h^(1+psi)=(1-alpha)*y;
k = beta*(((exp(b)*c)/(exp(b(+1))*c(+1)))
*(exp(b(+1))*alpha*y(+1)+(1-delta)*k));
y = exp(a)*(k(-1)^alpha)*(h^(1-alpha));
k = exp(b)*(y-c)+(1-delta)*k(-1);
a = rho*a(-1)+tau*b(-1) + e;
b = tau*a(-1)+rho*b(-1) + u;
end;
initval;
y = 1.08068253095672;
c = 0.80359242014163;
h = 0.29175631001732;
k = 11.08360443260358;
a = 0;
b = 0;
e = 0;
u = 0;
end;
shocks;
var e; stderr 0.009;
var u; stderr 0.009;
var e, u = phi*0.009*0.009;
end;
stoch_simul;
```
The definition
`test=y*beta;`
results in the preprocessor substituting for the endogenous variable `y` its as yet uncomputed steady state, i.e. 0. Users can thus create a circular problem, with the steady state of `y` depending on `test` and `test` depending on `y`, and would never notice, because the definition does not trigger an error. Would it be possible to block this behavior? If we want to allow users to access the steady state value of endogenous variables outside of the model block, it should be through the `steady_state` operator.
Am I missing something that makes this behavior desirable?
Milestone: 4.5

## Check prior truncation
https://git.dynare.org/Dynare/dynare/issues/525 (Johannes Pfeifer, 2019-06-19)

There are two issues:
a) There are three cases where priors are/should be truncated in Dynare:
1. If the user explicitly specifies this
2. If the prior for a correlation has mass outside [-1,1]
3. If the prior for standard deviations has mass below 0.
#522 suggests issuing a warning in cases 2 and 3. However, the point where truncation is problematic is the computation of marginal data densities. I suggest setting a flag for all three cases and issuing a warning in `marginal_density` if the prior was truncated.
b) I am not sure I understand the prior truncation in case 1 above. The manual says that the prior does not integrate to 1 anymore. This is fine. But in `set_prior.m` we have code like
```
k = find(bayestopt_.pshape == 4);
k1 = find(isnan(bayestopt_.p3(k)));
k2 = find(isnan(bayestopt_.p4(k)));
bayestopt_.p3(k(k1)) = zeros(length(k1),1);
bayestopt_.p4(k(k2)) = Inf(length(k2),1);
for i=1:length(k)
    [bayestopt_.p6(k(i)),bayestopt_.p7(k(i))] = ...
        inverse_gamma_specification(bayestopt_.p1(k(i))-bayestopt_.p3(k(i)),bayestopt_.p2(k(i)),1,0);
    bayestopt_.p5(k(i)) = compute_prior_mode([bayestopt_.p6(k(i)), bayestopt_.p7(k(i)), bayestopt_.p3(k(i))], 4);
end
```
The way I read this, the mean of the prior seems to take the truncation in p3 into account. Thus, instead of fixing the prior distribution according to mean and variance and then truncating it, it seems we are partially setting the mean of the truncated distribution. I am not sure this behavior is desired/expected by the user. We at least need to document what Dynare is doing here.
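To illustrate the distinction in point (b) with simpler arithmetic, consider a normal prior instead of the inverse gamma (Python sketch; the formula is the standard truncated-normal mean, and the numbers are purely illustrative). If the distribution is first fixed by its mean and variance and only then truncated, the effective prior mean shifts; code that instead feeds the truncation point back into the parameterization is doing something different, and the manual should say which one Dynare does.

```python
import math

def truncated_normal_mean(m, s, lower):
    """Mean of N(m, s^2) truncated below at `lower`:
    m + s * phi(a) / (1 - Phi(a)), with a = (lower - m) / s."""
    a = (lower - m) / s
    phi = math.exp(-0.5 * a * a) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(a / math.sqrt(2.0)))
    return m + s * phi / (1.0 - Phi)

# a prior parameterized to have mean 0.5, then truncated below at 0.4:
# the mean of the truncated prior is no longer 0.5 but roughly 0.60
m_trunc = truncated_normal_mean(0.5, 0.2, 0.4)
assert m_trunc > 0.5
```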
## Discuss and document the load_mh_file option
https://git.dynare.org/Dynare/dynare/issues/504 (Johannes Pfeifer, 2019-06-19)

The current behavior of the `load_mh_file` option, which by default recomputes the mode, the Hessian, and the scale factor, seems counterintuitive. Given the fixed seed, one should get the same results, but there is a risk of them changing between the reloaded chain and the new elements to be added, resulting in a chain with differing proposal densities. We should at least document this behavior and potentially reload the mode file by default. See also http://www.dynare.org/phpBB3/viewtopic.php?f=1&t=5051
Milestone: 4.5

## Agree on system for storing results and figures
https://git.dynare.org/Dynare/dynare/issues/406 (Johannes Pfeifer, 2019-06-19)

Currently we don't have a consistent system for saving results and figures. The MS-BVAR code, for example, saves the computation matrices in folders like IRF, Forecast, and Variance_Decomposition, while the corresponding graphs are saved in correspondingly named subfolders of the Output folder. This creates a confusing double structure. I am pretty sure there was some method to this separation, but I am unable to see it.
This treatment also differs from other parts of the code, where results are either stored directly in "Output" or special folders like "gsa" are created at the same level as the Output folder.
In the long run, we might want to agree on a consistent system.
## Add (preprocessor) option for Fernandez-Villaverde et al. (2012) type of IRFs in stoch_simul
https://git.dynare.org/Dynare/dynare/issues/370 (Johannes Pfeifer, 2013-05-11)

A frequent question is how to generate IRFs at `order=3` that look like the ones in Fernandez-Villaverde et al. (2012), "Risk matters". I will add corresponding code over the next two weeks, but a preprocessor option would still be needed. There are two issues that need to be discussed.
1. What should be the naming of the option? I would suggest something like `ergodic_mean_irf` as the IRFs are computed relative to the ergodic mean.
2. Should we allow for flexibility in the number of periods over which to compute the ergodic mean? If yes, we would need another option, `ergodic_mean_periods`.
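For reference, the algorithm behind such IRFs can be sketched generically (Python, illustrative only; the law of motion below is a toy stand-in for a third-order decision rule, and all names are hypothetical): simulate the model stochastically long enough to estimate the ergodic mean, then compute the IRF as the difference between a shocked and an unshocked path both started from that point.

```python
import numpy as np

def simulate(x0, shocks, rho=0.9, kappa=0.05):
    """Toy nonlinear law of motion: x' = rho*x + kappa*tanh(x)**2 + e.
    The bounded nonlinearity makes the ergodic mean differ from zero,
    as in higher-order perturbation solutions."""
    x = np.empty(len(shocks) + 1)
    x[0] = x0
    for t, e in enumerate(shocks):
        x[t + 1] = rho * x[t] + kappa * np.tanh(x[t]) ** 2 + e
    return x

rng = np.random.default_rng(0)

# 1) long stochastic simulation; the ergodic mean is the average of the
#    simulated path after discarding a burn-in
path = simulate(0.0, rng.standard_normal(5000))
x_erg = path[500:].mean()

# 2) IRF relative to the ergodic point: shocked path minus baseline path
horizon = 20
baseline = simulate(x_erg, np.zeros(horizon))
shocked = simulate(x_erg, np.r_[1.0, np.zeros(horizon - 1)])
irf = shocked - baseline

assert irf[0] == 0.0              # both paths start at the ergodic point
assert abs(irf[1] - 1.0) < 1e-12  # impact response equals the impulse
```

The number of simulation periods used in step 1 is exactly what the proposed `ergodic_mean_periods` option would control.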