dynare issues (https://git.dynare.org/Dynare/dynare/issues)

Issue #1356: introduce posterior_nograph option (Marco Ratto, updated 2019-06-19)
https://git.dynare.org/Dynare/dynare/issues/1356

I think it would be useful to have an option like:
`options_.posterior_nograph`
with a default value of 0, like the standard `nograph` option.
This option would allow performing all possible posterior subdraws for IRFs, smoothed variables, etc., while avoiding the thousands of useless graphs that are currently produced [for large models at least].
I think it would also be useful to keep this option distinct from `options_.nograph`, since the latter would also suppress the prior/posterior plots of parameter estimates, which are required in any case.
By the way, `pm3` and `posterior_irf` do not seem to honor the `nograph` option: plots are made in any case. So at the very least we should make sure `nograph` works properly.
If you agree, I would make the appropriate changes to the matlab routines `pm3` and `posterior_irf` using `posterior_nograph` [the preprocessor could then be updated with this extra option].
Milestone: 4.5

Issue #1355: Allow adding auxiliary variables like Ramsey multipliers to var_list_ (Johannes Pfeifer, updated 2019-06-19)
https://git.dynare.org/Dynare/dynare/issues/1355

The auxiliary variables are endogenous variables like every other variable. A call like
`ramsey_policy(instruments=(i),irf=13,planner_discount=betta,periods=200) x pi MULT_1;`
would be sufficient to display IRFs for multiplier 1. However, the preprocessor does not allow adding `MULT_1` to the variable list, because:
`Unknown symbol: MULT_1`
We should allow adding any variable present in `M_.endo_names` to the `var_list_`. @houtanb Could you do this, please?
Related to http://www.dynare.org/phpBB3/viewtopic.php?f=1&t=12117
Milestone: 4.5 / Assignee: Houtan Bastani

Issue #1349: set number of threads per job in parallel execution (Marco Ratto, updated 2019-06-19)
https://git.dynare.org/Dynare/dynare/issues/1349

Given the large number of CPUs now available on desktop PCs, the current node option `singleCompThread` looks rather limited.
I would propose the following changes to the parallel configuration:
- set the default for `singleCompThread` to 0;
- add a node option in dynare.ini, `numberOfThreadsPerJob`, which sets the number of CPUs assigned to each job. For example, on a node with 8 CPUs and `numberOfThreadsPerJob=2`, the parallel engine would split the work into 4 parallel instances, each using 2 CPUs.
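The arithmetic of the proposed split can be sketched as follows (a hypothetical Python helper for illustration only, not part of the Dynare codebase):

```python
# Hypothetical sketch of the proposal: a node with cpu_nbr CPUs and
# threads_per_job threads assigned to each job would run
# cpu_nbr // threads_per_job parallel instances.
def parallel_instances(cpu_nbr, threads_per_job):
    return cpu_nbr // threads_per_job

# The example from the proposal: 8 CPUs, 2 threads per job -> 4 instances
print(parallel_instances(8, 2))
```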
Would this be possible?
Milestone: 4.5 / Assignee: Marco Ratto

Issue #1350: Fix shock_decomposition when selected_variables_only is specified (Johannes Pfeifer, updated 2019-06-19)
https://git.dynare.org/Dynare/dynare/issues/1350

The option `selected_variables_only` is currently incompatible with `shock_decomposition`, as can be seen in the following mod-file:
```
var y ${y}$ (long_name='output')
y_nat ${y^{nat}}$ (long_name='natural output')
y_gap ${\tilde y}$ (long_name='output gap')
pi ${\pi}$ (long_name='inflation')
c ${c}$ (long_name='consumption')
lam ${\lambda}$ (long_name='Lagrange multiplier')
n ${n}$ (long_name='hours worked')
w ${w}$ (long_name='real wage')
r ${r}$ (long_name='nominal interest rate')
mc ${\tau}$ (long_name='Marginal cost')
e ${e}$ (long_name='demand elasticty')
x ${x}$ (long_name='Preference shock')
a ${a}$ (long_name='AR(1) technology shock process')
nu ${\nu}$ (long_name='monetary shock process')
y_fd ${\varepsilon_e}$ (long_name='markup shock')
w_fd ${\varepsilon_e}$ (long_name='markup shock')
r_obs ${\varepsilon_e}$ (long_name='markup shock')
pi_obs ${\varepsilon_e}$ (long_name='markup shock')
;
varexo eps_a ${\varepsilon_a}$ (long_name='technology shock')
eps_nu ${\varepsilon_\nu}$ (long_name='monetary policy shock')
eps_x ${\varepsilon_x}$ (long_name='preference shock')
eps_e ${\varepsilon_e}$ (long_name='markup shock')
;
parameters
alppha ${\alpha}$ (long_name='capital share')
betta ${\beta}$ (long_name='discount factor')
h ${h}$ (long_name='Parameter habit consumption')
phi ${\phi}$ (long_name='unitary Frisch elasticity')
chix ${\chi}$ (long_name='price indexation')
rho_pi ${\phi_{\pi}}$ (long_name='inflation feedback Taylor Rule')
rho_y ${\phi_{y}}$ (long_name='output feedback Taylor Rule')
rho_r ${\phi_{y}}$ (long_name='degree of smoothing Taylor rule')
epsilon ${\epsilon}$ (long_name='steady state demand elasticity')
theta ${\theta}$ (long_name='Calvo parameter')
rho_a ${\rho_{x}}$ (long_name='autocorrelation technology shock')
rho_x ${\rho_{x}}$ (long_name='autocorrelation preference shock')
rho_nu ${\rho_{x}}$ (long_name='autocorrelation monetary shock')
;
%---------------------------------------------------------------
% Parametrization, p. 52
%---------------------------------------------------------------
alppha = 0.3;
betta = 0.99;
h = 0.7;
phi = 1;
chix = 0.6;
rho_pi = 1.5;
rho_y = 0.2;
rho_r = 0.8;
epsilon = 6;
theta = 0.6;
rho_a = 0.8;
rho_x = 0.5;
rho_nu = 0.4;
%---------------------------------------------------------------
% First Order Conditions
%---------------------------------------------------------------
model(linear);
//1. F.O.C. consumption for Households
lam*(1-(h*betta)) = x-((c-(h*c(-1)))*(1/(1-h)))-(h*betta*x(+1))+((c(+1)-(h*c))*((h*betta)/(1-h)));
//2. F.O.C. leisure for Households
phi*n=lam+w;
//3. F.O.C. bonds for Households
0=lam(+1)-lam+r-pi(+1);
//4. Average marginal cost
mc=w+(y*(alppha/(1-alppha)))-(a*(1/(1-alppha)));
//5. First auxiliary variable
pi=((betta/(1+(betta*chix)))*pi(+1))+((chix/(1+(betta*chix)))*pi(-1))+((mc-(e*(1/(epsilon-1))))*(((1-theta*betta)*(1-theta)*(1-alppha))/((1+betta*chix)*(1-alppha+alppha*epsilon)*theta)));
//8. Natural output
alppha*y_nat=a-((1-alppha)*w)+(((1-alppha)/(epsilon-1))*e);
//9. Taylor Rule
r=(r(-1)*rho_r)+((1-rho_r)*((rho_pi*pi)+(rho_y*y_gap)))+nu;
//10. Definition output gap
y_gap = y-y_nat;
//12. Equilibrium
y=c;
//13. Production function
y=(n*(1-alppha))+a;
//14. TFP shock
a=(rho_a*a(-1))+eps_a;
//15. Preference shock
x=(rho_x*x(-1))+eps_x;
//16. Markup shock
e=((1-epsilon)/epsilon)*eps_e;
//17. Monetary shock
nu=(rho_nu*nu(-1))+eps_nu;
//// Observation equations
//18. Output
y_fd = y;
//19. Wage
w_fd = w;
//20. Interes Rate
r_obs=r;
//21. Inflation
pi_obs=pi;
end;
shocks;
var eps_a= 0.02^2 ;
var eps_nu= 0.04^2;
var eps_x= 0.02^2;
var eps_e= 0.2^2;
end;
stoch_simul(order=1,periods=200,irf=0) y_fd w_fd r_obs pi_obs;
datatomfile('observables_gali2_filt_119',char('y_fd','w_fd','r_obs','pi_obs'));
varobs y_fd w_fd r_obs pi_obs;
estimated_params;
alppha, 0.3, 0, 1,beta_pdf, 0.3, 0.05;
h, 0.7, 0, 1,beta_pdf, 0.33, 0.15;
phi, 1, , ,gamma_pdf, 1.17, 0.351;
chix, 0.6, 0, 1, beta_pdf, 0.61, 0.112;
rho_pi, 1.5, 0, 10, normal_pdf, 1.5, 0.2;
rho_y, 0.2, 0, 10, normal_pdf, 0.2, 0.1;
theta, 0.6, 0, 1, beta_pdf, 0.61, 0.112;
rho_a, 0.7, 0, 1, beta_pdf, 0.61, 0.112;
rho_x, 0.5, 0, 1, beta_pdf, 0.61, 0.112;
rho_nu, 0.4, 0, 1, beta_pdf, 0.61, 0.112;
stderr eps_a, inv_gamma_pdf, 0.1, 2;
stderr eps_nu, inv_gamma_pdf, 0.1, 2;
stderr eps_x, inv_gamma_pdf, 0.1, 2;
stderr eps_e, inv_gamma_pdf, 0.1, 2;
end;
estimated_params_init;
alppha, 0.3;
h, 0.7;
phi, 1;
chix, 0.6;
rho_pi, 1.5;
rho_y, 0.2;
theta, 0.6;
rho_a, 0.7;
rho_x, 0.5;
rho_nu, 0.4;
stderr eps_a, 0.02;
stderr eps_nu, 0.04;
stderr eps_x, 0.02;
stderr eps_e, 0.2;
end;
%identification;
estimation(datafile=observables_gali2_filt_119,mh_replic=2000,mh_nblocks=1, mode_compute=4) y_fd w_fd y;
save temp;
// options_.selected_variables_only=0;
shock_decomposition(parameter_set=posterior_mean) y y_fd;
```
The best solution seems to be making the `options_` structure local to `evaluate_smoother` (related to #1197) and then setting `options_.selected_variables_only=0` in `shock_decomposition.m`.
Issue #1344: Investigate whether recent mex-changes broke backward compatibility with older Matlab versions (Johannes Pfeifer, updated 2019-06-19)
https://git.dynare.org/Dynare/dynare/issues/1344

Giovanni Lombardo reports a compilation error under Ubuntu with Matlab 2010a:
```
error: unknown type name ‘char16_t’
typedef char16_t mxChar;
```
(see http://www.dynare.org/phpBB3/viewtopic.php?f=1&t=11556).
The post at http://stackoverflow.com/questions/22367516/mex-compile-error-unknown-type-name-char16-t
suggests that one might need to add
`typedef uint16_t char16_t;`
before
`#include "mex.h"`
Issue #1339: bug in missing_DiffuseKalmanSmootherH3_Z.m? (MichelJuillard, updated 2019-06-19)
https://git.dynare.org/Dynare/dynare/issues/1339

@JohannesPfeifer I don't see how line 302
```
Linf = eye(mm) - Kinf(:,i,t)'/Finf(i,t);
```
can be correct: the term before the minus sign is a square matrix, while the term after it is a row vector.
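The dimension mismatch can be sketched in Python/NumPy (with a hypothetical state dimension; this is an illustration of the shapes involved, not the Dynare code):

```python
import numpy as np

mm = 3                            # hypothetical number of states
Kinf_col = np.ones((mm, 1))       # stands in for Kinf(:,i,t), an mm x 1 column
Finf_it = 2.0                     # stands in for Finf(i,t), a scalar

term1 = np.eye(mm)                # eye(mm): an mm x mm matrix
term2 = Kinf_col.T / Finf_it      # Kinf(:,i,t)'/Finf(i,t): a 1 x mm row vector

# The subtraction on line 302 mixes an mm x mm matrix with a 1 x mm row
# vector, so it is not a well-defined matrix difference.
print(term1.shape, term2.shape)   # (3, 3) (1, 3)
```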
Milestone: 4.5 / Assignee: Johannes Pfeifer

Issue #1337: A linear model with options_.risky_steadystate = 1; and k_order_solver causes a crash (Houtan Bastani, updated 2019-06-19)
https://git.dynare.org/Dynare/dynare/issues/1337

Dynare should stop with an error if a linear model is used.
See the mod file: https://gist.github.com/houtanb/d2eaa90121e26bdace138bf477c631ee
Milestone: 4.5 / Assignee: Houtan Bastani

Issue #1335: add field to M_ when 2nd derivative is equal to zero (Houtan Bastani, updated 2019-06-19)
https://git.dynare.org/Dynare/dynare/issues/1335
Milestone: 4.5 / Assignee: Houtan Bastani

Issue #1336: make stoch_simul more efficient: solve at `order=1` when `M_.hessian_eq_zero==1` (Houtan Bastani, updated 2019-06-19)
https://git.dynare.org/Dynare/dynare/issues/1336

To avoid the error in `matlab/stochastic_solvers.m` line 64, when `M_.hessian_eq_zero` is true, solve at `order = 1` instead of `2` or `3`.
Work to be done on a branch.
Milestone: 4.5 / Assignee: Houtan Bastani

Issue #1332: Decide on how to deal with mh_recover on Octave (Johannes Pfeifer, updated 2019-06-19)
https://git.dynare.org/Dynare/dynare/issues/1332

Various unit tests fail on Octave, because the `mh_recover` option does not work properly there due to differences in how the random number generator is set. We can either
- disable the check in the unit test and accept that the behavior of `mh_recover` is different under Octave and Matlab (and then document this)
- or provide an error under Octave when someone tries to use this option.
Milestone: 4.5

Issue #1328: Find out why pr#1323 fails with older versions of matlab (Stéphane Adjemian, updated 2019-06-19)
https://git.dynare.org/Dynare/dynare/issues/1328

See the mod file [here](https://gist.github.com/stepan-a/260eabd535efa7c38d1197cad04a581a) and the discussion [here](https://github.com/DynareTeam/dynare/pull/1323).

Milestone: 4.5 / Assignee: Stéphane Adjemian

Issue #1318: Improve documentation of endogenous prior restrictions (Johannes Pfeifer, updated 2019-06-19)
https://git.dynare.org/Dynare/dynare/issues/1318

See http://www.dynare.org/phpBB3/viewtopic.php?f=1&t=10433
Issue #1315: check to see if we need all Windows compiler macros in preprocessor (Houtan Bastani, updated 2018-10-02)
https://git.dynare.org/Dynare/dynare/issues/1315

This page:
http://nadeausoftware.com/articles/2012/01/c_c_tip_how_use_compiler_predefined_macros_detect_operating_system
seems to say that all we need is `_WIN32`, as it is defined on all Windows systems. This would make `__CYGWIN32__` and `__MINGW32__` redundant. Check whether this is the case.
Milestone: 4.6 / Assignee: Houtan Bastani

Issue #1314: octave does not need the compiler argument when using use_dll (Houtan Bastani, updated 2019-06-19)
https://git.dynare.org/Dynare/dynare/issues/1314

The preprocessor requires Windows users to pass the name of the compiler on their system when using `use_dll`. This is not necessary on Octave, as evidenced by `matlab/utilities/general/dyn_mex.m`. Fix `preprocessor/ModFile.cc` appropriately.
See discussion here: https://github.com/DynareTeam/dynare/commit/accd70a4c79c3e8dffb8ab367e1879680bd20606#commitcomment-19427662
Milestone: 4.5 / Assignee: Houtan Bastani

Issue #1312: Investigate problem with schur_state_space_transformation (Johannes Pfeifer, updated 2019-06-19)
https://git.dynare.org/Dynare/dynare/issues/1312

There is a problem with the initialization of the diffuse filter and smoother via `schur_statespace_transformation.m`. The mod-file `Local_trend_3.mod` (sent via email on 11/10/16) by a user contains three independent local trend models as in Durbin/Koopman (2012). Because the three processes are independent, all Kalman filter matrices should be block recursive. When using the Harvey initialization of the covariance `Pinf`, this is exactly what happens and the results make sense. But when using the diffuse filter, in `dsge_likelihood.m` after
```
[Ztmp,Ttmp,Rtmp,QT,Pstar,Pinf]=schur_statespace_transformation(Z,T,R,Q,DynareOptions.qz_criterium,[1:length(T)]);
Pinf = QT*Pinf*QT';
Pstar = QT*Pstar*QT';
```
`Pinf` is not block diagonal anymore. Rather, there are significant off-diagonal entries. These introduce a correlation between the independent local trend models that is clearly visible in the last graph. All observed trending variables are perfectly matched by the smoother, but the different trends interact in a way that is incompatible with the initial structure. Thus, there seems to be a problem with `schur_statespace_transformation.m`: either there is a bug or there are numerical problems.
Weirdly, the same problem is not present when I run just two local trend models (`Local_trend_2.mod`). Here, `Pinf` is again block-recursive. So something happens when going from two to three local trend models.
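More abstractly, a transformation `QT*Pinf*QT'` only preserves block-diagonality when `QT` itself respects the block structure. A minimal NumPy sketch (hypothetical 4x4 matrices, unrelated to the actual matrices in the mod-files):

```python
import numpy as np

# Block-diagonal Pinf for two hypothetical independent trend models
Pinf = np.diag([1.0, 1.0, 2.0, 2.0])

# An orthogonal QT (a Givens rotation on states 0 and 2) that mixes the blocks
theta = 0.3
QT = np.eye(4)
QT[0, 0] = QT[2, 2] = np.cos(theta)
QT[0, 2] = np.sin(theta)
QT[2, 0] = -np.sin(theta)

P_new = QT @ Pinf @ QT.T

# A cross-block entry is now nonzero, i.e. the transformed Pinf links
# the two (originally independent) blocks.
print(P_new[0, 2])
```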
Assignee: MichelJuillard

Issue #1309: improve documentation of datafile option (MichelJuillard, updated 2019-06-19)
https://git.dynare.org/Dynare/dynare/issues/1309

Add to the manual that if several files differ only by their extension, the filename must include the extension and be written between quotes.
Milestone: 4.5

Issue #1307: Provide 32 and 64 bit Octave mex-files (Johannes Pfeifer, updated 2019-06-19)
https://git.dynare.org/Dynare/dynare/issues/1307

As indicated at http://savannah.gnu.org/bugs/index.php?49289, there will be a 64 bit version of Octave 4.2, which will then also allow using the 64 bit Dynare preprocessor (see #1306). This will hopefully also solve #1304. But it means we need to provide 64 bit mex-files as well, since the 32 bit ones do not run.
Assignee: Houtan Bastani

Issue #1304: Octave out of memory issues (Johannes Pfeifer, updated 2018-11-09)
https://git.dynare.org/Dynare/dynare/issues/1304

When running `observation_trends_and_prefiltering/MCMC/Trend_loglinear_no_prefilter_MC.mod` in `Octave 4.0.3` I get
```
error: out of memory or dimension too large for Octave's index type
error: called from
pm3 at line 82 column 13
prior_posterior_statistics at line 297 column 5
dynare_estimation_1 at line 462 column 13
dynare_estimation at line 105 column 5
Trend_loglinear_no_prefilter_MC at line 194 column 14
dynare at line 223 column 1
```
Given that Octave (at least on Windows) does not fully support 64 bit, solving this could range from challenging to impossible.
Milestone: 4.6 / Assignee: Sébastien Villemot

Issue #1306: Work around problem with identifying system architecture on Octave (Johannes Pfeifer, updated 2019-06-19)
https://git.dynare.org/Dynare/dynare/issues/1306

The call to
`arch = getenv('PROCESSOR_ARCHITECTURE');`
used for PCs in `dynare.m` does not work with Octave (see http://savannah.gnu.org/bugs/index.php?49289). Therefore, the 32 bit preprocessor is always run.
Issue #1302: calib_smoother doesn't recognize diffuse_filter (MichelJuillard, updated 2019-06-19)
https://git.dynare.org/Dynare/dynare/issues/1302

It should be possible to run the smoother with calibrated parameters for a model with non-stationary variables.
Milestone: 4.5 / Assignee: MichelJuillard