mode_compute=6 algorithm
------------------------
In some situations the posterior mode (which is used to initialize the
Metropolis-Hastings (MH) algorithm and to define the jumping distribution)
is hard to obtain with standard optimization routines, whether gradient
based or not. Option `mode_compute = 6` triggers a Monte-Carlo based
optimization routine. Strictly speaking, it is not a true optimization
routine: its only goal is to identify an interesting region in which to
start the Metropolis-Hastings algorithm, together with an initial
estimate of the posterior covariance matrix for the estimated
parameters. The MH algorithm does not need to start from the posterior
mode to converge to the posterior distribution. It only needs to start
from a point (in parameter space) with a high posterior density value,
and to use an estimate of the covariance matrix for the jumping
distribution.

In practice Dynare minimizes the opposite of the logged posterior
kernel. Very often the default optimization algorithm fails to find a
minimum with a positive definite Hessian matrix. In that case Dynare
cannot run the MH algorithm, since a positive definite Hessian matrix is
required to approximate the posterior covariance matrix (the inverse of
the Hessian matrix provides an accurate approximation of the covariance
matrix if the posterior distribution is not too far from a Gaussian
distribution).
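To make that last claim concrete: for an exactly Gaussian posterior with mean $`\mu`$ and covariance $`\Sigma`$, the log kernel is quadratic,

```math
\ln p(\theta|\mathcal Y) = \mathrm{const} - \frac{1}{2}(\theta-\mu)'\Sigma^{-1}(\theta-\mu),
\qquad
-\frac{\partial^2 \ln p(\theta|\mathcal Y)}{\partial\theta\,\partial\theta'} = \Sigma^{-1},
```

so the inverse of the Hessian of the negative log kernel, evaluated at the mode, recovers $`\Sigma`$ exactly; for near-Gaussian posteriors it remains a good approximation.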
The `mode_compute=6` algorithm uses a Metropolis Hastings algorithm
|
|
|
with a diagonal covariance matrix (prior variances or a covariance
|
|
|
matrix proportional to unity) and continuously updates the posterior
|
|
|
covariance matrix and the posterior mode estimates through the MH
|
|
|
draws. After each MH draw $`\theta_t`$ in the posterior distribution the
|
|
|
posterior mean, the posterior covariance and the posterior mode are
|
|
|
updated as follows:
|
|
|
```math
\mu_t = \mu_{t-1} + \frac{1}{t}\left(\theta_t-\mu_{t-1}\right)
```
```math
\Sigma_t = \Sigma_{t-1} + \mu_{t-1}\mu_{t-1}'-\mu_{t}\mu_{t}'+
\frac{1}{t}\left(\theta_t\theta_t'-\Sigma_{t-1}-\mu_{t-1}\mu_{t-1}'\right)
```
and

```math
\mathrm{mode}_t =
\begin{cases}
\theta_t, & \text{if } p(\theta_t|\mathcal Y) > p(\mathrm{mode}_{t-1}|\mathcal Y) \\
\mathrm{mode}_{t-1}, & \text{otherwise.}
\end{cases}
```

where $`p(\bullet|\mathcal Y)`$ is the posterior density or kernel. The following options are available:
- `options_.gmhmaxlik.iterations` = [integer] sets the number of calls to the optimization routine. Repeated calls improve the estimates of the posterior covariance matrix and of the posterior mode. Default is 3.
- `options_.gmhmaxlik.number` = [integer] sets the number of simulations used in the estimation of the covariance matrix. Default is 20000.
- `options_.gmhmaxlik.nscale` = [integer] sets the number of simulations used when tuning the scale parameter. Default is 200000.
- `options_.gmhmaxlik.nclimb` = [integer] sets the number of simulations used when climbing the hill (last step). Default is 200000.
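For illustration, the recursive mean, covariance and mode updates above can be sketched in Python. The `draws` array and the standard-Gaussian `logpost` below are hypothetical stand-ins for the actual MH draws and posterior kernel produced by the model; only the update formulas come from the text.

```python
import numpy as np

def update_moments(theta, mu, sigma, mode, best_logpost, logpost_theta, t):
    """One recursive update of the posterior mean, covariance and mode
    after MH draw number t (1-based), following the formulas above."""
    mu_new = mu + (theta - mu) / t
    sigma_new = (sigma + np.outer(mu, mu) - np.outer(mu_new, mu_new)
                 + (np.outer(theta, theta) - sigma - np.outer(mu, mu)) / t)
    # Keep the draw with the highest posterior kernel value as the mode.
    if logpost_theta > best_logpost:
        mode, best_logpost = theta, logpost_theta
    return mu_new, sigma_new, mode, best_logpost

# Demo on synthetic draws with a stand-in log posterior kernel
# (a standard Gaussian; in Dynare the kernel comes from the model).
rng = np.random.default_rng(0)
draws = rng.normal(size=(500, 2))
logpost = lambda th: -0.5 * th @ th

# Starting from mu = 0 and sigma = 0, the t = 1 update reproduces
# mu_1 = theta_1 and sigma_1 = 0, so no special initialization is needed.
mu = np.zeros(2)
sigma = np.zeros((2, 2))
mode, best = draws[0], logpost(draws[0])
for t, theta in enumerate(draws, start=1):
    mu, sigma, mode, best = update_moments(theta, mu, sigma, mode,
                                           best, logpost(theta), t)
```

After the loop, `mu` and `sigma` equal the running sample mean and (biased) sample covariance of the draws, and `mode` is the draw with the highest kernel value, matching the recursions above.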