Document optimizers allowing analytic_derivation and fix bugs

The manual does not state which optimizers employ the analytic gradient. It should be at least `fmincon` (1), `csminwel` (4), and `newrat` (5). `csminwel` and `newrat` are the two optimizers not shipped with e.g. Matlab. They rely on calls to

```matlab
[~,cost_flag,g1] = penalty_objective_function(x1,fcn,penalty,varargin{:});
```

where the third output argument is the gradient. Within `penalty_objective_function` we then have

```matlab
[fval, info, exit_flag, arg1, arg2] = fcn(x, varargin{:});
```

where the gradient `arg1` is the fourth and the Hessian `arg2` the fifth output argument of the underlying objective function. For example:

```matlab
function [fval,info,exit_flag,DLIK,Hess,SteadyState,trend_coeff,Model,DynareOptions,BayesInfo,DynareResults] = dsge_likelihood(xparam1,DynareDataset,DatasetInfo,DynareOptions,Model,EstimatedParameters,BayesInfo,BoundsInfo,DynareResults,derivatives_info)
```
This interface creates a problem for Matlab optimizers like `fmincon`, which expect the gradient as the second and the Hessian as the third function output. Neither the underlying objective nor `penalty_objective_function` conforms to this convention. The question is how to address this issue. There are two ways:

1. Add a wrapper function along the lines of `penalty_objective_function`. This introduces another layer, but would be quite easy to implement.
2. Change the output order of the objective function. This would be cleaner and more efficient, but would require quite massive changes to the codebase.
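The wrapper in option 1 could look like the following sketch. The name `fmincon_objective_wrapper` is hypothetical; the output order matches what `fmincon` expects when `SpecifyObjectiveGradient` is enabled:

```matlab
function [fval, grad, hess] = fmincon_objective_wrapper(x, fcn, varargin)
% Hypothetical wrapper: reorder the Dynare objective outputs
% [fval, info, exit_flag, grad, hess] into the [f, g, H] convention
% expected by fmincon, requesting only as many outputs as the caller needs.
if nargout > 2
    [fval, ~, ~, grad, hess] = fcn(x, varargin{:});
elseif nargout > 1
    [fval, ~, ~, grad] = fcn(x, varargin{:});
else
    fval = fcn(x, varargin{:});
end
end
```

It would then be passed to the optimizer as an anonymous function, e.g. `fmincon(@(x) fmincon_objective_wrapper(x, objective_function, varargin{:}), x0, ...)`.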
The current implementation of `analytic_derivation` for `fmincon` in `dynare_minimize_objective` is buggy, as it treats the second output `info` as the gradient.
It is not clear whether the treatment of the analytic Jacobian under the penalty approach is correct, as the Jacobian does not take the penalty into account. We need to check whether these cases are filtered out via `cost_flag`.