Latest commit (7acf2783), authored by Sébastien Villemot:
Commit 23b0c12d introduced caching in chain rule derivation (used by block
decomposition), which increased speed for mfs > 0, but actually decreased it
for mfs = 0.

This patch introduces the pre-computation of derivatives which are known to be
zero, using a symbolic a priori (similarly to what is done in the non-chain-rule
context). The algorithms are now identical between the two contexts (both use a
symbolic a priori plus caching); the difference is that, in the chain rule
context, the symbolic a priori and the cache are not stored within the ExprNode
class, since they depend on the list of recursive variables.

This patch brings a significant performance improvement for all values of the
“mfs” option (the improvement is greater for small values of “mfs”).
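
The mechanism can be sketched as follows. This is a minimal, hypothetical C++
illustration: the names ChainRuleDerivationContext, known_zero,
getChainRuleDerivative and the placeholder bodies are invented for this sketch
and do not match the actual Dynare preprocessor sources. The point is the
ordering of the lookups, and the fact that both structures travel in a context
object rather than living inside ExprNode, since they depend on the list of
recursive variables.

    // Hypothetical sketch of the strategy described in the commit message;
    // illustrative names only, not the real Dynare preprocessor code.
    #include <map>
    #include <set>
    #include <utility>

    struct ExprNode {};                 // stand-in for the real expression node
    using expr_t = const ExprNode *;

    expr_t zeroConstant()               // stand-in for the constant-zero node
    {
      static const ExprNode zero;
      return &zero;
    }

    struct ChainRuleDerivationContext {
      // Symbolic a priori: (node, derivation id) pairs whose derivative is
      // known to be zero without computing anything.
      std::set<std::pair<expr_t, int>> known_zero;
      // Cache of already-computed chain-rule derivatives. Kept outside
      // ExprNode because, like known_zero, it depends on the list of
      // recursive variables and is thus not a property of the node alone.
      std::map<std::pair<expr_t, int>, expr_t> cache;
    };

    // Stand-in for the real recursive chain-rule differentiation, which
    // would walk the expression tree; here it just returns zero.
    expr_t computeChainRuleDerivative(expr_t /*e*/, int /*deriv_id*/,
                                      ChainRuleDerivationContext &/*ctx*/)
    {
      return zeroConstant();
    }

    expr_t getChainRuleDerivative(expr_t e, int deriv_id,
                                  ChainRuleDerivationContext &ctx)
    {
      // 1. Symbolic a priori: skip the computation entirely when the
      //    derivative is known to be zero.
      if (ctx.known_zero.count({e, deriv_id}))
        return zeroConstant();

      // 2. Cache lookup: reuse a previously computed derivative.
      if (auto it = ctx.cache.find({e, deriv_id}); it != ctx.cache.end())
        return it->second;

      // 3. Otherwise compute, then memoize.
      expr_t d = computeChainRuleDerivative(e, deriv_id, ctx);
      ctx.cache.emplace(std::make_pair(e, deriv_id), d);
      return d;
    }

Under this ordering, the a priori check short-circuits derivatives that are
zero anyway (the case that dominated at mfs = 0), while the cache preserves the
speed-up that commit 23b0c12d brought for mfs > 0, consistent with the
improvement being greater for small values of “mfs”.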

Dynare Preprocessor

The Dynare Preprocessor defines the Dynare model language. It takes in a .mod file, computes the derivatives of the model represented therein, and produces MATLAB/Octave, Julia, or JSON output.
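
For illustration, a minimal .mod file might look as follows (a hypothetical
AR(1) example written for this overview, not taken from the Dynare
documentation). The preprocessor parses the model block, computes its
derivatives (here, for instance, the derivative of the equation with respect
to y(-1) is rho), and emits them in the chosen output language:

    // Hypothetical minimal example: one endogenous variable, one
    // exogenous shock, an AR(1) law of motion.
    var y;
    varexo e;
    parameters rho;
    rho = 0.9;

    model;
    y = rho*y(-1) + e;
    end;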

License

Most of the source files are covered by the GNU General Public License version 3 or later. There are some exceptions; see the respective file headers.