Dynare++: improvements to comments (last batch!)

[skip ci]
parent 7599bc10
......@@ -3,34 +3,33 @@
// Global check
/* The purpose of this file is to provide classes for checking error of
approximation. If $y_t=g(y^*_{t-1},u)$ is an approximate solution,
then we check for the error of residuals of the system equations. Let
$F(y^*,u,u')=f(g^{**}(g^*(y^*,u'),u),g(y^*,u),y^*,u)$, then we
calculate integral
$$E[F(y^*,u,u')]$$@q'@>
which we want to be zero for all $y^*$, and $u$.
There are a few possibilities how and where the integral is
evaluated. Currently we offer the following:
\numberedlist
\li Along shocks. The $y^*$ is set to steady state, and $u$ is set to
zero but one element is going from minus through plus shocks in few
steps. The user gives the scaling factor, for instance interval
$\langle<-3\sigma,3\sigma\rangle$ (where $sigma$ is a standard error
of the shock), and a number of steps. This is repeated for each shock
(element of the $u$ vector).
\li Along simulation. Some random simulation is run, and for each
realization of $y^*$ and $u$ along the path we evaluate the residual.
\li On ellipse. Let $V=AA^T$ be a covariance matrix of the
predetermined variables $y^*$ based on linear approximation, then we
calculate integral for points on the ellipse $\{Ax\vert\, \Vert
x\Vert_2=1\}$. The points are selected by means of low discrepancy
method and polar transformation. The shock $u$ are zeros.
\li Unconditional distribution.
\endnumberedlist */
approximation. If yₜ=g(y*ₜ₋₁,u) is an approximate solution, then we check
the error of the residuals of the system equations. Let
F(y*,u,u′)=f(g**(g*(y*,u′),u),g(y*,u),y*,u); then we calculate the integral:
𝔼ₜ[F(y*,u,u′)]
which we want to be zero for all y* and u.
There are several possibilities for how and where the integral is evaluated.
Currently we offer the following:
— Along shocks. y* is set to the steady state, and u is set to zero except
for one element, which goes from minus through plus shocks in a few steps.
The user gives the scaling factor, for instance the interval [−3σ,3σ]
(where σ is the standard error of the shock), and a number of steps. This is
repeated for each shock (element of the u vector).
— Along simulation. A random simulation is run, and for each realization of
y* and u along the path we evaluate the residual.
— On ellipse. Let V=AAᵀ be the covariance matrix of the predetermined
variables y* based on the linear approximation; then we calculate the
integral for points on the ellipse { Ax | ‖x‖₂=1 }. The points are selected
by means of a low-discrepancy method and a polar transformation. The shocks
u are zero.
— Unconditional distribution.
*/
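The ellipse check can be illustrated with a minimal sketch. This is a hypothetical 2-D illustration, not the actual Dynare++ code: equispaced angles stand in for the low-discrepancy sequence, and `ellipsePoints` is an invented helper name.

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical 2-D sketch: generate evaluation points {Ax : ||x||_2 = 1}.
// Equispaced angles stand in for the low-discrepancy sequence used by the
// real implementation.
std::vector<std::array<double, 2>>
ellipsePoints(const std::array<std::array<double, 2>, 2> &A, int n)
{
  const double pi = std::acos(-1.0);
  std::vector<std::array<double, 2>> pts;
  for (int i = 0; i < n; i++)
    {
      double phi = 2*pi*i/n; // polar parametrization of the unit circle
      double x0 = std::cos(phi), x1 = std::sin(phi);
      pts.push_back({A[0][0]*x0 + A[0][1]*x1,   // map the unit-circle point x
                     A[1][0]*x0 + A[1][1]*x1}); // to the ellipse point Ax
    }
  return pts;
}
```

With A the identity, the "ellipse" is the unit circle itself, which makes the mapping easy to verify by hand.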
#ifndef GLOBAL_CHECK_H
#define GLOBAL_CHECK_H
......@@ -46,23 +45,22 @@
#include "journal.hh"
#include "approximation.hh"
/* This is a class for implementing |VectorFunction| interface
evaluating the residual of equations, this is
$$F(y^*,u,u')=f(g^{**}(g^*(y^*,u),u'),y^*,u)$$
is written as a function of $u'$.
/* This is a class for implementing the VectorFunction interface evaluating the
   residual of equations, that is, F(y*,u,u′)=f(g**(g*(y*,u),u′),y*,u)
   written as a function of u′.
When the object is constructed, one has to specify $(y^*,u)$, this is
done by |setYU| method. The object has basically two states. One is
after construction and before call to |setYU|. The second is after
call |setYU|. We distinguish between the two states, an object in the
second state contains |yplus|, |ystar|, |u|, and |hss|.
When the object is constructed, one has to specify (y*,u); this is done by
the setYU() method. The object has basically two states: one after
construction and before the call to setYU(), the other after the call to
setYU(). We distinguish between the two states; an object in the second
state contains ‘yplus’, ‘ystar’, ‘u’, and ‘hss’.
The vector |yplus| is $g^*(y^*,u)$. |ystar| is $y^*$, and polynomial
|hss| is partially evaluated $g^**(yplus, u)$.
The vector ‘yplus’ is g*(y*,u). ‘ystar’ is y*, and polynomial ‘hss’ is
partially evaluated g**(yplus, u).
The pointer to |DynamicModel| is important, since the |DynamicModel|
evaluates the function $f$. When copying the object, we have to make
also a copy of |DynamicModel|. */
The pointer to DynamicModel is important, since the DynamicModel evaluates
the function f. When copying the object, we have to make also a copy of
DynamicModel. */
class ResidFunction : public VectorFunction
{
......@@ -84,7 +82,7 @@ public:
void setYU(const ConstVector &ys, const ConstVector &xx);
};
/* This is a |ResidFunction| wrapped with |GaussConverterFunction|. */
/* This is a ResidFunction wrapped with GaussConverterFunction. */
class GResidFunction : public GaussConverterFunction
{
......@@ -105,17 +103,16 @@ public:
}
};
/* This is a class encapsulating checking algorithms. Its core routine
is |check|, which calculates integral $E[F(y^*,u,u')\vert y^*,u]$ for
given realizations of $y^*$ and $u$. The both are given in
matrices. The methods checking along shocks, on ellipse and anlong a
simulation path, just fill the matrices and call the core |check|.
/* This is a class encapsulating checking algorithms. Its core routine is
check(), which calculates the integral 𝔼[F(y*,u,u′) | y*,u] for given
realizations of y* and u. Both are given in matrices. The methods checking
along shocks, on ellipse and along a simulation path just fill the matrices
and call the core check().
The method |checkUnconditionalAndSave| evaluates unconditional
$E[F(y,u,u')]$.
The method checkUnconditionalAndSave() evaluates the unconditional 𝔼[F(y,u,u′)].
The object also maintains a set of |GResidFunction| functions |vfs| in
order to save (possibly expensive) copying of |DynamicModel|s. */
The object also maintains a set of GResidFunction functions ‘vfs’ in order
to save (possibly expensive) copying of DynamicModels. */
class GlobalChecker
{
......
......@@ -95,7 +95,7 @@ SystemResources::diff(const SystemResources &pre)
majflt -= pre.majflt;
}
// |JournalRecord::operator<<| symmetry code
// JournalRecord::operator<<() symmetry code
JournalRecord &
JournalRecord::operator<<(const IntSequence &s)
{
......
......@@ -52,7 +52,7 @@ public:
}
};
// |KordException| error code definitions
// KordException error code definitions
constexpr int KORD_FP_NOT_CONV = 254;
constexpr int KORD_FP_NOT_FINITE = 253;
constexpr int KORD_MD_NOT_STABLE = 252;
......
......@@ -3,9 +3,9 @@
#include "kord_exception.hh"
#include "korder.hh"
/* Here we set |ipiv| and |inv| members of the |PLUMatrix| depending on
its content. It is assumed that subclasses will call this method at
the end of their constructors. */
/* Here we set ‘ipiv’ and ‘inv’ members of the PLUMatrix depending on its
content. It is assumed that subclasses will call this method at the end of
their constructors. */
void
PLUMatrix::calcPLU()
......@@ -34,11 +34,12 @@ PLUMatrix::multInv(TwoDMatrix &m) const
"Info!=0 in PLUMatrix::multInv");
}
/* Here we construct the matrix $A$. Its dimension is |ny|, and it is
$$A=\left[f_{y}\right]+
\left[0 \left[f_{y^{**}_+}\right]\cdot\left[g^{**}_{y^*}\right] 0\right]$$,
where the first zero spans |nstat| columns, and last zero spans
|nforw| columns. */
/* Here we construct the matrix A. Its dimension is ‘ny’, and it is
   A=[f_y] + [0 [f_y**₊]·[g**_y*] 0],
   where the first zero spans ‘nstat’ columns, and the last zero spans ‘nforw’
   columns. */
MatrixA::MatrixA(const FSSparseTensor &f, const IntSequence &ss,
const TwoDMatrix &gy, const PartitionY &ypart)
......@@ -59,13 +60,12 @@ MatrixA::MatrixA(const FSSparseTensor &f, const IntSequence &ss,
calcPLU();
}
/* Here we construct the matrix $S$. Its dimension is |ny|, and it is
$$S=\left[f_{y}\right]+
\left[0\quad\left[f_{y^{**}_+}\right]\cdot\left[g^{**}_{y^*}\right]\quad
0\right]+ \left[0\quad 0\quad\left[f_{y^{**}_+}\right]\right]$$
It is, in fact, the matrix $A$ plus the third summand. The first zero
in the summand spans |nstat| columns, the second zero spans |npred|
columns. */
/* Here we construct the matrix S. Its dimension is ‘ny’, and it is
S = [f_y] + [0 [f_y**₊]·[g**_y*] 0] + [0 0 [f_y**₊]]
It is, in fact, the matrix A plus the third summand. The first zero in the
summand spans ‘nstat’ columns, the second zero spans ‘npred’ columns. */
MatrixS::MatrixS(const FSSparseTensor &f, const IntSequence &ss,
const TwoDMatrix &gy, const PartitionY &ypart)
......@@ -89,7 +89,7 @@ MatrixS::MatrixS(const FSSparseTensor &f, const IntSequence &ss,
calcPLU();
}
// |KOrder| member access method specializations
// KOrder member access method specializations
/* These are the specializations of container access methods. Nothing
interesting here. */
......@@ -206,17 +206,17 @@ template<>
const ctraits<Storage::fold>::Tm& KOrder::m<Storage::fold>() const
{ return _fm;}
/* Here is the constructor of the |KOrder| class. We pass what we have
to. The partitioning of the $y$ vector, a sparse container with model
derivatives, then the first order approximation, these are $g_y$ and
$g_u$ matrices, and covariance matrix of exogenous shocks |v|.
/* Here is the constructor of the KOrder class. We pass what we have to: the
partitioning of the y vector, a sparse container with model derivatives,
then the first order approximation (the g_y and gᵤ matrices), and the
covariance matrix of exogenous shocks ‘v’.
We build the members, it is nothing difficult. Note that we do not make
a physical copy of sparse tensors, so during running the class, the
outer world must not change them.
We build the members; there is nothing difficult here. Note that we do not
make a physical copy of the sparse tensors, so while the class is running,
the outer world must not change them.
In the body, we have to set |nvs| array, and initialize $g$ and $G$
containers to comply to preconditions of |performStep|. */
In the body, we have to set the ‘nvs’ array, and initialize the g and G
containers to comply with the preconditions of performStep(). */
KOrder::KOrder(int num_stat, int num_pred, int num_both, int num_forw,
const TensorContainer<FSSparseTensor> &fcont,
......@@ -252,9 +252,9 @@ KOrder::KOrder(int num_stat, int num_pred, int num_both, int num_forw,
KORD_RAISE_IF(gu.ncols() != nu,
"Wrong number of columns in gu in KOrder constructor");
// put $g_y$ and $g_u$ to the container
/* Note that $g_\sigma$ is zero by the nature and we do not insert it to
the container. We insert a new physical copies. */
// Put g_y and gᵤ in the container
/* Note that g_σ is zero by construction and we do not insert it into the
   container. We insert new physical copies. */
auto tgy = std::make_unique<UGSTensor>(ny, TensorDimens(Symmetry{1, 0, 0, 0}, nvs));
tgy->getData() = gy.getData();
insertDerivative<Storage::unfold>(std::move(tgy));
......@@ -262,8 +262,8 @@ KOrder::KOrder(int num_stat, int num_pred, int num_both, int num_forw,
tgu->getData() = gu.getData();
insertDerivative<Storage::unfold>(std::move(tgu));
// put $G_y$, $G_u$ and $G_{u'}$ to the container
/* Also note that since $g_\sigma$ is zero, so $G_\sigma$. */
// Put G_y, G_u and G_u′ in the container
/* Also note that since g_σ is zero, so is G_σ. */
auto tGy = faaDiBrunoG<Storage::unfold>(Symmetry{1, 0, 0, 0});
G<Storage::unfold>().insert(std::move(tGy));
auto tGu = faaDiBrunoG<Storage::unfold>(Symmetry{0, 1, 0, 0});
......@@ -272,19 +272,18 @@ KOrder::KOrder(int num_stat, int num_pred, int num_both, int num_forw,
G<Storage::unfold>().insert(std::move(tGup));
}
// |KOrder::sylvesterSolve| unfolded specialization
/* Here we have an unfolded specialization of |sylvesterSolve|. We
simply create the sylvester object and solve it. Note that the $g^*_y$
is not continuous in memory as assumed by the sylvester code, so we
make a temporary copy and pass it as matrix $C$.
// KOrder::sylvesterSolve() unfolded specialization
/* Here we have an unfolded specialization of sylvesterSolve(). We simply
create the sylvester object and solve it. Note that g*_y is not contiguous
in memory as assumed by the sylvester code, so we make a temporary copy and
pass it as matrix C.
If the $B$ matrix is empty, in other words there are now forward
looking variables, then the system becomes $AX=D$ which is solved by
simple |matA.multInv()|.
If the B matrix is empty, in other words there are no forward looking
variables, then the system becomes AX=D, which is solved by a simple
matA.multInv().
If one wants to display the diagnostic messages from the Sylvester
module, then after the |sylv.solve()| one needs to call
|sylv.getParams().print("")|. */
If one wants to display the diagnostic messages from the Sylvester module,
then after the sylv.solve() one needs to call sylv.getParams().print(""). */
template<>
void
......@@ -304,15 +303,13 @@ KOrder::sylvesterSolve<Storage::unfold>(ctraits<Storage::unfold>::Ttensor &der)
sylv.solve();
}
else if (ypart.nys() > 0 && ypart.nyss() == 0)
{
matA.multInv(der);
}
matA.multInv(der);
}
// |KOrder::sylvesterSolve| folded specialization
/* Here is the folded specialization of sylvester. We unfold the right
hand side. Then we solve it by the unfolded version of
|sylvesterSolve|, and fold it back and copy to output vector. */
// KOrder::sylvesterSolve() folded specialization
/* Here is the folded specialization of sylvesterSolve(). We unfold the right
hand side. Then we solve it by the unfolded version of sylvesterSolve(),
fold it back, and copy it to the output vector. */
template<>
void
......@@ -350,9 +347,7 @@ KOrder::switchToFolded()
auto ft = std::make_unique<FGSTensor>(G<Storage::unfold>().get(si));
G<Storage::fold>().insert(std::move(ft));
if (dim > 1)
{
G<Storage::fold>().remove(si);
}
G<Storage::fold>().remove(si);
}
}
}
......@@ -2,8 +2,8 @@
#include "korder_stoch.hh"
/* Same as |@<|MatrixA| constructor code@>|, but the submatrix |gss_ys| is
passed directly. */
/* Same as MatrixA constructor, but the submatrix ‘gss_ys’ is passed
directly. */
MatrixAA::MatrixAA(const FSSparseTensor &f, const IntSequence &ss,
const TwoDMatrix &gss_ys, const PartitionY &ypart)
......@@ -23,7 +23,7 @@ MatrixAA::MatrixAA(const FSSparseTensor &f, const IntSequence &ss,
calcPLU();
}
// |KOrderStoch| folded constructor code
// KOrderStoch folded constructor code
KOrderStoch::KOrderStoch(const PartitionY &yp, int nu,
const TensorContainer<FSSparseTensor> &fcont,
const FGSContainer &hh, Journal &jr)
......@@ -40,7 +40,7 @@ KOrderStoch::KOrderStoch(const PartitionY &yp, int nu,
{
}
// |KOrderStoch| unfolded constructor code
// KOrderStoch unfolded constructor code
KOrderStoch::KOrderStoch(const PartitionY &yp, int nu,
const TensorContainer<FSSparseTensor> &fcont,
const UGSContainer &hh, Journal &jr)
......@@ -57,7 +57,7 @@ KOrderStoch::KOrderStoch(const PartitionY &yp, int nu,
{
}
// |KOrderStoch| convenience method specializations
// KOrderStoch convenience method specializations
template<>
ctraits<Storage::unfold>::Tg &KOrderStoch::g<Storage::unfold>()
{
......
......@@ -3,7 +3,7 @@
#include "normal_conjugate.hh"
#include "kord_exception.hh"
// |NormalConj| diffuse prior constructor
// NormalConj diffuse prior constructor
NormalConj::NormalConj(int d)
: mu(d), kappa(0), nu(-1), lambda(d, d)
{
......@@ -11,7 +11,7 @@ NormalConj::NormalConj(int d)
lambda.zeros();
}
// |NormalConj| data update constructor
// NormalConj data update constructor
NormalConj::NormalConj(const ConstTwoDMatrix &ydata)
: mu(ydata.nrows()), kappa(ydata.ncols()), nu(ydata.ncols()-1),
lambda(ydata.nrows(), ydata.nrows())
......@@ -29,14 +29,21 @@ NormalConj::NormalConj(const ConstTwoDMatrix &ydata)
}
}
// |NormalConj::update| one observation code
// NormalConj::update() one observation code
/* The method performs the following:
$$\eqalign{
\mu_1 = &\; {\kappa_0\over \kappa_0+1}\mu_0 + {1\over \kappa_0+1}y\cr
\kappa_1 = &\; \kappa_0 + 1\cr
\nu_1 = &\; \nu_0 + 1\cr
\Lambda_1 = &\; \Lambda_0 + {\kappa_0\over\kappa_0+1}(y-\mu_0)(y-\mu_0)^T,
}$$ */
κ₀ 1
μ₁ = ──── μ₀ + ──── y
κ₀+1 κ₀+1
κ₁ = κ₀ + 1
ν₁ = ν₀ + 1
κ₀
Λ₁ = Λ₀ + ──── (y − μ₀)(y − μ₀)ᵀ
κ₀+1
*/
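For intuition, the update can be sketched in the scalar case (d=1). `ScalarConj` is a hypothetical illustration, not the actual NormalConj class, which operates on vectors and matrices; the key point is that Λ must be updated with the old μ and κ before they change.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical scalar (d=1) illustration of the update above.
struct ScalarConj
{
  double mu, kappa, nu, lambda;

  void
  update(double y)
  {
    // Λ₁ uses the old μ₀ and κ₀, so it must be computed first.
    lambda += kappa/(kappa + 1)*(y - mu)*(y - mu);
    mu = kappa/(kappa + 1)*mu + 1/(kappa + 1)*y;
    kappa += 1;
    nu += 1;
  }
};
```

Starting from the diffuse prior (μ=0, κ=0, ν=−1, Λ=0) and feeding in two observations reproduces the sample mean and the sum of squared deviations.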
void
NormalConj::update(const ConstVector &y)
{
......@@ -54,7 +61,7 @@ NormalConj::update(const ConstVector &y)
nu++;
}
// |NormalConj::update| multiple observations code
// NormalConj::update() multiple observations code
/* The method evaluates the formula in the header file. */
void
NormalConj::update(const ConstTwoDMatrix &ydata)
......@@ -63,7 +70,7 @@ NormalConj::update(const ConstTwoDMatrix &ydata)
update(nc);
}
// |NormalConj::update| with |NormalConj| code
// NormalConj::update() with NormalConj code
void
NormalConj::update(const NormalConj &nc)
{
......@@ -82,9 +89,9 @@ NormalConj::update(const NormalConj &nc)
nu = nu + nc.kappa;
}
/* This returns ${1\over \nu-d-1}\Lambda$, which is the mean of the
variance in the posterior distribution. If the number of degrees of
freedom is less than $d$, then NaNs are returned. */
/* This returns 1/(ν−d−1)·Λ, which is the mean of the variance in the posterior
distribution. If the number of degrees of freedom is less than d, then NaNs
are returned. */
void
NormalConj::getVariance(TwoDMatrix &v) const
{
......
......@@ -3,40 +3,51 @@
// Conjugate family for normal distribution
/* The main purpose here is to implement a class representing conjugate
distributions for mean and variance of the normal distribution. The
class has two main methods: the first one is to update itself with
respect to one observation, the second one is to update itself with
respect to anothe object of the class. In the both methods, the
previous state of the class corresponds to the prior distribution, and
the final state corresponds to the posterior distribution.
The algrebra can be found in Gelman, Carlin, Stern, Rubin (p.87). It
goes as follows: Prior conjugate distribution takes the following form:
$$\eqalign{
\Sigma \sim& {\rm InvWishart}_{\nu_0}(\Lambda_0^{-1}) \cr
\mu\vert\Sigma \sim& N(\mu_0,\Sigma/\kappa_0)
}$$
If the observations are $y_1\ldots y_n$, then the posterior distribution has the
same form with the following parameters:
$$\eqalign{
\mu_n = &\; {\kappa_0\over \kappa_0+n}\mu_0 + {n\over \kappa_0+n}\bar y\cr
\kappa_n = &\; \kappa_0 + n\cr
\nu_n = &\; \nu_0 + n\cr
\Lambda_n = &\; \Lambda_0 + S + {\kappa_0 n\over\kappa_0+n}(\bar y-\mu_0)(\bar y-\mu_0)^T,
}$$
distributions for the mean and variance of the normal distribution. The class
has two main methods: the first updates the object with respect to one
observation, the second updates it with respect to another object of the
class. In both methods, the previous state of the class corresponds to the
prior distribution, and the final state corresponds to the posterior
distribution.
The algebra can be found in Gelman, Carlin, Stern, Rubin (p.87). It goes as
follows. Prior conjugate distribution takes the following form:
Σ ↝ InvWishart_ν₀(Λ₀⁻¹)
μ|Σ ↝ 𝒩(μ₀,Σ/κ₀)
If the observations are y₁…yₙ, then the posterior distribution has the same
form with the following parameters:
κ₀ n
μₙ = ──── μ₀ + ──── ȳ
κ₀+n κ₀+n
κₙ = κ₀ + n
νₙ = ν₀ + n
κ₀·n
Λₙ = Λ₀ + S + ──── (ȳ − μ₀)(ȳ − μ₀)ᵀ
κ₀+n
where
$$\eqalign{
\bar y = &\; {1\over n}\sum_{i=1}^ny_i\cr
S = &\; \sum_{i=1}^n(y_i-\bar y)(y_i-\bar y)^T
}$$ */
        1  ₙ
    ȳ = ─  ∑ yᵢ
        n ⁱ⁼¹
         ₙ
    S =  ∑ (yᵢ − ȳ)(yᵢ − ȳ)ᵀ
        ⁱ⁼¹
*/
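The posterior formulas simplify for the diffuse prior (κ₀=0, ν₀=−1, Λ₀=0): the μ₀ term and the rank-one term vanish, leaving μₙ=ȳ, κₙ=n, νₙ=n−1, Λₙ=S. A scalar sketch under that assumption (`batchPosterior` is a hypothetical helper, not part of NormalConj):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Posterior { double mu, kappa, nu, lambda; };

// Hypothetical scalar sketch: posterior parameters from a diffuse prior
// (κ₀=0, ν₀=−1, Λ₀=0), where the formulas above reduce to
// μₙ=ȳ, κₙ=n, νₙ=n−1, Λₙ=S.
Posterior
batchPosterior(const std::vector<double> &y)
{
  double n = y.size();
  double ybar = 0.0;
  for (double yi : y)
    ybar += yi;
  ybar /= n; // sample mean
  double S = 0.0;
  for (double yi : y)
    S += (yi - ybar)*(yi - ybar); // sum of squared deviations
  return {ybar, n, n - 1, S};
}
```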
#ifndef NORMAL_CONJUGATE_H
#define NORMAL_CONJUGATE_H
#include "twod_matrix.hh"
/* The class is described by the four parameters: $\mu$, $\kappa$, $\nu$ and
$\Lambda$. */
/* The class is described by the four parameters: μ, κ, ν and Λ. */
class NormalConj
{
......@@ -47,11 +58,10 @@ protected:
TwoDMatrix lambda;
public:
/* We provide the following constructors: The first constructs diffuse
(Jeffrey's) prior. It sets $\kappa$, and $\Lambda$ to zeros, $nu$ to
$-1$ and also the mean $\mu$ to zero (it should not be
referenced). The second constructs the posterior using the diffuse
prior and the observed data (columnwise). The third is a copy
constructor. */
(Jeffrey’s) prior. It sets κ and Λ to zeros, ν to −1 and also the mean μ
to zero (it should not be referenced). The second constructs the posterior
using the diffuse prior and the observed data (columnwise). The third is a
copy constructor. */
NormalConj(int d);
NormalConj(const ConstTwoDMatrix &ydata);
NormalConj(const NormalConj &) = default;
......
......@@ -126,7 +126,7 @@ const double gu_data [] = { // just some numbers, no structure
-1.2421792262, -1.0724161722, -0.4276904972, 0.1801494950, -2.0716473264
};
const double vdata2 [] = { // 10x10 positive definite
const double vdata2 [] = { // 10×10 positive definite
0.79666, -0.15536, 0.05667, -0.21026, 0.20262, 0.28505, 0.60341, -0.09703, 0.32363, 0.13299,
-0.15536, 0.64380, -0.01131, 0.00980, 0.03755, 0.43791, 0.21784, -0.31755, -0.55911, -0.29655,
0.05667, -0.01131, 0.56165, -0.34357, -0.40584, 0.20990, 0.28348, 0.20398, -0.19856, 0.35820,
......@@ -139,7 +139,7 @@ const double vdata2 [] = { // 10x10 positive definite
0.13299, -0.29655, 0.35820, -0.31560, -0.12919, -0.02155, -0.19016, 0.41750, -0.12992, 0.89608
};
const double gy_data2 [] = { // 600 items make gy 30x20, whose gy(6:25,:) has spectrum within unit
const double gy_data2 [] = { // 600 items make gy 30×20, whose gy(6:25,:) has spectrum within unit
0.39414, -0.29766, 0.08948, -0.19204, -0.00750, 0.21159, 0.05494, 0.06225, 0.01771, 0.21913,
-0.01373, 0.20086, -0.06086, -0.10955, 0.14424, -0.08390, 0.03948, -0.14713, 0.11674, 0.05091,
0.24039, 0.28307, -0.11835, 0.13030, 0.11682, -0.27444, -0.19311, -0.16654, 0.12867, 0.25116,
......@@ -293,7 +293,7 @@ TestRunnable::korder_unfold_fold(int maxdim, int unfold_dim,
<< std::defaultfloat;
Journal jr("out.txt");
KOrder kord(nstat, npred, nboth, nforw, c, gy, gu, v, jr);
// perform unfolded steps until unfold_dim
// Perform unfolded steps until unfold_dim
double maxerror = 0.0;
for (int d = 2; d <= unfold_dim; d++)
{
......@@ -311,7 +311,7 @@ TestRunnable::korder_unfold_fold(int maxdim, int unfold_dim,
<< std::endl;
maxerror = std::max(err, maxerror);
}
// perform folded steps until maxdim
// Perform folded steps until maxdim
if (unfold_dim < maxdim)
{
clock_t swtime = clock();
......@@ -362,7 +362,7 @@ public:
}
};
// same dimension as Smets & Wouters
// Same dimension as Smets & Wouters
class UnfoldKOrderSW : public TestRunnable
{
public:
......@@ -415,12 +415,12 @@ int
main()
{
std::vector<std::unique_ptr<TestRunnable>> all_tests;
// fill in vector of all tests
// Fill in vector of all tests
all_tests.push_back(std::make_unique<UnfoldKOrderSmall>());
all_tests.push_back(std::make_unique<UnfoldKOrderSW>());
all_tests.push_back(std::make_unique<UnfoldFoldKOrderSW>());
// find maximum dimension and maximum nvar
// Find maximum dimension and maximum nvar
int dmax = 0;
int nvmax = 0;
for (const auto &test : all_tests)
......@@ -432,7 +432,7 @@ main()
}
TLStatic::init(dmax, nvmax); // initialize library
// launch the tests
// Launch the tests
int success = 0;
for (const auto &test : all_tests)
{
......