diff --git a/doc/dynare.texi b/doc/dynare.texi
index 1679838fc6933de78d4dce1dabe4dfdc96e9687d..c6adf6f22a27905dfa78483914365717dddbd8a1 100644
--- a/doc/dynare.texi
+++ b/doc/dynare.texi
@@ -8052,11 +8052,10 @@ end;
 This command computes the first order approximation of the policy that
 maximizes the policy maker's objective function subject to the
 constraints provided by the equilibrium path of the private economy and under 
-commitment to this optimal policy. Following @cite{Woodford (1999)}, the Ramsey 
-policy is computed using a timeless perspective. That is, the government forgoes 
-its first-period advantage and does not exploit the preset privates sector expectations 
-(which are the source of the well-known time inconsistency that requires the 
-assumption of commitment). Rather, it acts as if the initial multipliers had 
+commitment to this optimal policy. The Ramsey policy is computed
+by approximating the equilibrium system around the perturbation point where the 
+Lagrange multipliers are at their steady state, i.e. where the Ramsey planner acts 
+as if the initial multipliers had 
 been set to 0 in the distant past, giving them time to converge to their steady 
 state value. Consequently, the optimal decision rules are computed around this steady state 
 of the endogenous variables and the Lagrange multipliers.
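+
+A minimal sketch of the relevant commands is given below; the quadratic
+objective, the variables @code{inflation} and @code{output_gap}, the weight
+@code{lambda} and the discount factor are purely illustrative and assume a
+model declared earlier in the @file{.mod} file:
+
+@example
+// Illustrative objective and discount factor; the model itself and the
+// parameter lambda are assumed to be declared earlier in the file
+planner_objective inflation^2 + lambda*output_gap^2;
+
+ramsey_policy(planner_discount=0.99);
+@end example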
@@ -8113,14 +8112,16 @@ In addition, it stores the value of planner objective function under
 Ramsey policy in @code{oo_.planner_objective_value}, given the initial values 
 of the endogenous state variables. If not specified with @code{histval}, they are 
 taken to be at their steady state values. The result is a 1 by 2 
-vector, where the first entry stores the value of the planner objective under 
-the timeless perspective to Ramsey policy, i.e. where the initial Lagrange
+vector, where the first entry stores the value of the planner objective when the initial Lagrange
 multipliers associated with the planner's problem are set to their steady state
 values (@pxref{ramsey_policy}).
+
 In contrast, the second entry stores the value of the planner objective with 
 initial Lagrange multipliers of the planner's problem set to 0, i.e. it is assumed 
-that the planner succumbs to the temptation to exploit the preset private expecatations 
-in the first period (but not in later periods due to commitment).
+that the planner exploits its ability to surprise private agents in the first
+period of implementing Ramsey policy. This is the value of implementing
+optimal policy for the first time and committing not to re-optimize in the future.
+
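+For instance, once @code{ramsey_policy} has run, the two entries can be
+inspected at the MATLAB or Octave prompt (a sketch showing only the indexing
+of the stored 1 by 2 vector):
+
+@example
+oo_.planner_objective_value(1) % initial multipliers at their steady state values
+oo_.planner_objective_value(2) % initial multipliers set to 0
+@end example
+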
 Because it entails computing at least a second order approximation, this
 computation is skipped with a message when the model is too large (more than 180 state
 variables, including lagged Lagrange multipliers).
@@ -14392,11 +14393,6 @@ Villemot, Sébastien (2011): ``Solving rational expectations models at
 first order: what Dynare does,'' @i{Dynare Working Papers}, 2,
 CEPREMAP
 
-@item
-Woodford, Michael (2011): ``Commentary: How Should Monetary Policy Be 
-Conducted in an Era of Price Stability?'' @i{Proceedings - Economic Policy Symposium - Jackson Hole}, 
-277-316
-
 
 @end itemize