.. default-domain:: cpp

.. cpp:namespace:: ceres

.. _chapter-numerical_derivatives:

===================
Numeric derivatives
===================

The other extreme from using analytic derivatives is to use numeric
derivatives. The key observation here is that the process of
differentiating a function :math:`f(x)` w.r.t :math:`x` can be written
as the limiting process:

.. math::
   Df(x) = \lim_{h \rightarrow 0} \frac{f(x + h) - f(x)}{h}
Forward Differences
===================

Now of course one cannot perform the limiting operation numerically on
a computer so we do the next best thing, which is to choose a small
value of :math:`h` and approximate the derivative as

.. math::
   Df(x) \approx \frac{f(x + h) - f(x)}{h}

The above formula is the simplest, most basic form of numeric
differentiation. It is known as the *Forward Difference* formula.
So how would one go about constructing a numerically differentiated
version of ``Rat43Analytic`` (`Rat43
<http://www.itl.nist.gov/div898/strd/nls/data/ratkowsky3.shtml>`_) in
Ceres Solver? This is done in two steps:

1. Define a *Functor* that given the parameter values will evaluate the
   residual for a given :math:`(x,y)`.

2. Construct a :class:`CostFunction` by using
   :class:`NumericDiffCostFunction` to wrap an instance of
   ``Rat43CostFunctor``.
.. code-block:: c++

  struct Rat43CostFunctor {
    Rat43CostFunctor(const double x, const double y) : x_(x), y_(y) {}

    bool operator()(const double* parameters, double* residuals) const {
      const double b1 = parameters[0];
      const double b2 = parameters[1];
      const double b3 = parameters[2];
      const double b4 = parameters[3];
      residuals[0] = b1 * pow(1.0 + exp(b2 - b3 * x_), -1.0 / b4) - y_;
      return true;
    }

    const double x_;
    const double y_;
  };

  CostFunction* cost_function =
    new NumericDiffCostFunction<Rat43CostFunctor, FORWARD, 1, 4>(
      new Rat43CostFunctor(x, y));
This is about the minimum amount of work one can expect to do to
define the cost function. The only thing that the user needs to do is
to make sure that the evaluation of the residual is implemented
correctly and efficiently.
Before going further, it is instructive to get an estimate of the
error in the forward difference formula. We do this by considering the
`Taylor expansion <https://en.wikipedia.org/wiki/Taylor_series>`_ of
:math:`f` near :math:`x`.

.. math::
   \begin{align}
   f(x+h) &= f(x) + h Df(x) + \frac{h^2}{2!} D^2f(x) +
   \frac{h^3}{3!}D^3f(x) + \cdots \\
   Df(x) &= \frac{f(x + h) - f(x)}{h} - \left [\frac{h}{2!}D^2f(x) +
   \frac{h^2}{3!}D^3f(x) + \cdots \right]\\
   Df(x) &= \frac{f(x + h) - f(x)}{h} + O(h)
   \end{align}

i.e., the error in the forward difference formula is
:math:`O(h)` [#f4]_.
Implementation Details
----------------------

:class:`NumericDiffCostFunction` implements a generic algorithm to
numerically differentiate a given functor. While the actual
implementation of :class:`NumericDiffCostFunction` is complicated, the
net result is a :class:`CostFunction` that roughly looks something
like the following:

.. code-block:: c++

  class Rat43NumericDiffForward : public SizedCostFunction<1, 4> {
   public:
    Rat43NumericDiffForward(const Rat43CostFunctor* functor)
        : functor_(functor) {}
    virtual ~Rat43NumericDiffForward() {}
    virtual bool Evaluate(double const* const* parameters,
                          double* residuals,
                          double** jacobians) const {
      (*functor_)(parameters[0], residuals);
      if (!jacobians) return true;
      double* jacobian = jacobians[0];
      if (!jacobian) return true;

      const double f = residuals[0];
      double parameters_plus_h[4];
      for (int i = 0; i < 4; ++i) {
        std::copy(parameters[0], parameters[0] + 4, parameters_plus_h);
        const double kRelativeStepSize = 1e-6;
        const double h = std::abs(parameters[0][i]) * kRelativeStepSize;
        parameters_plus_h[i] += h;
        double f_plus;
        (*functor_)(parameters_plus_h, &f_plus);
        jacobian[i] = (f_plus - f) / h;
      }
      return true;
    }

   private:
    std::unique_ptr<const Rat43CostFunctor> functor_;
  };
Note the choice of step size :math:`h` in the above code: instead of
an absolute step size that is the same for all parameters, we use a
relative step size of :math:`\text{kRelativeStepSize} = 10^{-6}`. This
gives better derivative estimates than an absolute step size [#f2]_
[#f3]_. This choice of step size only works for parameter values that
are not close to zero. So the actual implementation of
:class:`NumericDiffCostFunction` uses a more complex step size
selection logic, where close to zero, it switches to a fixed step
size.
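A minimal sketch of such a step size rule follows. The constants and
the function name are illustrative only; the actual logic inside Ceres
Solver differs in detail:

```cpp
#include <algorithm>
#include <cmath>

// Relative step size with a fixed fallback near zero. kMinStepSize and
// kRelativeStepSize are illustrative values, not the ones used by Ceres.
double StepSize(double x) {
  const double kRelativeStepSize = 1e-6;
  const double kMinStepSize = 1e-6;  // used when |x| is close to zero
  return std::max(std::abs(x) * kRelativeStepSize, kMinStepSize);
}
```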
Central Differences
===================

The :math:`O(h)` error in the Forward Difference formula is okay but
not great. A better method is to use the *Central Difference* formula:

.. math::
   Df(x) \approx \frac{f(x + h) - f(x - h)}{2h}

Notice that if the value of :math:`f(x)` is known, the Forward
Difference formula only requires one extra evaluation, but the Central
Difference formula requires two evaluations, making it twice as
expensive. So is the extra evaluation worth it?
To answer this question, we again compute the error of approximation
in the central difference formula:

.. math::
   \begin{align}
   f(x + h) &= f(x) + h Df(x) + \frac{h^2}{2!}
   D^2f(x) + \frac{h^3}{3!} D^3f(x) + \frac{h^4}{4!} D^4f(x) + \cdots\\
   f(x - h) &= f(x) - h Df(x) + \frac{h^2}{2!}
   D^2f(x) - \frac{h^3}{3!} D^3f(x) + \frac{h^4}{4!} D^4f(x) +
   \cdots\\
   Df(x) & = \frac{f(x + h) - f(x - h)}{2h} + \frac{h^2}{3!}
   D^3f(x) + \frac{h^4}{5!}
   D^5f(x) + \cdots \\
   Df(x) & = \frac{f(x + h) - f(x - h)}{2h} + O(h^2)
   \end{align}

The error of the Central Difference formula is :math:`O(h^2)`, i.e.,
the error goes down quadratically whereas the error in the Forward
Difference formula only goes down linearly.
Using central differences instead of forward differences in Ceres
Solver is a simple matter of changing a template argument to
:class:`NumericDiffCostFunction` as follows:

.. code-block:: c++

  CostFunction* cost_function =
    new NumericDiffCostFunction<Rat43CostFunctor, CENTRAL, 1, 4>(
      new Rat43CostFunctor(x, y));
But what do these differences in the error mean in practice? To see
this, consider the problem of evaluating the derivative of the
univariate function

.. math::
   f(x) = \frac{e^x}{\sin x - x^2},

at :math:`x = 1.0`.

It is easy to determine that :math:`Df(1.0) =
140.73773557129658`. Using this value as reference, we can now compute
the relative error in the forward and central difference formulae as a
function of the absolute step size and plot them.

.. figure:: forward_central_error.png
   :figwidth: 100%
   :align: center
Reading the graph from right to left, a number of things stand out:

1. The graphs for both formulae have two distinct regions. At first,
   starting from a large value of :math:`h` the error goes down as
   the effect of truncating the Taylor series dominates, but as the
   value of :math:`h` continues to decrease, the error starts
   increasing again as roundoff error starts to dominate the
   computation. So we cannot just keep on reducing the value of
   :math:`h` to get better estimates of :math:`Df`. The fact that we
   are using finite precision arithmetic becomes a limiting factor.

2. The Forward Difference formula is not a great method for evaluating
   derivatives. The Central Difference formula converges much more
   quickly to a more accurate estimate of the derivative with
   decreasing step size. So unless the evaluation of :math:`f(x)` is
   so expensive that you absolutely cannot afford the extra
   evaluation required by central differences, **do not use the
   Forward Difference formula**.

3. Neither formula works well for a poorly chosen value of :math:`h`.
Ridders' Method
===============

So, can we get better estimates of :math:`Df` without requiring such
small values of :math:`h` that we start hitting floating point
roundoff errors?

One possible approach is to find a method whose error goes down faster
than :math:`O(h^2)`. This can be done by applying `Richardson
Extrapolation
<https://en.wikipedia.org/wiki/Richardson_extrapolation>`_ to the
problem of differentiation. This is also known as *Ridders' Method*
[Ridders]_.

Let us recall the error in the central differences formula:

.. math::
   \begin{align}
   Df(x) & = \frac{f(x + h) - f(x - h)}{2h} + \frac{h^2}{3!}
   D^3f(x) + \frac{h^4}{5!}
   D^5f(x) + \cdots\\
   & = \frac{f(x + h) - f(x - h)}{2h} + K_2 h^2 + K_4 h^4 + \cdots
   \end{align}

The key thing to note here is that the terms :math:`K_2, K_4, \dots`
are independent of :math:`h` and only depend on :math:`x`.
Let us now define:

.. math::
   A(1, m) = \frac{f(x + h/2^{m-1}) - f(x - h/2^{m-1})}{2h/2^{m-1}}.

Then observe that

.. math::
   Df(x) = A(1,1) + K_2 h^2 + K_4 h^4 + \cdots

and

.. math::
   Df(x) = A(1, 2) + K_2 (h/2)^2 + K_4 (h/2)^4 + \cdots

Here we have halved the step size to obtain a second central
differences estimate of :math:`Df(x)`. Combining these two estimates,
we get:

.. math::
   Df(x) = \frac{4 A(1, 2) - A(1,1)}{4 - 1} + O(h^4)

which is an approximation of :math:`Df(x)` with truncation error that
goes down as :math:`O(h^4)`. But we do not have to stop here. We can
iterate this process to obtain even more accurate estimates as
follows:
.. math::
   A(n, m) = \begin{cases}
    \frac{\displaystyle f(x + h/2^{m-1}) - f(x -
    h/2^{m-1})}{\displaystyle 2h/2^{m-1}} & n = 1 \\
    \frac{\displaystyle 4^{n-1} A(n - 1, m + 1) - A(n - 1, m)}{\displaystyle 4^{n-1} - 1} & n > 1
   \end{cases}

It is straightforward to show that the approximation error in
:math:`A(n, 1)` is :math:`O(h^{2n})`. To see how the above formula can
be implemented in practice to compute :math:`A(n,1)` it is helpful to
structure the computation as the following tableau:

.. math::
   \begin{array}{ccccc}
   A(1,1) & A(1, 2) & A(1, 3) & A(1, 4) & \cdots\\
          & A(2, 1) & A(2, 2) & A(2, 3) & \cdots\\
          &         & A(3, 1) & A(3, 2) & \cdots\\
          &         &         & A(4, 1) & \cdots \\
          &         &         &         & \ddots
   \end{array}
So, to compute :math:`A(n, 1)` for increasing values of :math:`n` we
move from the left to the right, computing one column at a
time. Assuming that the primary cost here is the evaluation of the
function :math:`f(x)`, the cost of computing a new column of the above
tableau is two function evaluations, since evaluating :math:`A(1, n)`
requires evaluating the central difference formula for a step size of
:math:`2^{1-n}h`, and the remaining entries of the column follow from
the extrapolation formula at negligible cost.

Applying this method to :math:`f(x) = \frac{e^x}{\sin x - x^2}`
starting with a fairly large step size :math:`h = 0.01`, we get:
.. math::
   \begin{array}{rrrrr}
   141.678097131 &140.971663667 &140.796145400 &140.752333523 &140.741384778\\
                 &140.736185846 &140.737639311 &140.737729564 &140.737735196\\
                 &              &140.737736209 &140.737735581 &140.737735571\\
                 &              &              &140.737735571 &140.737735571\\
                 &              &              &              &140.737735571\\
   \end{array}

Compared to the *correct* value :math:`Df(1.0) = 140.73773557129658`,
:math:`A(5, 1)` has a relative error of :math:`10^{-13}`. For
comparison, the relative error for the central difference formula with
the same step size (:math:`0.01/2^4 = 0.000625`) is :math:`10^{-5}`.
The above tableau is the basis of Ridders' method for numeric
differentiation. The full implementation is an adaptive scheme that
tracks its own estimation error and stops automatically when the
desired precision is reached. Of course it is more expensive than the
forward and central difference formulae, but is also significantly
more robust and accurate.

Using Ridders' method instead of forward or central differences in
Ceres is again a simple matter of changing a template argument to
:class:`NumericDiffCostFunction` as follows:

.. code-block:: c++

  CostFunction* cost_function =
    new NumericDiffCostFunction<Rat43CostFunctor, RIDDERS, 1, 4>(
      new Rat43CostFunctor(x, y));
The following graph shows the relative error of the three methods as a
function of the absolute step size. For Ridders' method we assume
that the step size for evaluating :math:`A(n,1)` is :math:`2^{1-n}h`.

.. figure:: forward_central_ridders_error.png
   :figwidth: 100%
   :align: center

Using the 10 function evaluations that are needed to compute
:math:`A(5,1)` we are able to approximate :math:`Df(1.0)` about 1000
times better than the best central differences estimate. To put these
numbers in perspective, machine epsilon for double precision
arithmetic is :math:`\approx 2.22 \times 10^{-16}`.
Going back to ``Rat43``, let us also look at the runtime cost of the
various methods for computing numeric derivatives.

==========================   =========
CostFunction                 Time (ns)
==========================   =========
Rat43Analytic                      255
Rat43AnalyticOptimized              92
Rat43NumericDiffForward            262
Rat43NumericDiffCentral            517
Rat43NumericDiffRidders           3760
==========================   =========

As expected, Central Differences is about twice as expensive as
Forward Differences and the remarkable accuracy improvements of
Ridders' method cost an order of magnitude more runtime.
Recommendations
===============

Numeric differentiation should be used when you cannot compute the
derivatives either analytically or using automatic differentiation. This
is usually the case when you are calling an external library or
function whose analytic form you do not know, or even if you do, you
are not in a position to re-write it in a manner required to use
:ref:`chapter-automatic_derivatives`.

When using numeric differentiation, use at least Central Differences,
and if execution time is not a concern or the objective function is
such that determining a good static relative step size is hard,
Ridders' method is recommended.
.. rubric:: Footnotes

.. [#f2] `Numerical Differentiation
         <https://en.wikipedia.org/wiki/Numerical_differentiation#Practical_considerations_using_floating_point_arithmetic>`_

.. [#f3] [Press]_ Numerical Recipes, Section 5.7

.. [#f4] In asymptotic error analysis, an error of :math:`O(h^k)`
         means that the absolute-value of the error is at most some
         constant times :math:`h^k` when :math:`h` is close enough to
         :math:`0`.