.. highlight:: c++

.. default-domain:: cpp

.. cpp:namespace:: ceres

.. _chapter-nnls_tutorial:

========================
Non-linear Least Squares
========================

Introduction
============

Ceres can solve bounds constrained robustified non-linear least
squares problems of the form

.. math::
   :label: ceresproblem

   \min_{\mathbf{x}} &\quad \frac{1}{2}\sum_{i} \rho_i\left(\left\|f_i\left(x_{i_1}, ... ,x_{i_k}\right)\right\|^2\right) \\
   \text{s.t.} &\quad l_j \le x_j \le u_j
Problems of this form come up in a broad range of areas across
science and engineering - from `fitting curves`_ in statistics, to
constructing `3D models from photographs`_ in computer vision.

.. _fitting curves: http://en.wikipedia.org/wiki/Nonlinear_regression
.. _3D models from photographs: http://en.wikipedia.org/wiki/Bundle_adjustment
In this chapter we will learn how to solve :eq:`ceresproblem` using
Ceres Solver. Full working code for all the examples described in this
chapter and more can be found in the `examples
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/>`_
directory.

The expression
:math:`\rho_i\left(\left\|f_i\left(x_{i_1},...,x_{i_k}\right)\right\|^2\right)`
is known as a ``ResidualBlock``, where :math:`f_i(\cdot)` is a
:class:`CostFunction` that depends on the parameter blocks
:math:`\left[x_{i_1},... , x_{i_k}\right]`. In most optimization
problems, small groups of scalars occur together. For example, the
three components of a translation vector and the four components of
the quaternion that define the pose of a camera. We refer to such a
group of scalars as a ``ParameterBlock``. Of course a
``ParameterBlock`` can just be a single parameter. :math:`l_j` and
:math:`u_j` are bounds on the parameter block :math:`x_j`.

:math:`\rho_i` is a :class:`LossFunction`. A :class:`LossFunction` is
a scalar function that is used to reduce the influence of outliers on
the solution of non-linear least squares problems.

As a special case, when :math:`\rho_i(x) = x`, i.e., the identity
function, and :math:`l_j = -\infty` and :math:`u_j = \infty` we get
the more familiar `non-linear least squares problem
<http://en.wikipedia.org/wiki/Non-linear_least_squares>`_.

.. math:: \frac{1}{2}\sum_{i} \left\|f_i\left(x_{i_1}, ... ,x_{i_k}\right)\right\|^2.
   :label: ceresproblemnonrobust
.. _section-hello-world:

Hello World!
============

To get started, consider the problem of finding the minimum of the
function

.. math:: \frac{1}{2}(10 -x)^2.

This is a trivial problem, whose minimum is located at :math:`x = 10`,
but it is a good place to start to illustrate the basics of solving a
problem with Ceres [#f1]_.

The first step is to write a functor that will evaluate the function
:math:`f(x) = 10 - x`:

.. code-block:: c++

   struct CostFunctor {
     template <typename T>
     bool operator()(const T* const x, T* residual) const {
       residual[0] = 10.0 - x[0];
       return true;
     }
   };
The important thing to note here is that ``operator()`` is a templated
method, which assumes that all its inputs and outputs are of some type
``T``. The use of templating here allows Ceres to call
``CostFunctor::operator<T>()``, with ``T=double`` when just the value
of the residual is needed, and with a special type ``T=Jet`` when the
Jacobians are needed. In :ref:`section-derivatives` we will discuss the
various ways of supplying derivatives to Ceres in more detail.
Once we have a way of computing the residual function, it is now time
to construct a non-linear least squares problem using it and have
Ceres solve it.
.. code-block:: c++

   int main(int argc, char** argv) {
     google::InitGoogleLogging(argv[0]);

     // The variable to solve for with its initial value.
     double initial_x = 5.0;
     double x = initial_x;

     // Build the problem.
     Problem problem;

     // Set up the only cost function (also known as residual). This uses
     // auto-differentiation to obtain the derivative (jacobian).
     CostFunction* cost_function =
         new AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor);
     problem.AddResidualBlock(cost_function, nullptr, &x);

     // Run the solver!
     Solver::Options options;
     options.linear_solver_type = ceres::DENSE_QR;
     options.minimizer_progress_to_stdout = true;
     Solver::Summary summary;
     Solve(options, &problem, &summary);

     std::cout << summary.BriefReport() << "\n";
     std::cout << "x : " << initial_x << " -> " << x << "\n";
     return 0;
   }
:class:`AutoDiffCostFunction` takes a ``CostFunctor`` as input,
automatically differentiates it and gives it a :class:`CostFunction`
interface.

Compiling and running `examples/helloworld.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/helloworld.cc>`_
gives us

.. code-block:: bash

   iter      cost      cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
      0  4.512500e+01    0.00e+00    9.50e+00   0.00e+00   0.00e+00  1.00e+04        0    5.33e-04    3.46e-03
      1  4.511598e-07    4.51e+01    9.50e-04   9.50e+00   1.00e+00  3.00e+04        1    5.00e-04    4.05e-03
      2  5.012552e-16    4.51e-07    3.17e-08   9.50e-04   1.00e+00  9.00e+04        1    1.60e-05    4.09e-03
   Ceres Solver Report: Iterations: 2, Initial cost: 4.512500e+01, Final cost: 5.012552e-16, Termination: CONVERGENCE
   x : 5 -> 10
Starting from :math:`x=5`, the solver in two iterations goes to 10
[#f2]_. The careful reader will note that this is a linear problem and
one linear solve should be enough to get the optimal value. The
default configuration of the solver is aimed at non-linear problems,
and for reasons of simplicity we did not change it in this example. It
is indeed possible to obtain the solution to this problem using Ceres
in one iteration. Also note that the solver did get very close to the
optimal function value of 0 in the very first iteration. We will
discuss these issues in greater detail when we talk about convergence
and parameter settings for Ceres.
.. rubric:: Footnotes

.. [#f1] `examples/helloworld.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/helloworld.cc>`_

.. [#f2] Actually the solver ran for three iterations, and it was by
   looking at the value returned by the linear solver in the third
   iteration that it observed that the update to the parameter block
   was too small and declared convergence. Ceres only prints out the
   display at the end of an iteration, and terminates as soon as it
   detects convergence, which is why you only see two iterations here
   and not three.
.. _section-derivatives:

Derivatives
===========

Ceres Solver, like most optimization packages, depends on being able
to evaluate the value and the derivatives of each term in the
objective function at arbitrary parameter values. Doing so correctly
and efficiently is essential to getting good results. Ceres Solver
provides a number of ways of doing so. You have already seen one of
them in action -- automatic differentiation in `examples/helloworld.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/helloworld.cc>`_.
We now consider the other two possibilities: analytic and numeric
derivatives.
Numeric Derivatives
-------------------

In some cases, it is not possible to define a templated cost functor,
for example when the evaluation of the residual involves a call to a
library function that you do not have control over. In such a
situation, numerical differentiation can be used. The user defines a
functor which computes the residual value and constructs a
:class:`NumericDiffCostFunction` using it. e.g., for :math:`f(x) = 10 - x`
the corresponding functor would be

.. code-block:: c++

   struct NumericDiffCostFunctor {
     bool operator()(const double* const x, double* residual) const {
       residual[0] = 10.0 - x[0];
       return true;
     }
   };
which is added to the :class:`Problem` as:

.. code-block:: c++

   CostFunction* cost_function =
       new NumericDiffCostFunction<NumericDiffCostFunctor, ceres::CENTRAL, 1, 1>(
           new NumericDiffCostFunctor);
   problem.AddResidualBlock(cost_function, nullptr, &x);

Notice the parallel to the case where we were using automatic
differentiation

.. code-block:: c++

   CostFunction* cost_function =
       new AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor);
   problem.AddResidualBlock(cost_function, nullptr, &x);

The construction looks almost identical to the one used for automatic
differentiation, except for an extra template parameter that indicates
the kind of finite differencing scheme to be used for computing the
numerical derivatives [#f3]_. For more details see the documentation
for :class:`NumericDiffCostFunction`.
**Generally speaking we recommend automatic differentiation instead of
numeric differentiation. The use of C++ templates makes automatic
differentiation efficient, whereas numeric differentiation is
expensive, prone to numeric errors, and leads to slower convergence.**
Analytic Derivatives
--------------------

In some cases, using automatic differentiation is not possible. For
example, it may be the case that it is more efficient to compute the
derivatives in closed form instead of relying on the chain rule used
by the automatic differentiation code.

In such cases, it is possible to supply your own residual and Jacobian
computation code. To do this, define a subclass of
:class:`CostFunction` or :class:`SizedCostFunction` if you know the
sizes of the parameters and residuals at compile time. Here for
example is ``QuadraticCostFunction``, which implements :math:`f(x) =
10 - x`.
.. code-block:: c++

   class QuadraticCostFunction : public ceres::SizedCostFunction<1, 1> {
    public:
     virtual ~QuadraticCostFunction() {}
     virtual bool Evaluate(double const* const* parameters,
                           double* residuals,
                           double** jacobians) const {
       const double x = parameters[0][0];
       residuals[0] = 10 - x;

       // Compute the Jacobian if asked for.
       if (jacobians != nullptr && jacobians[0] != nullptr) {
         jacobians[0][0] = -1;
       }
       return true;
     }
   };
``QuadraticCostFunction::Evaluate`` is provided with an input array of
``parameters``, an output array ``residuals`` for residuals and an
output array ``jacobians`` for Jacobians. The ``jacobians`` array is
optional; ``Evaluate`` is expected to check whether it is non-null,
and if so, fill it with the values of the derivative of the residual
function. In this case, since the residual function is linear, the
Jacobian is constant [#f4]_.

As can be seen from the above code fragments, implementing
:class:`CostFunction` objects is a bit tedious. We recommend that
unless you have a good reason to manage the Jacobian computation
yourself, you use :class:`AutoDiffCostFunction` or
:class:`NumericDiffCostFunction` to construct your residual blocks.
More About Derivatives
----------------------

Computing derivatives is by far the most complicated part of using
Ceres, and depending on the circumstance the user may need more
sophisticated ways of computing derivatives. This section just
scratches the surface of how derivatives can be supplied to
Ceres. Once you are comfortable with using
:class:`NumericDiffCostFunction` and :class:`AutoDiffCostFunction` we
recommend taking a look at :class:`DynamicAutoDiffCostFunction`,
:class:`CostFunctionToFunctor`, :class:`NumericDiffFunctor` and
:class:`ConditionedCostFunction` for more advanced ways of
constructing and computing cost functions.

.. rubric:: Footnotes

.. [#f3] `examples/helloworld_numeric_diff.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/helloworld_numeric_diff.cc>`_.

.. [#f4] `examples/helloworld_analytic_diff.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/helloworld_analytic_diff.cc>`_.
.. _section-powell:

Powell's Function
=================

Consider now a slightly more complicated example -- the minimization
of Powell's function [#f5]_. Let :math:`x = \left[x_1, x_2, x_3, x_4 \right]`
and

.. math::

   \begin{align}
      f_1(x) &= x_1 + 10x_2 \\
      f_2(x) &= \sqrt{5}  (x_3 - x_4)\\
      f_3(x) &= (x_2 - 2x_3)^2\\
      f_4(x) &= \sqrt{10}  (x_1 - x_4)^2\\
      F(x) &= \left[f_1(x),\ f_2(x),\ f_3(x),\ f_4(x) \right]
   \end{align}

:math:`F(x)` is a function of four parameters, has four residuals
and we wish to find :math:`x` such that :math:`\frac{1}{2}\|F(x)\|^2`
is minimized.
Again, the first step is to define functors that evaluate the terms
in the objective function. Here is the code for evaluating
:math:`f_4(x_1, x_4)`:

.. code-block:: c++

   struct F4 {
     template <typename T>
     bool operator()(const T* const x1, const T* const x4, T* residual) const {
       residual[0] = sqrt(10.0) * (x1[0] - x4[0]) * (x1[0] - x4[0]);
       return true;
     }
   };
Similarly, we can define classes ``F1``, ``F2`` and ``F3`` to evaluate
:math:`f_1(x_1, x_2)`, :math:`f_2(x_3, x_4)` and :math:`f_3(x_2, x_3)`
respectively. Using these, the problem can be constructed as follows:

.. code-block:: c++

   double x1 =  3.0; double x2 = -1.0; double x3 =  0.0; double x4 = 1.0;

   Problem problem;

   // Add residual terms to the problem using the autodiff
   // wrapper to get the derivatives automatically.
   problem.AddResidualBlock(
       new AutoDiffCostFunction<F1, 1, 1, 1>(new F1), nullptr, &x1, &x2);
   problem.AddResidualBlock(
       new AutoDiffCostFunction<F2, 1, 1, 1>(new F2), nullptr, &x3, &x4);
   problem.AddResidualBlock(
       new AutoDiffCostFunction<F3, 1, 1, 1>(new F3), nullptr, &x2, &x3);
   problem.AddResidualBlock(
       new AutoDiffCostFunction<F4, 1, 1, 1>(new F4), nullptr, &x1, &x4);
Note that each ``ResidualBlock`` only depends on the two parameters
that the corresponding residual object depends on and not on all four
parameters. Compiling and running `examples/powell.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/powell.cc>`_
gives us:

.. code-block:: bash

   Initial x1 = 3, x2 = -1, x3 = 0, x4 = 1
   iter      cost      cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
      0  1.075000e+02    0.00e+00    1.55e+02   0.00e+00   0.00e+00  1.00e+04        0    2.91e-05    3.40e-04
      1  5.036190e+00    1.02e+02    2.00e+01   0.00e+00   9.53e-01  3.00e+04        1    4.98e-05    3.99e-04
      2  3.148168e-01    4.72e+00    2.50e+00   6.23e-01   9.37e-01  9.00e+04        1    2.15e-06    4.06e-04
      3  1.967760e-02    2.95e-01    3.13e-01   3.08e-01   9.37e-01  2.70e+05        1    9.54e-07    4.10e-04
      4  1.229900e-03    1.84e-02    3.91e-02   1.54e-01   9.37e-01  8.10e+05        1    1.91e-06    4.14e-04
      5  7.687123e-05    1.15e-03    4.89e-03   7.69e-02   9.37e-01  2.43e+06        1    1.91e-06    4.18e-04
      6  4.804625e-06    7.21e-05    6.11e-04   3.85e-02   9.37e-01  7.29e+06        1    1.19e-06    4.21e-04
      7  3.003028e-07    4.50e-06    7.64e-05   1.92e-02   9.37e-01  2.19e+07        1    1.91e-06    4.25e-04
      8  1.877006e-08    2.82e-07    9.54e-06   9.62e-03   9.37e-01  6.56e+07        1    9.54e-07    4.28e-04
      9  1.173223e-09    1.76e-08    1.19e-06   4.81e-03   9.37e-01  1.97e+08        1    9.54e-07    4.32e-04
     10  7.333425e-11    1.10e-09    1.49e-07   2.40e-03   9.37e-01  5.90e+08        1    9.54e-07    4.35e-04
     11  4.584044e-12    6.88e-11    1.86e-08   1.20e-03   9.37e-01  1.77e+09        1    9.54e-07    4.38e-04
     12  2.865573e-13    4.30e-12    2.33e-09   6.02e-04   9.37e-01  5.31e+09        1    2.15e-06    4.42e-04
     13  1.791438e-14    2.69e-13    2.91e-10   3.01e-04   9.37e-01  1.59e+10        1    1.91e-06    4.45e-04
     14  1.120029e-15    1.68e-14    3.64e-11   1.51e-04   9.37e-01  4.78e+10        1    2.15e-06    4.48e-04

   Solver Summary (v 2.2.0-eigen-(3.4.0)-lapack-suitesparse-(7.1.0)-metis-(5.1.0)-acceleratesparse-eigensparse)

                                        Original                  Reduced
   Parameter blocks                            4                        4
   Parameters                                  4                        4
   Residual blocks                             4                        4
   Residuals                                   4                        4

   Minimizer                        TRUST_REGION

   Dense linear algebra library            EIGEN
   Trust region strategy     LEVENBERG_MARQUARDT
                                           Given                     Used
   Linear solver                        DENSE_QR                 DENSE_QR
   Threads                                     1                        1
   Linear solver ordering              AUTOMATIC                        4

   Cost:
   Initial                          1.075000e+02
   Final                            1.120029e-15
   Change                           1.075000e+02

   Minimizer iterations                       15
   Successful steps                           15
   Unsuccessful steps                          0

   Time (in seconds):
   Preprocessor                         0.000311

     Residual only evaluation           0.000002 (14)
     Jacobian & residual evaluation     0.000023 (15)
     Linear solver                      0.000043 (14)
   Minimizer                            0.000163

   Postprocessor                        0.000012
   Total                                0.000486

   Termination:                      CONVERGENCE (Gradient tolerance reached. Gradient max norm: 3.642190e-11 <= 1.000000e-10)

   Final x1 = 0.000146222, x2 = -1.46222e-05, x3 = 2.40957e-05, x4 = 2.40957e-05
It is easy to see that the optimal solution to this problem is at
:math:`x_1=0, x_2=0, x_3=0, x_4=0` with an objective function value of
:math:`0`. In 14 iterations, Ceres finds a solution with an objective
function value of :math:`\approx 1.1\times 10^{-15}`.

.. rubric:: Footnotes

.. [#f5] `examples/powell.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/powell.cc>`_.
.. _section-fitting:

Curve Fitting
=============

The examples we have seen until now are simple optimization problems
with no data. The original purpose of least squares and non-linear
least squares analysis was fitting curves to data. It is only
appropriate that we now consider an example of such a problem
[#f6]_. It contains data generated by sampling the curve :math:`y =
e^{0.3x + 0.1}` and adding Gaussian noise with standard deviation
:math:`\sigma = 0.2`. Let us fit some data to the curve

.. math::  y = e^{mx + c}.

We begin by defining a templated object to evaluate the
residual. There will be a residual for each observation.
.. code-block:: c++

   struct ExponentialResidual {
     ExponentialResidual(double x, double y)
         : x_(x), y_(y) {}

     template <typename T>
     bool operator()(const T* const m, const T* const c, T* residual) const {
       residual[0] = y_ - exp(m[0] * x_ + c[0]);
       return true;
     }

    private:
     // Observations for a sample.
     const double x_;
     const double y_;
   };
Assuming the observations are in a :math:`2n` sized array called
``data``, the problem construction is a simple matter of creating a
:class:`CostFunction` for every observation.

.. code-block:: c++

   double m = 0.0;
   double c = 0.0;

   Problem problem;
   for (int i = 0; i < kNumObservations; ++i) {
     CostFunction* cost_function =
         new AutoDiffCostFunction<ExponentialResidual, 1, 1, 1>(
             new ExponentialResidual(data[2 * i], data[2 * i + 1]));
     problem.AddResidualBlock(cost_function, nullptr, &m, &c);
   }
Compiling and running `examples/curve_fitting.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/curve_fitting.cc>`_
gives us:

.. code-block:: bash

   iter      cost      cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
      0  1.211734e+02    0.00e+00    3.61e+02   0.00e+00   0.00e+00  1.00e+04        0    5.34e-04    2.56e-03
      1  1.211734e+02   -2.21e+03    0.00e+00   7.52e-01  -1.87e+01  5.00e+03        1    4.29e-05    3.25e-03
      2  1.211734e+02   -2.21e+03    0.00e+00   7.51e-01  -1.86e+01  1.25e+03        1    1.10e-05    3.28e-03
      3  1.211734e+02   -2.19e+03    0.00e+00   7.48e-01  -1.85e+01  1.56e+02        1    1.41e-05    3.31e-03
      4  1.211734e+02   -2.02e+03    0.00e+00   7.22e-01  -1.70e+01  9.77e+00        1    1.00e-05    3.34e-03
      5  1.211734e+02   -7.34e+02    0.00e+00   5.78e-01  -6.32e+00  3.05e-01        1    1.00e-05    3.36e-03
      6  3.306595e+01    8.81e+01    4.10e+02   3.18e-01   1.37e+00  9.16e-01        1    2.79e-05    3.41e-03
      7  6.426770e+00    2.66e+01    1.81e+02   1.29e-01   1.10e+00  2.75e+00        1    2.10e-05    3.45e-03
      8  3.344546e+00    3.08e+00    5.51e+01   3.05e-02   1.03e+00  8.24e+00        1    2.10e-05    3.48e-03
      9  1.987485e+00    1.36e+00    2.33e+01   8.87e-02   9.94e-01  2.47e+01        1    2.10e-05    3.52e-03
     10  1.211585e+00    7.76e-01    8.22e+00   1.05e-01   9.89e-01  7.42e+01        1    2.10e-05    3.56e-03
     11  1.063265e+00    1.48e-01    1.44e+00   6.06e-02   9.97e-01  2.22e+02        1    2.60e-05    3.61e-03
     12  1.056795e+00    6.47e-03    1.18e-01   1.47e-02   1.00e+00  6.67e+02        1    2.10e-05    3.64e-03
     13  1.056751e+00    4.39e-05    3.79e-03   1.28e-03   1.00e+00  2.00e+03        1    2.10e-05    3.68e-03
   Ceres Solver Report: Iterations: 13, Initial cost: 1.211734e+02, Final cost: 1.056751e+00, Termination: CONVERGENCE
   Initial m: 0 c: 0
   Final   m: 0.291861 c: 0.131439
Starting from parameter values :math:`m = 0, c = 0` with an initial
objective function value of :math:`121.173`, Ceres finds a solution
:math:`m = 0.291861, c = 0.131439` with an objective function value of
:math:`1.05675`. These values are a bit different from the
parameters of the original model :math:`m = 0.3, c = 0.1`, but this is
expected. When reconstructing a curve from noisy data, we expect to
see such deviations. Indeed, if you were to evaluate the objective
function for :math:`m = 0.3, c = 0.1`, the fit is worse with an
objective function value of :math:`1.082425`. The figure below
illustrates the fit.

.. figure:: least_squares_fit.png
   :figwidth: 500px
   :height: 400px
   :align: center

   Least squares curve fitting.
.. rubric:: Footnotes

.. [#f6] `examples/curve_fitting.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/curve_fitting.cc>`_
Robust Curve Fitting
====================

Now suppose the data we are given has some outliers, i.e., we have
some points that do not obey the noise model. If we were to use the
code above to fit such data, we would get a fit that looks as
below. Notice how the fitted curve deviates from the ground truth.

.. figure:: non_robust_least_squares_fit.png
   :figwidth: 500px
   :height: 400px
   :align: center
To deal with outliers, a standard technique is to use a
:class:`LossFunction`. Loss functions reduce the influence of
residual blocks with high residuals, usually the ones corresponding to
outliers. To associate a loss function with a residual block, we change

.. code-block:: c++

   problem.AddResidualBlock(cost_function, nullptr, &m, &c);

to

.. code-block:: c++

   problem.AddResidualBlock(cost_function, new CauchyLoss(0.5), &m, &c);

:class:`CauchyLoss` is one of the loss functions that ships with Ceres
Solver. The argument :math:`0.5` specifies the scale of the loss
function. As a result, we get the fit below [#f7]_. Notice how the
fitted curve moves back closer to the ground truth curve.
.. figure:: robust_least_squares_fit.png
   :figwidth: 500px
   :height: 400px
   :align: center

   Using :class:`LossFunction` to reduce the effect of outliers on a
   least squares fit.
.. rubric:: Footnotes

.. [#f7] `examples/robust_curve_fitting.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/robust_curve_fitting.cc>`_
Bundle Adjustment
=================

One of the main reasons for writing Ceres was our need to solve large
scale bundle adjustment problems [HartleyZisserman]_, [Triggs]_.

Given a set of measured image feature locations and correspondences,
the goal of bundle adjustment is to find 3D point positions and camera
parameters that minimize the reprojection error. This optimization
problem is usually formulated as a non-linear least squares problem,
where the error is the squared :math:`L_2` norm of the difference between
the observed feature location and the projection of the corresponding
3D point on the image plane of the camera. Ceres has extensive support
for solving bundle adjustment problems.

Let us solve a problem from the `BAL
<http://grail.cs.washington.edu/projects/bal/>`_ dataset [#f8]_.

The first step as usual is to define a templated functor that computes
the reprojection error/residual. The structure of the functor is
similar to the ``ExponentialResidual``, in that there is an
instance of this object responsible for each image observation.

Each residual in a BAL problem depends on a three dimensional point
and a nine parameter camera. The nine parameters defining the camera
are: three for rotation as a Rodrigues' axis-angle vector, three
for translation, one for focal length and two for radial distortion.
The details of this camera model can be found on the `Bundler homepage
<http://phototour.cs.washington.edu/bundler/>`_ and the `BAL homepage
<http://grail.cs.washington.edu/projects/bal/>`_.
.. code-block:: c++

   struct SnavelyReprojectionError {
     SnavelyReprojectionError(double observed_x, double observed_y)
         : observed_x(observed_x), observed_y(observed_y) {}

     template <typename T>
     bool operator()(const T* const camera,
                     const T* const point,
                     T* residuals) const {
       // camera[0,1,2] are the angle-axis rotation.
       T p[3];
       ceres::AngleAxisRotatePoint(camera, point, p);
       // camera[3,4,5] are the translation.
       p[0] += camera[3]; p[1] += camera[4]; p[2] += camera[5];

       // Compute the center of distortion. The sign change comes from
       // the camera model that Noah Snavely's Bundler assumes, whereby
       // the camera coordinate system has a negative z axis.
       T xp = - p[0] / p[2];
       T yp = - p[1] / p[2];

       // Apply second and fourth order radial distortion.
       const T& l1 = camera[7];
       const T& l2 = camera[8];
       T r2 = xp*xp + yp*yp;
       T distortion = 1.0 + r2 * (l1 + l2 * r2);

       // Compute final projected point position.
       const T& focal = camera[6];
       T predicted_x = focal * distortion * xp;
       T predicted_y = focal * distortion * yp;

       // The error is the difference between the predicted and observed position.
       residuals[0] = predicted_x - T(observed_x);
       residuals[1] = predicted_y - T(observed_y);
       return true;
     }

     // Factory to hide the construction of the CostFunction object from
     // the client code.
     static ceres::CostFunction* Create(const double observed_x,
                                        const double observed_y) {
       return (new ceres::AutoDiffCostFunction<SnavelyReprojectionError, 2, 9, 3>(
           new SnavelyReprojectionError(observed_x, observed_y)));
     }

     double observed_x;
     double observed_y;
   };
Note that unlike the examples before, this is a non-trivial function
and computing its analytic Jacobian is a bit of a pain. Automatic
differentiation makes life much simpler. The function
:func:`AngleAxisRotatePoint` and other functions for manipulating
rotations can be found in ``include/ceres/rotation.h``.
Given this functor, the bundle adjustment problem can be constructed
as follows:

.. code-block:: c++

   ceres::Problem problem;
   for (int i = 0; i < bal_problem.num_observations(); ++i) {
     ceres::CostFunction* cost_function =
         SnavelyReprojectionError::Create(
             bal_problem.observations()[2 * i + 0],
             bal_problem.observations()[2 * i + 1]);
     problem.AddResidualBlock(cost_function,
                              nullptr /* squared loss */,
                              bal_problem.mutable_camera_for_observation(i),
                              bal_problem.mutable_point_for_observation(i));
   }
Notice that the problem construction for bundle adjustment is very
similar to the curve fitting example -- one term is added to the
objective function per observation.

Since this is a large sparse problem (well, large for ``DENSE_QR``
anyways), one way to solve this problem is to set
:member:`Solver::Options::linear_solver_type` to
``SPARSE_NORMAL_CHOLESKY`` and call :func:`Solve`. And while this is
a reasonable thing to do, bundle adjustment problems have a special
sparsity structure that can be exploited to solve them much more
efficiently. Ceres provides three specialized solvers (collectively
known as Schur-based solvers) for this task. The example code uses the
simplest of them, ``DENSE_SCHUR``.
.. code-block:: c++

  ceres::Solver::Options options;
  options.linear_solver_type = ceres::DENSE_SCHUR;
  options.minimizer_progress_to_stdout = true;

  ceres::Solver::Summary summary;
  ceres::Solve(options, &problem, &summary);
  std::cout << summary.FullReport() << "\n";
For a more sophisticated bundle adjustment example which demonstrates
the use of Ceres' more advanced features, including its various linear
solvers, robust loss functions, and manifolds, see
`examples/bundle_adjuster.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/bundle_adjuster.cc>`_.

.. rubric:: Footnotes

.. [#f8] `examples/simple_bundle_adjuster.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/simple_bundle_adjuster.cc>`_
Other Examples
==============

Besides the examples in this chapter, the `examples
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/>`_
directory contains a number of other examples:
#. `bundle_adjuster.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/bundle_adjuster.cc>`_
   shows how to use the various features of Ceres to solve bundle
   adjustment problems.

#. `circle_fit.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/circle_fit.cc>`_
   shows how to fit data to a circle.
#. `ellipse_approximation.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/ellipse_approximation.cc>`_
   fits points randomly distributed on an ellipse with an approximate
   line segment contour. This is done by jointly optimizing the
   control points of the line segment contour along with the preimage
   positions for the data points. The purpose of this example is to
   show an example use case for ``Solver::Options::dynamic_sparsity``,
   and how it can benefit problems which are numerically dense but
   dynamically sparse.
#. `denoising.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/denoising.cc>`_
   implements image denoising using the `Fields of Experts
   <http://www.gris.informatik.tu-darmstadt.de/~sroth/research/foe/index.html>`_
   model.
#. `nist.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/nist.cc>`_
   implements and attempts to solve the `NIST
   <http://www.itl.nist.gov/div898/strd/nls/nls_main.shtml>`_
   non-linear regression problems.
#. `more_garbow_hillstrom.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/more_garbow_hillstrom.cc>`_
   is a subset of the test problems from the paper

   Testing Unconstrained Optimization Software
   Jorge J. More, Burton S. Garbow and Kenneth E. Hillstrom
   ACM Transactions on Mathematical Software, 7(1), pp. 17-41, 1981

   which were augmented with bounds and used for testing bounds
   constrained optimization algorithms by

   A Trust Region Approach to Linearly Constrained Optimization
   David M. Gay
   Numerical Analysis (Griffiths, D.F., ed.), pp. 72-105
   Lecture Notes in Mathematics 1066, Springer Verlag, 1984.
#. `libmv_bundle_adjuster.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/libmv_bundle_adjuster.cc>`_
   is the bundle adjustment algorithm used by `Blender <https://www.blender.org>`_/libmv.
#. `libmv_homography.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/libmv_homography.cc>`_
   demonstrates solving for a homography between two sets of
   points and using a custom exit criterion by having a callback check
   for image-space error.
#. `robot_pose_mle.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/robot_pose_mle.cc>`_
   demonstrates how to use the ``DynamicAutoDiffCostFunction``
   variant of ``CostFunction``. The ``DynamicAutoDiffCostFunction`` is meant to
   be used in cases where the number of parameter blocks or their sizes are not
   known at compile time.

   This example simulates a robot traversing down a 1-dimensional hallway with
   noisy odometry readings and noisy range readings of the end of the hallway.
   By fusing the noisy odometry and sensor readings this example demonstrates
   how to compute the maximum likelihood estimate (MLE) of the robot's pose at
   each timestep.
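
   As a sketch of the shape such a functor takes (a hypothetical, simplified
   odometry residual, not the one in ``robot_pose_mle.cc``),
   ``DynamicAutoDiffCostFunction`` expects ``operator()`` to receive an array
   of pointers to parameter blocks:

   .. code-block:: c++

      // Hypothetical, simplified sketch: one residual comparing a measured
      // relative motion against the difference of two 1D pose blocks. Because
      // the number of parameter blocks is only known at run time, the functor
      // receives an array of pointers rather than fixed arguments.
      struct OdometryResidual {
        explicit OdometryResidual(double measured) : measured(measured) {}

        template <typename T>
        bool operator()(T const* const* parameters, T* residuals) const {
          // parameters[0] and parameters[1] each point to a 1D pose block.
          residuals[0] = T(measured) - (parameters[1][0] - parameters[0][0]);
          return true;
        }

        double measured;
      };

   With Ceres, such a functor would be wrapped in a
   ``ceres::DynamicAutoDiffCostFunction<OdometryResidual>``, with one
   ``AddParameterBlock()`` call per pose block and a ``SetNumResiduals()``
   call; see the example for the full construction.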
#. `slam/pose_graph_2d/pose_graph_2d.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/slam/pose_graph_2d/pose_graph_2d.cc>`_
   The Simultaneous Localization and Mapping (SLAM) problem consists of building
   a map of an unknown environment while simultaneously localizing against this
   map. The main difficulty of this problem stems from not having any additional
   external aiding information such as GPS. SLAM has been considered one of the
   fundamental challenges of robotics. There are many resources on SLAM
   [#f9]_. A pose graph optimization problem is one example of a SLAM
   problem. The following explains how to formulate the pose graph based SLAM
   problem in 2-Dimensions with relative pose constraints.

   Consider a robot moving in a 2-Dimensional plane. The robot has access to a
   set of sensors such as wheel odometry or a laser range scanner. From these
   raw measurements, we want to estimate the trajectory of the robot as well as
   build a map of the environment. In order to reduce the computational
   complexity of the problem, the pose graph approach abstracts the raw
   measurements away. Specifically, it creates a graph of nodes which represent
   the pose of the robot, and edges which represent the relative transformation
   (delta position and orientation) between the two nodes. The edges are virtual
   measurements derived from the raw sensor measurements, e.g. by integrating
   the raw wheel odometry or aligning the laser range scans acquired from the
   robot. A visualization of the resulting graph is shown below.
   .. figure:: slam2d.png
      :figwidth: 500px
      :height: 400px
      :align: center

      Visual representation of a graph SLAM problem.

   The figure depicts the pose of the robot as the triangles, the measurements
   are indicated by the connecting lines, and the loop closure measurements are
   shown as dotted lines. Loop closures are measurements between non-sequential
   robot states and they reduce the accumulation of error over time. The
   following will describe the mathematical formulation of the pose graph
   problem.
   The robot at timestamp :math:`t` has state :math:`x_t = [p^T, \psi]^T` where
   :math:`p` is a 2D vector that represents the position in the plane and
   :math:`\psi` is the orientation in radians. The measurement of the relative
   transform between the robot state at two timestamps :math:`a` and :math:`b`
   is given as: :math:`z_{ab} = [\hat{p}_{ab}^T, \hat{\psi}_{ab}]`. The residual
   implemented in the Ceres cost function which computes the error between the
   measurement and the predicted measurement is:

   .. math:: r_{ab} =
      \left[
      \begin{array}{c}
        R_a^T\left(p_b - p_a\right) - \hat{p}_{ab} \\
        \mathrm{Normalize}\left(\psi_b - \psi_a - \hat{\psi}_{ab}\right)
      \end{array}
      \right]

   where the function :math:`\mathrm{Normalize}()` normalizes the angle in the
   range :math:`[-\pi,\pi)`, and :math:`R` is the rotation matrix given by
   .. math:: R_a =
      \left[
      \begin{array}{cc}
        \cos \psi_a & -\sin \psi_a \\
        \sin \psi_a & \cos \psi_a \\
      \end{array}
      \right]
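
   In plain C++, the residual above could be computed as follows (a
   hypothetical sketch with plain doubles; the actual example templates the
   functor on ``T`` so that Ceres can differentiate it automatically):

   .. code-block:: c++

      #include <array>
      #include <cmath>

      // Hypothetical sketch of the 2D pose-graph residual using plain doubles.
      // Inputs: poses (x, y, psi) for states a and b, and the measured relative
      // transform (p_ab_x, p_ab_y, psi_ab).
      std::array<double, 3> PoseGraph2dResidual(
          double xa, double ya, double psi_a,
          double xb, double yb, double psi_b,
          double p_ab_x, double p_ab_y, double psi_ab) {
        const double c = std::cos(psi_a);
        const double s = std::sin(psi_a);
        const double dx = xb - xa;
        const double dy = yb - ya;
        std::array<double, 3> r;
        // R_a^T (p_b - p_a) - \hat{p}_{ab}
        r[0] =  c * dx + s * dy - p_ab_x;
        r[1] = -s * dx + c * dy - p_ab_y;
        // Normalize(psi_b - psi_a - \hat{psi}_{ab}) into [-pi, pi)
        const double kPi = 3.141592653589793;
        const double two_pi = 2.0 * kPi;
        const double a = psi_b - psi_a - psi_ab;
        r[2] = a - two_pi * std::floor((a + kPi) / two_pi);
        return r;
      }

   For a measurement that exactly agrees with the two poses, all three residual
   components are zero.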
   To finish the cost function, we need to weight the residual by the
   uncertainty of the measurement. Hence, we pre-multiply the residual by the
   inverse square root of the covariance matrix for the measurement,
   i.e. :math:`\Sigma_{ab}^{-\frac{1}{2}} r_{ab}` where :math:`\Sigma_{ab}` is
   the covariance.
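
   One way to sketch this weighting in plain C++ is via a Cholesky factor of
   the information matrix :math:`\Sigma_{ab}^{-1} = L L^T`, so that the
   weighted residual :math:`L^T r` satisfies
   :math:`(L^T r)^T(L^T r) = r^T \Sigma_{ab}^{-1} r` (a hypothetical sketch;
   in practice the example precomputes a square-root information matrix and
   folds it into the cost functor):

   .. code-block:: c++

      #include <array>
      #include <cmath>

      using Mat3 = std::array<std::array<double, 3>, 3>;
      using Vec3 = std::array<double, 3>;

      // Lower-triangular Cholesky factor L of a symmetric positive definite
      // matrix A = L L^T.
      Mat3 Cholesky(const Mat3& A) {
        Mat3 L = {};
        for (int i = 0; i < 3; ++i) {
          for (int j = 0; j <= i; ++j) {
            double sum = A[i][j];
            for (int k = 0; k < j; ++k) sum -= L[i][k] * L[j][k];
            L[i][j] = (i == j) ? std::sqrt(sum) : sum / L[j][j];
          }
        }
        return L;
      }

      // Weighted residual L^T r where information = Sigma^{-1} = L L^T.
      Vec3 WeightResidual(const Mat3& information, const Vec3& r) {
        const Mat3 L = Cholesky(information);
        Vec3 out = {};
        for (int i = 0; i < 3; ++i)
          for (int j = i; j < 3; ++j)  // L^T[i][j] == L[j][i]
            out[i] += L[j][i] * r[j];
        return out;
      }

   With an identity information matrix the residual is returned unchanged.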
   Lastly, we use a manifold to normalize the orientation in the range
   :math:`[-\pi,\pi)`. Specifically, we define the
   :member:`AngleManifold::Plus()` function to be
   :math:`\mathrm{Normalize}(\psi + \Delta)` and the
   :member:`AngleManifold::Minus()` function to be
   :math:`\mathrm{Normalize}(y) - \mathrm{Normalize}(x)`.
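
   A hypothetical sketch of these manifold operations with plain doubles (the
   actual ``AngleManifold`` in the example is templated so that it also works
   with Ceres' automatic differentiation types):

   .. code-block:: c++

      #include <cmath>

      // Hypothetical sketch of the angle manifold operations described above.
      // Wrap an angle into [-pi, pi).
      double NormalizeAngle(double a) {
        const double kPi = 3.141592653589793;
        const double two_pi = 2.0 * kPi;
        return a - two_pi * std::floor((a + kPi) / two_pi);
      }

      // Plus: apply a tangent-space increment, then re-normalize.
      double AnglePlus(double psi, double delta) {
        return NormalizeAngle(psi + delta);
      }

      // Minus: difference of normalized angles, as defined in the text.
      double AngleMinus(double y, double x) {
        return NormalizeAngle(y) - NormalizeAngle(x);
      }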
   This package includes an executable :member:`pose_graph_2d` that will read a
   problem definition file. This executable can work with any 2D problem
   definition that uses the g2o format. It would be relatively straightforward
   to implement a new reader for a different format such as TORO or
   others. :member:`pose_graph_2d` will print the Ceres solver full summary and
   then output to disk the original and optimized poses (``poses_original.txt``
   and ``poses_optimized.txt``, respectively) of the robot in the following
   format:

   .. code-block:: bash

      pose_id x y yaw_radians
      pose_id x y yaw_radians
      pose_id x y yaw_radians

   where ``pose_id`` is the corresponding integer ID from the file
   definition. Note, the file will be sorted in ascending order for the
   ``pose_id``.
   The executable :member:`pose_graph_2d` expects the first argument to be
   the path to the problem definition. To run the executable,

   .. code-block:: bash

      /path/to/bin/pose_graph_2d /path/to/dataset/dataset.g2o

   A Python script is provided to visualize the resulting output files.

   .. code-block:: bash

      /path/to/repo/examples/slam/pose_graph_2d/plot_results.py --optimized_poses ./poses_optimized.txt --initial_poses ./poses_original.txt
   As an example, a standard synthetic benchmark dataset [#f10]_ created by
   Edwin Olson which has 3500 nodes in a grid world with a total of 5598 edges
   was solved. Visualizing the results with the provided script produces:

   .. figure:: manhattan_olson_3500_result.png
      :figwidth: 600px
      :height: 600px
      :align: center

   with the original poses in green and the optimized poses in blue. As shown,
   the optimized poses more closely match the underlying grid world. Note, the
   left side of the graph has a small yaw drift due to a lack of relative
   constraints to provide enough information to reconstruct the trajectory.
   .. rubric:: Footnotes

   .. [#f9] Giorgio Grisetti, Rainer Kummerle, Cyrill Stachniss, Wolfram
      Burgard. A Tutorial on Graph-Based SLAM. IEEE Intelligent Transportation
      Systems Magazine, 52(3):199-222, 2010.

   .. [#f10] E. Olson, J. Leonard, and S. Teller, “Fast iterative optimization of
      pose graphs with poor initial estimates,” in Robotics and Automation
      (ICRA), IEEE International Conference on, 2006, pp. 2262-2269.
#. `slam/pose_graph_3d/pose_graph_3d.cc
   <https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/slam/pose_graph_3d/pose_graph_3d.cc>`_
   The following explains how to formulate the pose graph based SLAM problem in
   3-Dimensions with relative pose constraints. The example also illustrates how
   to use Eigen's geometry module with Ceres's automatic differentiation
   functionality.

   The robot at timestamp :math:`t` has state :math:`x_t = [p^T, q^T]^T` where
   :math:`p` is a 3D vector that represents the position and :math:`q` is the
   orientation represented as an Eigen quaternion. The measurement of the
   relative transform between the robot state at two timestamps :math:`a` and
   :math:`b` is given as: :math:`z_{ab} = [\hat{p}_{ab}^T, \hat{q}_{ab}^T]^T`.
   The residual implemented in the Ceres cost function which computes the error
   between the measurement and the predicted measurement is:
   .. math:: r_{ab} =
      \left[
      \begin{array}{c}
        R(q_a)^{T} (p_b - p_a) - \hat{p}_{ab} \\
        2.0 \, \mathrm{vec}\left((q_a^{-1} q_b) \hat{q}_{ab}^{-1}\right)
      \end{array}
      \right]

   where the function :math:`\mathrm{vec}()` returns the vector part of the
   quaternion, i.e. :math:`[q_x, q_y, q_z]`, and :math:`R(q)` is the rotation
   matrix for the quaternion.
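
   The residual above can be sketched in plain C++ with a minimal quaternion
   type (a hypothetical sketch; the actual example uses ``Eigen::Quaternion``
   and templates on ``T`` for automatic differentiation). Quaternions here are
   stored as ``[w, x, y, z]``:

   .. code-block:: c++

      #include <array>

      using Quat = std::array<double, 4>;  // [w, x, y, z]
      using Vec3 = std::array<double, 3>;

      // Hamilton product of two quaternions.
      Quat QuatMul(const Quat& a, const Quat& b) {
        return {a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],
                a[0]*b[1] + a[1]*b[0] + a[2]*b[3] - a[3]*b[2],
                a[0]*b[2] - a[1]*b[3] + a[2]*b[0] + a[3]*b[1],
                a[0]*b[3] + a[1]*b[2] - a[2]*b[1] + a[3]*b[0]};
      }

      // For a unit quaternion the conjugate equals the inverse.
      Quat QuatConj(const Quat& q) { return {q[0], -q[1], -q[2], -q[3]}; }

      // R(q)^T v, i.e. rotate v by the inverse of the unit quaternion q.
      Vec3 RotateByInverse(const Quat& q, const Vec3& v) {
        const Quat p = {0.0, v[0], v[1], v[2]};
        const Quat r = QuatMul(QuatMul(QuatConj(q), p), q);
        return {r[1], r[2], r[3]};
      }

      // r_ab = [ R(q_a)^T (p_b - p_a) - p_ab_hat ;
      //          2 * vec((q_a^{-1} q_b) q_ab_hat^{-1}) ]
      std::array<double, 6> PoseGraph3dResidual(
          const Vec3& pa, const Quat& qa, const Vec3& pb, const Quat& qb,
          const Vec3& p_ab, const Quat& q_ab) {
        const Vec3 d = {pb[0] - pa[0], pb[1] - pa[1], pb[2] - pa[2]};
        const Vec3 t = RotateByInverse(qa, d);
        const Quat dq = QuatMul(QuatMul(QuatConj(qa), qb), QuatConj(q_ab));
        return {t[0] - p_ab[0], t[1] - p_ab[1], t[2] - p_ab[2],
                2.0 * dq[1], 2.0 * dq[2], 2.0 * dq[3]};
      }

   For a measurement that exactly agrees with the two poses, all six residual
   components are zero.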
   To finish the cost function, we need to weight the residual by the
   uncertainty of the measurement. Hence, we pre-multiply the residual by the
   inverse square root of the covariance matrix for the measurement,
   i.e. :math:`\Sigma_{ab}^{-\frac{1}{2}} r_{ab}` where :math:`\Sigma_{ab}` is
   the covariance.
   Given that we are using a quaternion to represent the orientation,
   we need to use a manifold (:class:`EigenQuaternionManifold`) to
   only apply updates orthogonal to the 4-vector defining the
   quaternion. Eigen's quaternion uses a different internal memory
   layout for the elements of the quaternion than what is commonly
   used. Specifically, Eigen stores the elements in memory as
   :math:`[x, y, z, w]` where the real part is last, whereas it is
   typically stored first. Note, when creating an Eigen quaternion
   through the constructor the elements are accepted in :math:`w`,
   :math:`x`, :math:`y`, :math:`z` order. Since Ceres operates on
   parameter blocks which are raw double pointers this difference is
   important and requires a different parameterization.
   This package includes an executable :member:`pose_graph_3d` that will read a
   problem definition file. This executable can work with any 3D problem
   definition that uses the g2o format with quaternions used for the orientation
   representation. It would be relatively straightforward to implement a new
   reader for a different format such as TORO or others. :member:`pose_graph_3d`
   will print the Ceres solver full summary and then output to disk the original
   and optimized poses (``poses_original.txt`` and ``poses_optimized.txt``,
   respectively) of the robot in the following format:

   .. code-block:: bash

      pose_id x y z q_x q_y q_z q_w
      pose_id x y z q_x q_y q_z q_w
      pose_id x y z q_x q_y q_z q_w
      ...

   where ``pose_id`` is the corresponding integer ID from the file
   definition. Note, the file will be sorted in ascending order for the
   ``pose_id``.
   The executable :member:`pose_graph_3d` expects the first argument to be the
   path to the problem definition. The executable can be run via

   .. code-block:: bash

      /path/to/bin/pose_graph_3d /path/to/dataset/dataset.g2o

   A script is provided to visualize the resulting output files. There is also
   an option to enable equal axes using ``--axes_equal``.

   .. code-block:: bash

      /path/to/repo/examples/slam/pose_graph_3d/plot_results.py --optimized_poses ./poses_optimized.txt --initial_poses ./poses_original.txt
   As an example, a standard synthetic benchmark dataset [#f9]_ where the robot is
   traveling on the surface of a sphere which has 2500 nodes with a total of
   4949 edges was solved. Visualizing the results with the provided script
   produces:

   .. figure:: pose_graph_3d_ex.png
      :figwidth: 600px
      :height: 300px
      :align: center