.. highlight:: c++

.. default-domain:: cpp

.. cpp:namespace:: ceres

.. _chapter-nnls_covariance:

=====================
Covariance Estimation
=====================

Introduction
============

One way to assess the quality of the solution returned by a non-linear
least squares solver is to analyze the covariance of the solution.

Let us consider the non-linear regression problem

.. math:: y = f(x) + N(0, I)

i.e., the observation :math:`y` is a random non-linear function of the
independent variable :math:`x` with mean :math:`f(x)` and identity
covariance. Then the maximum likelihood estimate of :math:`x` given
observations :math:`y` is the solution to the non-linear least squares
problem:

.. math:: x^* = \arg \min_x \|f(x) - y\|^2

And the covariance of :math:`x^*` is given by

.. math:: C(x^*) = \left(J'(x^*)J(x^*)\right)^{-1}

Here :math:`J(x^*)` is the Jacobian of :math:`f` at :math:`x^*`. The
above formula assumes that :math:`J(x^*)` has full column rank.

If :math:`J(x^*)` is rank deficient, then the covariance matrix
:math:`C(x^*)` is also rank deficient and is given by the Moore-Penrose
pseudo inverse:

.. math:: C(x^*) = \left(J'(x^*)J(x^*)\right)^{\dagger}

Note that in the above, we assumed that the covariance matrix for
:math:`y` was identity. This is an important assumption. If this is
not the case and we have

.. math:: y = f(x) + N(0, S)

where :math:`S` is a positive semi-definite matrix denoting the
covariance of :math:`y`, then the maximum likelihood problem to be
solved is

.. math:: x^* = \arg \min_x f'(x) S^{-1} f(x)

and the corresponding covariance estimate of :math:`x^*` is given by

.. math:: C(x^*) = \left(J'(x^*) S^{-1} J(x^*)\right)^{-1}

So, if the observations being fitted have a covariance matrix not
equal to identity, then it is the user's responsibility to scale the
corresponding cost functions correctly, e.g. in the above case the
cost function for this problem should evaluate :math:`S^{-1/2} f(x)`
instead of just :math:`f(x)`, where :math:`S^{-1/2}` is the inverse
square root of the covariance matrix :math:`S`.

Gauge Invariance
================

In structure from motion (3D reconstruction) problems, the
reconstruction is ambiguous up to a similarity transform. This is
known as a *Gauge Ambiguity*. Handling Gauges correctly requires the
use of SVD or custom inversion algorithms. For small problems the
user can use the dense algorithm. For more details see the work of
Kanatani & Morris [KanataniMorris]_.

:class:`Covariance`
===================

:class:`Covariance` allows the user to evaluate the covariance for a
non-linear least squares problem and provides random access to its
blocks. The computation assumes that the cost functions compute
residuals such that their covariance is identity.

Since the computation of the covariance matrix requires computing the
inverse of a potentially large matrix, this can involve a rather large
amount of time and memory. However, it is usually the case that the
user is only interested in a small part of the covariance matrix --
quite often just the block diagonal. :class:`Covariance` allows the
user to specify the parts of the covariance matrix that she is
interested in and then uses this information to only compute and
store those parts of the covariance matrix.

Rank of the Jacobian
====================

As we noted above, if the Jacobian is rank deficient, then the inverse
of :math:`J'J` is not defined and instead a pseudo inverse needs to be
computed.

The rank deficiency in :math:`J` can be *structural* -- columns
which are always known to be zero -- or *numerical* -- depending on
the exact values in the Jacobian.

Structural rank deficiency occurs when the problem contains parameter
blocks that are constant. This class correctly handles structural rank
deficiency like that.

Numerical rank deficiency, where the rank of the matrix cannot be
predicted by its sparsity structure and requires looking at its
numerical values, is more complicated. Here again there are two
cases.

a. The rank deficiency arises from overparameterization, e.g., a
   four dimensional quaternion used to parameterize :math:`SO(3)`,
   which is a three dimensional manifold. In cases like this, the
   user should use an appropriate :class:`Manifold`. Not only will
   this lead to better numerical behaviour of the Solver, it will
   also expose the rank deficiency to the :class:`Covariance` object
   so that it can handle it correctly.

b. More general numerical rank deficiency in the Jacobian requires
   the computation of the so called Singular Value Decomposition
   (SVD) of :math:`J'J`. We do not know how to do this for large
   sparse matrices efficiently. For small and moderate sized
   problems this is done using dense linear algebra.

:class:`Covariance::Options`
============================

.. class:: Covariance::Options

.. member:: int Covariance::Options::num_threads

   Default: ``1``

   Number of threads to be used for evaluating the Jacobian and
   estimation of covariance.

.. member:: SparseLinearAlgebraLibraryType Covariance::Options::sparse_linear_algebra_library_type

   Default: ``SUITE_SPARSE`` if Ceres Solver is built with support for
   `SuiteSparse <http://faculty.cse.tamu.edu/davis/suitesparse.html>`_
   and ``EIGEN_SPARSE`` otherwise. Note that ``EIGEN_SPARSE`` is
   always available.

.. member:: CovarianceAlgorithmType Covariance::Options::algorithm_type

   Default: ``SPARSE_QR``

   Ceres supports two different algorithms for covariance estimation,
   which represent different tradeoffs in speed, accuracy and
   reliability.

   1. ``SPARSE_QR`` uses the sparse QR factorization algorithm to
      compute the decomposition

      .. math::

         QR &= J\\
         \left(J^\top J\right)^{-1} &= \left(R^\top R\right)^{-1}

      The speed of this algorithm depends on the sparse linear algebra
      library being used. ``Eigen``'s sparse QR factorization is a
      moderately fast algorithm suitable for small to medium sized
      matrices. For best performance we recommend using
      ``SuiteSparseQR`` which is enabled by setting
      :member:`Covariance::Options::sparse_linear_algebra_library_type`
      to ``SUITE_SPARSE``.

      ``SPARSE_QR`` cannot compute the covariance if the Jacobian is
      rank deficient.

   2. ``DENSE_SVD`` uses ``Eigen``'s ``JacobiSVD`` to perform the
      computations. It computes the singular value decomposition

      .. math:: U D V^\top = J

      and then uses it to compute the pseudo inverse of :math:`J'J` as

      .. math:: (J'J)^{\dagger} = V D^{2\dagger} V^\top

      It is an accurate but slow method and should only be used for
      small to moderate sized problems. It can handle full-rank as
      well as rank deficient Jacobians.

.. member:: double Covariance::Options::column_pivot_threshold

   Default: :math:`-1`

   During QR factorization, if a column with Euclidean norm less than
   ``column_pivot_threshold`` is encountered it is treated as zero.

   If ``column_pivot_threshold < 0``, then an automatic default value
   of ``20 * (m + n) * eps * sqrt(max(diag(J'*J)))`` is used. Here
   ``m`` and ``n`` are the number of rows and columns of the Jacobian
   (:math:`J`) respectively.

   This is an advanced option meant for users who know enough about
   their Jacobian matrices that they can determine a value better
   than the default.

.. member:: double Covariance::Options::min_reciprocal_condition_number

   Default: :math:`10^{-14}`

   If the Jacobian matrix is near singular, then inverting
   :math:`J'J` will result in unreliable results, e.g., if

   .. math::

      J = \begin{bmatrix}
          1.0 & 1.0 \\
          1.0 & 1.0000001
          \end{bmatrix}

   which is essentially a rank deficient matrix, we have

   .. math::

      (J'J)^{-1} = \begin{bmatrix}
                   2.0471e+14 & -2.0471e+14 \\
                   -2.0471e+14 & 2.0471e+14
                   \end{bmatrix}

   This is not a useful result. Therefore, by default
   :func:`Covariance::Compute` will return ``false`` if a rank
   deficient Jacobian is encountered. How rank deficiency is detected
   depends on the algorithm being used.

   1. ``DENSE_SVD``

      .. math:: \frac{\sigma_{\text{min}}}{\sigma_{\text{max}}} < \sqrt{\textrm{min\_reciprocal\_condition\_number}}

      where :math:`\sigma_{\text{min}}` and
      :math:`\sigma_{\text{max}}` are the minimum and maximum
      singular values of :math:`J` respectively.

   2. ``SPARSE_QR``

      .. math:: \operatorname{rank}(J) < \operatorname{num\_col}(J)

      Here :math:`\operatorname{rank}(J)` is the estimate of the rank
      of :math:`J` returned by the sparse QR factorization
      algorithm. It is a fairly reliable indication of rank
      deficiency.

.. member:: int Covariance::Options::null_space_rank

   When using ``DENSE_SVD``, the user has more control in dealing
   with singular and near singular covariance matrices.

   As mentioned above, when the covariance matrix is near singular,
   instead of computing the inverse of :math:`J'J`, the Moore-Penrose
   pseudo inverse of :math:`J'J` should be computed.

   If :math:`J'J` has the eigen decomposition :math:`(\lambda_i,
   e_i)`, where :math:`\lambda_i` is the :math:`i^\textrm{th}`
   eigenvalue and :math:`e_i` is the corresponding eigenvector, then
   the inverse of :math:`J'J` is

   .. math:: (J'J)^{-1} = \sum_i \frac{1}{\lambda_i} e_i e_i'

   and computing the pseudo inverse involves dropping terms from this
   sum that correspond to small eigenvalues.

   How terms are dropped is controlled by
   ``min_reciprocal_condition_number`` and ``null_space_rank``.

   If ``null_space_rank`` is non-negative, then the smallest
   ``null_space_rank`` eigenvalue/eigenvectors are dropped
   irrespective of the magnitude of :math:`\lambda_i`. If the ratio
   of the smallest non-zero eigenvalue to the largest eigenvalue in
   the truncated matrix is still below
   ``min_reciprocal_condition_number``, then ``Covariance::Compute()``
   will fail and return ``false``.

   Setting ``null_space_rank = -1`` drops all terms for which

   .. math:: \frac{\lambda_i}{\lambda_{\textrm{max}}} < \textrm{min\_reciprocal\_condition\_number}

   This option has no effect on ``SPARSE_QR``.

.. member:: bool Covariance::Options::apply_loss_function

   Default: ``true``

   Even though the residual blocks in the problem may contain loss
   functions, setting ``apply_loss_function`` to ``false`` will turn
   off the application of the loss function to the output of the cost
   function and in turn its effect on the covariance.

.. class:: Covariance

   :class:`Covariance::Options` as the name implies is used to control
   the covariance estimation algorithm. Covariance estimation is a
   complicated and numerically sensitive procedure. Please read the
   entire documentation for :class:`Covariance::Options` before using
   :class:`Covariance`.

.. function:: bool Covariance::Compute(const std::vector<std::pair<const double*, const double*> >& covariance_blocks, Problem* problem)

   Compute a part of the covariance matrix.

   The vector ``covariance_blocks`` indexes into the covariance
   matrix block-wise using pairs of parameter blocks. This allows the
   covariance estimation algorithm to only compute and store these
   blocks.

   Since the covariance matrix is symmetric, if the user passes
   ``<block1, block2>``, then ``GetCovarianceBlock`` can be called with
   ``block1``, ``block2`` as well as ``block2``, ``block1``.

   ``covariance_blocks`` cannot contain duplicates. Bad things will
   happen if they do.

   Note that the list of ``covariance_blocks`` is only used to
   determine what parts of the covariance matrix are computed. The
   full Jacobian is used to do the computation, i.e. they do not have
   an impact on what part of the Jacobian is used for computation.

   The return value indicates the success or failure of the covariance
   computation. Please see the documentation for
   :class:`Covariance::Options` for more on the conditions under which
   this function returns ``false``.

.. function:: bool Covariance::GetCovarianceBlock(const double* parameter_block1, const double* parameter_block2, double* covariance_block) const

   Return the block of the cross-covariance matrix corresponding to
   ``parameter_block1`` and ``parameter_block2``.

   ``Compute`` must be called before the first call to
   ``GetCovarianceBlock``, and the pair ``<parameter_block1,
   parameter_block2>`` OR the pair ``<parameter_block2,
   parameter_block1>`` must have been present in the vector
   ``covariance_blocks`` when ``Compute`` was called. Otherwise
   ``GetCovarianceBlock`` will return ``false``.

   ``covariance_block`` must point to a memory location that can store
   a ``parameter_block1_size x parameter_block2_size`` matrix. The
   returned covariance will be a row-major matrix.

.. function:: bool Covariance::GetCovarianceBlockInTangentSpace(const double* parameter_block1, const double* parameter_block2, double* covariance_block) const

   Return the block of the cross-covariance matrix corresponding to
   ``parameter_block1`` and ``parameter_block2``.

   Returns the cross-covariance in the tangent space if a
   :class:`Manifold` is associated with either parameter block;
   else returns the cross-covariance in the ambient space.

   ``Compute`` must be called before the first call to
   ``GetCovarianceBlockInTangentSpace``, and the pair
   ``<parameter_block1, parameter_block2>`` OR the pair
   ``<parameter_block2, parameter_block1>`` must have been present in
   the vector ``covariance_blocks`` when ``Compute`` was called.
   Otherwise ``GetCovarianceBlockInTangentSpace`` will return
   ``false``.

   ``covariance_block`` must point to a memory location that can store
   a ``parameter_block1_local_size x parameter_block2_local_size``
   matrix. The returned covariance will be a row-major matrix.

Example Usage
=============

.. code-block:: c++

   double x[3];
   double y[2];

   Problem problem;
   problem.AddParameterBlock(x, 3);
   problem.AddParameterBlock(y, 2);
   <Build Problem>
   <Solve Problem>

   Covariance::Options options;
   Covariance covariance(options);

   std::vector<std::pair<const double*, const double*> > covariance_blocks;
   covariance_blocks.push_back(std::make_pair(x, x));
   covariance_blocks.push_back(std::make_pair(y, y));
   covariance_blocks.push_back(std::make_pair(x, y));

   CHECK(covariance.Compute(covariance_blocks, &problem));

   double covariance_xx[3 * 3];
   double covariance_yy[2 * 2];
   double covariance_xy[3 * 2];
   covariance.GetCovarianceBlock(x, x, covariance_xx);
   covariance.GetCovarianceBlock(y, y, covariance_yy);
   covariance.GetCovarianceBlock(x, y, covariance_xy);