namespace Eigen {

/** \eigenManualPage TutorialLinearAlgebra Linear algebra and decompositions

This page explains how to solve linear systems, compute various decompositions such as LU,
QR, %SVD, eigendecompositions... After reading this page, don't miss our
\link TopicLinearAlgebraDecompositions catalogue \endlink of dense matrix decompositions.

\eigenAutoToc

\section TutorialLinAlgBasicSolve Basic linear solving

\b The \b problem: You have a system of equations that you have written as a single matrix equation
\f[ Ax \: = \: b \f]
where \a A and \a b are matrices (\a b could be a vector, as a special case). You want to find a solution \a x.

\b The \b solution: You can choose between various decompositions, depending on the properties of your matrix \a A,
and depending on whether you favor speed or accuracy. However, let's start with an example that works in all cases,
and is a good compromise:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
<td>\include TutorialLinAlgExSolveColPivHouseholderQR.cpp </td>
<td>\verbinclude TutorialLinAlgExSolveColPivHouseholderQR.out </td>
</tr>
</table>

In this example, the colPivHouseholderQr() method returns an object of class ColPivHouseholderQR. Since here the
matrix is of type Matrix3f, this line could have been replaced by:
\code
ColPivHouseholderQR<Matrix3f> dec(A);
Vector3f x = dec.solve(b);
\endcode

Here, ColPivHouseholderQR is a QR decomposition with column pivoting. It's a good compromise for this tutorial, as it
works for all matrices while being quite fast. Here is a table of some other decompositions that you can choose from,
depending on your matrix, the problem you are trying to solve, and the trade-off you want to make:
<table class="manual">
<tr>
<th>Decomposition</th>
<th>Method</th>
<th>Requirements<br/>on the matrix</th>
<th>Speed<br/> (small-to-medium)</th>
<th>Speed<br/> (large)</th>
<th>Accuracy</th>
</tr>
<tr>
<td>PartialPivLU</td>
<td>partialPivLu()</td>
<td>Invertible</td>
<td>++</td>
<td>++</td>
<td>+</td>
</tr>
<tr class="alt">
<td>FullPivLU</td>
<td>fullPivLu()</td>
<td>None</td>
<td>-</td>
<td>- -</td>
<td>+++</td>
</tr>
<tr>
<td>HouseholderQR</td>
<td>householderQr()</td>
<td>None</td>
<td>++</td>
<td>++</td>
<td>+</td>
</tr>
<tr class="alt">
<td>ColPivHouseholderQR</td>
<td>colPivHouseholderQr()</td>
<td>None</td>
<td>+</td>
<td>-</td>
<td>+++</td>
</tr>
<tr>
<td>FullPivHouseholderQR</td>
<td>fullPivHouseholderQr()</td>
<td>None</td>
<td>-</td>
<td>- -</td>
<td>+++</td>
</tr>
<tr class="alt">
<td>CompleteOrthogonalDecomposition</td>
<td>completeOrthogonalDecomposition()</td>
<td>None</td>
<td>+</td>
<td>-</td>
<td>+++</td>
</tr>
<tr>
<td>LLT</td>
<td>llt()</td>
<td>Positive definite</td>
<td>+++</td>
<td>+++</td>
<td>+</td>
</tr>
<tr class="alt">
<td>LDLT</td>
<td>ldlt()</td>
<td>Positive or negative<br/> semidefinite</td>
<td>+++</td>
<td>+</td>
<td>++</td>
</tr>
<tr>
<td>BDCSVD</td>
<td>bdcSvd()</td>
<td>None</td>
<td>-</td>
<td>-</td>
<td>+++</td>
</tr>
<tr class="alt">
<td>JacobiSVD</td>
<td>jacobiSvd()</td>
<td>None</td>
<td>-</td>
<td>- - -</td>
<td>+++</td>
</tr>
</table>
To get an overview of the true relative speed of the different decompositions, check this \link DenseDecompositionBenchmark benchmark \endlink.

All of these decompositions offer a solve() method that works as in the above example.

If you know more about the properties of your matrix, you can use the above table to select the best method.
For example, a good choice for solving linear systems with a non-symmetric matrix of full rank is PartialPivLU.
If you know that your matrix is also symmetric and positive definite, the above table says that
a very good choice is the LLT or LDLT decomposition. Here's an example, also demonstrating that using a general
matrix (not a vector) as right hand side is possible:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
<td>\include TutorialLinAlgExSolveLDLT.cpp </td>
<td>\verbinclude TutorialLinAlgExSolveLDLT.out </td>
</tr>
</table>
For a \ref TopicLinearAlgebraDecompositions "much more complete table" comparing all the decompositions
supported by Eigen, see our special page on this topic.

\section TutorialLinAlgLeastsquares Least squares solving

The most general and accurate way to solve under- or over-determined linear systems
in the least squares sense is the SVD decomposition. Eigen provides two implementations.
The recommended one is the BDCSVD class, which scales well for large problems
and automatically falls back to the JacobiSVD class for smaller problems.
For both classes, the solve() method solves the linear system in the least-squares
sense.
Here is an example:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
<td>\include TutorialLinAlgSVDSolve.cpp </td>
<td>\verbinclude TutorialLinAlgSVDSolve.out </td>
</tr>
</table>

An alternative to the SVD, which is usually faster and about as accurate, is CompleteOrthogonalDecomposition.

Again, if you know more about the problem, the table above contains methods that are potentially faster.
If your matrix is full rank, HouseholderQR is the method of choice. If your matrix is full rank and well conditioned,
using the Cholesky decomposition (LLT) on the matrix of the normal equations can be faster still.
Our page on \link LeastSquares least squares solving \endlink has more details.
\section TutorialLinAlgSolutionExists Checking if a matrix is singular

Only you know what error margin you want to allow for a solution to be considered valid.
So Eigen lets you compute this error yourself, if you want to, as in this example:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
<td>\include TutorialLinAlgExComputeSolveError.cpp </td>
<td>\verbinclude TutorialLinAlgExComputeSolveError.out </td>
</tr>
</table>
\section TutorialLinAlgEigensolving Computing eigenvalues and eigenvectors

You need an eigendecomposition here; see the available decompositions on \ref TopicLinearAlgebraDecompositions "this page".
Make sure to check if your matrix is self-adjoint, as is often the case in these problems. Here's an example using
SelfAdjointEigenSolver; it could easily be adapted to general matrices using EigenSolver or ComplexEigenSolver.

The computation of eigenvalues and eigenvectors does not necessarily converge, but such failure to converge is
very rare. The call to info() is there to check for this possibility.
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
<td>\include TutorialLinAlgSelfAdjointEigenSolver.cpp </td>
<td>\verbinclude TutorialLinAlgSelfAdjointEigenSolver.out </td>
</tr>
</table>
\section TutorialLinAlgInverse Computing inverse and determinant

First of all, make sure that you really want this. While inverse and determinant are fundamental mathematical concepts,
in \em numerical linear algebra they are not as useful as in pure mathematics. Inverse computations are often
advantageously replaced by solve() operations, and the determinant is often \em not a good way of checking if a matrix
is invertible.

However, for \em very \em small matrices, the above may not be true, and inverse and determinant can be very useful.

While certain decompositions, such as PartialPivLU and FullPivLU, offer inverse() and determinant() methods, you can also
call inverse() and determinant() directly on a matrix. If your matrix is of a very small fixed size (at most 4x4) this
allows Eigen to avoid performing an LU decomposition, and instead use formulas that are more efficient on such small matrices.

Here is an example:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
<td>\include TutorialLinAlgInverseDeterminant.cpp </td>
<td>\verbinclude TutorialLinAlgInverseDeterminant.out </td>
</tr>
</table>
\section TutorialLinAlgSeparateComputation Separating the computation from the construction

In the above examples, the decomposition was computed at the same time that the decomposition object was constructed.
There are however situations where you might want to separate these two things, for example if you don't know,
at the time of the construction, the matrix that you will want to decompose; or if you want to reuse an existing
decomposition object.

What makes this possible is that:
\li all decompositions have a default constructor,
\li all decompositions have a compute(matrix) method that does the computation, and that may be called again
    on an already-computed decomposition, reinitializing it.

For example:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
<td>\include TutorialLinAlgComputeTwice.cpp </td>
<td>\verbinclude TutorialLinAlgComputeTwice.out </td>
</tr>
</table>
Finally, you can tell the decomposition constructor to preallocate storage for decomposing matrices of a given size,
so that when you subsequently decompose such matrices, no dynamic memory allocation is performed (of course, if you
are using fixed-size matrices, no dynamic memory allocation happens at all). This is done by just
passing the size to the decomposition constructor, as in this example:
\code
HouseholderQR<MatrixXf> qr(50,50);
MatrixXf A = MatrixXf::Random(50,50);
qr.compute(A); // no dynamic memory allocation
\endcode
\section TutorialLinAlgRankRevealing Rank-revealing decompositions

Certain decompositions are rank-revealing, i.e. are able to compute the rank of a matrix. These are typically
also the decompositions that behave best in the face of a non-full-rank matrix (which in the square case means a
singular matrix). On \ref TopicLinearAlgebraDecompositions "this table" you can see, for all our decompositions,
whether they are rank-revealing or not.

Rank-revealing decompositions offer at least a rank() method. They can also offer convenience methods such as isInvertible(),
and some also provide methods to compute the kernel (null-space) and image (column-space) of the matrix, as is the
case with FullPivLU:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
<td>\include TutorialLinAlgRankRevealing.cpp </td>
<td>\verbinclude TutorialLinAlgRankRevealing.out </td>
</tr>
</table>
Of course, any rank computation depends on the choice of an arbitrary threshold, since practically no
floating-point matrix is \em exactly rank-deficient. Eigen picks a sensible default threshold, which depends
on the decomposition but is typically the diagonal size times machine epsilon. While this is the best default we
could pick, only you know what is the right threshold for your application. You can set this by calling setThreshold()
on your decomposition object before calling rank() or any other method that needs to use such a threshold.
The decomposition itself, i.e. the compute() method, is independent of the threshold. You don't need to recompute the
decomposition after you've changed the threshold.
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
<td>\include TutorialLinAlgSetThreshold.cpp </td>
<td>\verbinclude TutorialLinAlgSetThreshold.out </td>
</tr>
</table>
*/

}