  1. """Trust Region Reflective algorithm for least-squares optimization.
  2. The algorithm is based on ideas from paper [STIR]_. The main idea is to
  3. account for the presence of the bounds by appropriate scaling of the variables (or,
  4. equivalently, changing a trust-region shape). Let's introduce a vector v:
  5. | ub[i] - x[i], if g[i] < 0 and ub[i] < np.inf
  6. v[i] = | x[i] - lb[i], if g[i] > 0 and lb[i] > -np.inf
  7. | 1, otherwise
  8. where g is the gradient of a cost function and lb, ub are the bounds. Its
  9. components are distances to the bounds at which the anti-gradient points (if
  10. this distance is finite). Define a scaling matrix D = diag(v**0.5).
  11. First-order optimality conditions can be stated as
  12. D^2 g(x) = 0.
  13. Meaning that components of the gradient should be zero for strictly interior
  14. variables, and components must point inside the feasible region for variables
  15. on the bound.
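
For example, with lb = [0, -inf], ub = [inf, inf], x = [0.5, 2] and
g = [1, -1]: g[0] > 0 with a finite lower bound gives v[0] = x[0] - lb[0] = 0.5,
while g[1] < 0 with an infinite upper bound falls into the "otherwise" case,
so v[1] = 1 and D = diag([0.5**0.5, 1]).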

Now consider this system of equations as a new optimization problem. If the
point x is strictly interior (not on a bound), then the left-hand side is
differentiable and the Newton step for it satisfies

    (D^2 H + diag(g) Jv) p = -D^2 g

where H is the Hessian matrix (or its J^T J approximation in least squares)
and Jv is the Jacobian matrix of v with components -1, 1 or 0, such that all
elements of the matrix C = diag(g) Jv are non-negative. Introduce the change
of variables x = D x_h (_h would be "hat" in LaTeX). In the new variables,
we have a Newton step satisfying

    B_h p_h = -g_h,

where B_h = D H D + C and g_h = D g. In least squares B_h = J_h^T J_h, where
J_h = J D. Note that J_h and g_h are the proper Jacobian and gradient with
respect to the "hat" variables. To guarantee global convergence we formulate
a trust-region problem based on the Newton step in the new variables:

    0.5 * p_h^T B_h p_h + g_h^T p_h -> min, ||p_h|| <= Delta

In the original space B = H + D^{-1} C D^{-1}, and the equivalent trust-region
problem is

    0.5 * p^T B p + g^T p -> min, ||D^{-1} p|| <= Delta

Here the meaning of the matrix D becomes clearer: it alters the shape of the
trust region so that large steps towards the bounds are not allowed.

In the implementation, the trust-region problem is solved in "hat" space,
but handling of the bounds is done in the original space (see below and read
the code).
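
In the code below, J_h corresponds to J * d, g_h to d * g, and diag_h holds
the diagonal of C (Jv is diagonal, so C is as well); the extra `x_scale`
factor is folded into d. The hat-space model is then evaluated as

    0.5 * (||J_h p_h||^2 + p_h^T diag(diag_h) p_h) + g_h^T p_h.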

The introduction of the matrix D does not allow the algorithm to ignore the
bounds: iterates must be kept strictly feasible (to satisfy the aforementioned
differentiability), and the parameter theta controls how far the step backs
off from the boundary (see the code for details).

The algorithm uses another important trick. If the trust-region solution does
not fit into the bounds, then a search direction reflected from the first
encountered bound is considered. For motivation and analysis refer to the
[STIR]_ paper (and other papers of the authors). In practice it does not
require much justification: the algorithm simply chooses the best of three
steps: a constrained trust-region step, a reflected step and a constrained
Cauchy step (a minimizer along -g_h in "hat" space, or -D^2 g in the original
space).
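
For instance, if the trust-region step first crosses the upper bound of
variable i, the reflected direction is p_h with its i-th component negated,
and the search along it starts from the point where the step meets that
bound; strict feasibility of all candidate steps is maintained through the
factor theta.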

Another feature is that the trust-region radius control strategy is modified
to account for the appearance of the diagonal matrix C (called diag_h in the
code).

Note that all of the described peculiarities vanish for problems without
bounds (the algorithm becomes a standard trust-region method, very similar to
the ones implemented in MINPACK).

The implementation supports two methods of solving the trust-region problem.
The first, called 'exact', computes the SVD of the Jacobian and then solves
the problem very accurately using the algorithm described in [JJMore]_. It is
not applicable to large problems. The second, called 'lsmr', uses the 2-D
subspace approach (sometimes called "indefinite dogleg"), where the problem
is solved in a subspace spanned by the gradient and the approximate
Gauss-Newton step found by ``scipy.sparse.linalg.lsmr``. The 2-D trust-region
problem is reformulated as a 4th order algebraic equation and solved very
accurately by ``numpy.roots``. The subspace approach allows solving very
large problems (up to a couple of million residuals on a regular PC),
provided the Jacobian matrix is sufficiently sparse.
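
Schematically, with S an orthonormal basis of span{g_h, gn_h} obtained by QR
(gn_h being the approximate Gauss-Newton step from lsmr), the reduced problem

    0.5 * q^T (S^T B_h S) q + (S^T g_h)^T q -> min, ||q|| <= Delta

is solved exactly in two dimensions and the step is recovered as p_h = S q.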

References
----------
.. [STIR] Branch, M.A., T.F. Coleman, and Y. Li, "A Subspace, Interior,
          and Conjugate Gradient Method for Large-Scale Bound-Constrained
          Minimization Problems," SIAM Journal on Scientific Computing,
          Vol. 21, Number 1, pp 1-23, 1999.
.. [JJMore] More, J. J., "The Levenberg-Marquardt Algorithm: Implementation
            and Theory," Numerical Analysis, ed. G. A. Watson, Lecture
            Notes in Mathematics 630, Springer Verlag, pp. 105-116, 1977.
"""
import numpy as np
from numpy.linalg import norm
from scipy.linalg import svd, qr
from scipy.sparse.linalg import lsmr
from scipy.optimize import OptimizeResult

from .common import (
    step_size_to_bound, find_active_constraints, in_bounds,
    make_strictly_feasible, intersect_trust_region, solve_lsq_trust_region,
    solve_trust_region_2d, minimize_quadratic_1d, build_quadratic_1d,
    evaluate_quadratic, right_multiplied_operator, regularized_lsq_operator,
    CL_scaling_vector, compute_grad, compute_jac_scale, check_termination,
    update_tr_radius, scale_for_robust_loss_function, print_header_nonlinear,
    print_iteration_nonlinear)


def trf(fun, jac, x0, f0, J0, lb, ub, ftol, xtol, gtol, max_nfev, x_scale,
        loss_function, tr_solver, tr_options, verbose):
    # For efficiency, it makes sense to run the simplified version of the
    # algorithm when no bounds are imposed. We decided to write two separate
    # functions: it violates the DRY principle, but keeps each function as
    # readable as possible.
    if np.all(lb == -np.inf) and np.all(ub == np.inf):
        return trf_no_bounds(
            fun, jac, x0, f0, J0, ftol, xtol, gtol, max_nfev, x_scale,
            loss_function, tr_solver, tr_options, verbose)
    else:
        return trf_bounds(
            fun, jac, x0, f0, J0, lb, ub, ftol, xtol, gtol, max_nfev, x_scale,
            loss_function, tr_solver, tr_options, verbose)
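

# A minimal usage sketch (illustrative only): within SciPy this function is
# normally reached through scipy.optimize.least_squares(..., method='trf')
# rather than called directly. The residual function, Jacobian and tolerance
# values below are made-up examples.
#
#     fun = lambda x: np.array([x[0] - 1.0, 10.0 * (x[1] - x[0]**2)])
#     jac = lambda x, f=None: np.array([[1.0, 0.0], [-20.0 * x[0], 10.0]])
#     x0 = np.array([2.0, 2.0])
#     lb, ub = np.array([-np.inf, -np.inf]), np.array([np.inf, np.inf])
#     res = trf(fun, jac, x0, fun(x0), jac(x0), lb, ub,
#               ftol=1e-8, xtol=1e-8, gtol=1e-8, max_nfev=None,
#               x_scale=np.array([1.0, 1.0]), loss_function=None,
#               tr_solver='exact', tr_options={}, verbose=0)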


def select_step(x, J_h, diag_h, g_h, p, p_h, d, Delta, lb, ub, theta):
    """Select the best step according to Trust Region Reflective algorithm."""
    if in_bounds(x + p, lb, ub):
        p_value = evaluate_quadratic(J_h, g_h, p_h, diag=diag_h)
        return p, p_h, -p_value

    p_stride, hits = step_size_to_bound(x, p, lb, ub)

    # Compute the reflected direction.
    r_h = np.copy(p_h)
    r_h[hits.astype(bool)] *= -1
    r = d * r_h

    # Restrict the trust-region step, such that it hits the bound.
    p *= p_stride
    p_h *= p_stride
    x_on_bound = x + p

    # The reflected direction will first cross either the feasible region
    # or the trust-region boundary.
    _, to_tr = intersect_trust_region(p_h, r_h, Delta)
    to_bound, _ = step_size_to_bound(x_on_bound, r, lb, ub)

    # Find lower and upper bounds on a step size along the reflected
    # direction, considering the strict feasibility requirement. There is no
    # single correct way to do that; the chosen approach seems to work best
    # on test problems.
    r_stride = min(to_bound, to_tr)
    if r_stride > 0:
        r_stride_l = (1 - theta) * p_stride / r_stride
        if r_stride == to_bound:
            r_stride_u = theta * to_bound
        else:
            r_stride_u = to_tr
    else:
        r_stride_l = 0
        r_stride_u = -1

    # Check if the reflection step is available.
    if r_stride_l <= r_stride_u:
        a, b, c = build_quadratic_1d(J_h, g_h, r_h, s0=p_h, diag=diag_h)
        r_stride, r_value = minimize_quadratic_1d(
            a, b, r_stride_l, r_stride_u, c=c)
        r_h *= r_stride
        r_h += p_h
        r = r_h * d
    else:
        r_value = np.inf

    # Now correct p_h to make it strictly interior.
    p *= theta
    p_h *= theta
    p_value = evaluate_quadratic(J_h, g_h, p_h, diag=diag_h)

    ag_h = -g_h
    ag = d * ag_h

    to_tr = Delta / norm(ag_h)
    to_bound, _ = step_size_to_bound(x, ag, lb, ub)
    if to_bound < to_tr:
        ag_stride = theta * to_bound
    else:
        ag_stride = to_tr

    a, b = build_quadratic_1d(J_h, g_h, ag_h, diag=diag_h)
    ag_stride, ag_value = minimize_quadratic_1d(a, b, 0, ag_stride)
    ag_h *= ag_stride
    ag *= ag_stride

    if p_value < r_value and p_value < ag_value:
        return p, p_h, -p_value
    elif r_value < p_value and r_value < ag_value:
        return r, r_h, -r_value
    else:
        return ag, ag_h, -ag_value


def trf_bounds(fun, jac, x0, f0, J0, lb, ub, ftol, xtol, gtol, max_nfev,
               x_scale, loss_function, tr_solver, tr_options, verbose):
    x = x0.copy()

    f = f0
    f_true = f.copy()
    nfev = 1

    J = J0
    njev = 1
    m, n = J.shape

    if loss_function is not None:
        rho = loss_function(f)
        cost = 0.5 * np.sum(rho[0])
        J, f = scale_for_robust_loss_function(J, f, rho)
    else:
        cost = 0.5 * np.dot(f, f)

    g = compute_grad(J, f)

    jac_scale = isinstance(x_scale, str) and x_scale == 'jac'
    if jac_scale:
        scale, scale_inv = compute_jac_scale(J)
    else:
        scale, scale_inv = x_scale, 1 / x_scale

    v, dv = CL_scaling_vector(x, g, lb, ub)
    v[dv != 0] *= scale_inv[dv != 0]
    Delta = norm(x0 * scale_inv / v**0.5)
    if Delta == 0:
        Delta = 1.0

    g_norm = norm(g * v, ord=np.inf)

    f_augmented = np.zeros((m + n))
    if tr_solver == 'exact':
        J_augmented = np.empty((m + n, n))
    elif tr_solver == 'lsmr':
        reg_term = 0.0
        regularize = tr_options.pop('regularize', True)

    if max_nfev is None:
        max_nfev = x0.size * 100

    alpha = 0.0  # "Levenberg-Marquardt" parameter

    termination_status = None
    iteration = 0
    step_norm = None
    actual_reduction = None

    if verbose == 2:
        print_header_nonlinear()

    while True:
        v, dv = CL_scaling_vector(x, g, lb, ub)

        g_norm = norm(g * v, ord=np.inf)
        if g_norm < gtol:
            termination_status = 1

        if verbose == 2:
            print_iteration_nonlinear(iteration, nfev, cost, actual_reduction,
                                      step_norm, g_norm)

        if termination_status is not None or nfev == max_nfev:
            break

        # Now compute variables in "hat" space. Here, we also account for
        # scaling introduced by the `x_scale` parameter. This part is a bit
        # tricky: you have to write down the formulas and see how the
        # trust-region problem is formulated when the two types of scaling
        # are applied. The idea is that first we apply `x_scale` and then
        # apply the Coleman-Li approach in the new variables.

        # v is recomputed in the variables after applying `x_scale`; note
        # that components which were identically 1 are not affected.
        v[dv != 0] *= scale_inv[dv != 0]

        # Here, we apply two types of scaling.
        d = v**0.5 * scale

        # C = diag(g * scale) Jv
        diag_h = g * dv * scale

        # After all this has been done, we continue normally.

        # "hat" gradient.
        g_h = d * g

        f_augmented[:m] = f
        if tr_solver == 'exact':
            J_augmented[:m] = J * d
            J_h = J_augmented[:m]  # Memory view.
            J_augmented[m:] = np.diag(diag_h**0.5)
            U, s, V = svd(J_augmented, full_matrices=False)
            V = V.T
            uf = U.T.dot(f_augmented)
        elif tr_solver == 'lsmr':
            J_h = right_multiplied_operator(J, d)

            if regularize:
                a, b = build_quadratic_1d(J_h, g_h, -g_h, diag=diag_h)
                to_tr = Delta / norm(g_h)
                ag_value = minimize_quadratic_1d(a, b, 0, to_tr)[1]
                reg_term = -ag_value / Delta**2

            lsmr_op = regularized_lsq_operator(J_h, (diag_h + reg_term)**0.5)
            gn_h = lsmr(lsmr_op, f_augmented, **tr_options)[0]
            S = np.vstack((g_h, gn_h)).T
            S, _ = qr(S, mode='economic')
            JS = J_h.dot(S)  # LinearOperator does dot too.
            B_S = np.dot(JS.T, JS) + np.dot(S.T * diag_h, S)
            g_S = S.T.dot(g_h)

        # theta controls how far the step backs off from the bounds.
        theta = max(0.995, 1 - g_norm)

        actual_reduction = -1
        while actual_reduction <= 0 and nfev < max_nfev:
            if tr_solver == 'exact':
                p_h, alpha, n_iter = solve_lsq_trust_region(
                    n, m, uf, s, V, Delta, initial_alpha=alpha)
            elif tr_solver == 'lsmr':
                p_S, _ = solve_trust_region_2d(B_S, g_S, Delta)
                p_h = S.dot(p_S)

            p = d * p_h  # Trust-region solution in the original space.
            step, step_h, predicted_reduction = select_step(
                x, J_h, diag_h, g_h, p, p_h, d, Delta, lb, ub, theta)

            x_new = make_strictly_feasible(x + step, lb, ub, rstep=0)
            f_new = fun(x_new)
            nfev += 1

            step_h_norm = norm(step_h)

            if not np.all(np.isfinite(f_new)):
                Delta = 0.25 * step_h_norm
                continue

            # Usual trust-region step quality estimation.
            if loss_function is not None:
                cost_new = loss_function(f_new, cost_only=True)
            else:
                cost_new = 0.5 * np.dot(f_new, f_new)
            actual_reduction = cost - cost_new

            Delta_new, ratio = update_tr_radius(
                Delta, actual_reduction, predicted_reduction,
                step_h_norm, step_h_norm > 0.95 * Delta)

            step_norm = norm(step)
            termination_status = check_termination(
                actual_reduction, cost, step_norm, norm(x), ratio, ftol, xtol)
            if termination_status is not None:
                break

            alpha *= Delta / Delta_new
            Delta = Delta_new

        if actual_reduction > 0:
            x = x_new

            f = f_new
            f_true = f.copy()

            cost = cost_new

            J = jac(x, f)
            njev += 1

            if loss_function is not None:
                rho = loss_function(f)
                J, f = scale_for_robust_loss_function(J, f, rho)

            g = compute_grad(J, f)

            if jac_scale:
                scale, scale_inv = compute_jac_scale(J, scale_inv)
        else:
            step_norm = 0
            actual_reduction = 0

        iteration += 1

    if termination_status is None:
        termination_status = 0

    active_mask = find_active_constraints(x, lb, ub, rtol=xtol)
    return OptimizeResult(
        x=x, cost=cost, fun=f_true, jac=J, grad=g, optimality=g_norm,
        active_mask=active_mask, nfev=nfev, njev=njev,
        status=termination_status)


def trf_no_bounds(fun, jac, x0, f0, J0, ftol, xtol, gtol, max_nfev,
                  x_scale, loss_function, tr_solver, tr_options, verbose):
    x = x0.copy()

    f = f0
    f_true = f.copy()
    nfev = 1

    J = J0
    njev = 1
    m, n = J.shape

    if loss_function is not None:
        rho = loss_function(f)
        cost = 0.5 * np.sum(rho[0])
        J, f = scale_for_robust_loss_function(J, f, rho)
    else:
        cost = 0.5 * np.dot(f, f)

    g = compute_grad(J, f)

    jac_scale = isinstance(x_scale, str) and x_scale == 'jac'
    if jac_scale:
        scale, scale_inv = compute_jac_scale(J)
    else:
        scale, scale_inv = x_scale, 1 / x_scale

    Delta = norm(x0 * scale_inv)
    if Delta == 0:
        Delta = 1.0

    if tr_solver == 'lsmr':
        reg_term = 0
        damp = tr_options.pop('damp', 0.0)
        regularize = tr_options.pop('regularize', True)

    if max_nfev is None:
        max_nfev = x0.size * 100

    alpha = 0.0  # "Levenberg-Marquardt" parameter

    termination_status = None
    iteration = 0
    step_norm = None
    actual_reduction = None

    if verbose == 2:
        print_header_nonlinear()

    while True:
        g_norm = norm(g, ord=np.inf)
        if g_norm < gtol:
            termination_status = 1

        if verbose == 2:
            print_iteration_nonlinear(iteration, nfev, cost, actual_reduction,
                                      step_norm, g_norm)

        if termination_status is not None or nfev == max_nfev:
            break

        d = scale
        g_h = d * g

        if tr_solver == 'exact':
            J_h = J * d
            U, s, V = svd(J_h, full_matrices=False)
            V = V.T
            uf = U.T.dot(f)
        elif tr_solver == 'lsmr':
            J_h = right_multiplied_operator(J, d)

            if regularize:
                a, b = build_quadratic_1d(J_h, g_h, -g_h)
                to_tr = Delta / norm(g_h)
                ag_value = minimize_quadratic_1d(a, b, 0, to_tr)[1]
                reg_term = -ag_value / Delta**2

            damp_full = (damp**2 + reg_term)**0.5
            gn_h = lsmr(J_h, f, damp=damp_full, **tr_options)[0]
            S = np.vstack((g_h, gn_h)).T
            S, _ = qr(S, mode='economic')
            JS = J_h.dot(S)
            B_S = np.dot(JS.T, JS)
            g_S = S.T.dot(g_h)

        actual_reduction = -1
        while actual_reduction <= 0 and nfev < max_nfev:
            if tr_solver == 'exact':
                step_h, alpha, n_iter = solve_lsq_trust_region(
                    n, m, uf, s, V, Delta, initial_alpha=alpha)
            elif tr_solver == 'lsmr':
                p_S, _ = solve_trust_region_2d(B_S, g_S, Delta)
                step_h = S.dot(p_S)

            predicted_reduction = -evaluate_quadratic(J_h, g_h, step_h)
            step = d * step_h
            x_new = x + step
            f_new = fun(x_new)
            nfev += 1

            step_h_norm = norm(step_h)

            if not np.all(np.isfinite(f_new)):
                Delta = 0.25 * step_h_norm
                continue

            # Usual trust-region step quality estimation.
            if loss_function is not None:
                cost_new = loss_function(f_new, cost_only=True)
            else:
                cost_new = 0.5 * np.dot(f_new, f_new)
            actual_reduction = cost - cost_new

            Delta_new, ratio = update_tr_radius(
                Delta, actual_reduction, predicted_reduction,
                step_h_norm, step_h_norm > 0.95 * Delta)

            step_norm = norm(step)
            termination_status = check_termination(
                actual_reduction, cost, step_norm, norm(x), ratio, ftol, xtol)
            if termination_status is not None:
                break

            alpha *= Delta / Delta_new
            Delta = Delta_new

        if actual_reduction > 0:
            x = x_new

            f = f_new
            f_true = f.copy()

            cost = cost_new

            J = jac(x, f)
            njev += 1

            if loss_function is not None:
                rho = loss_function(f)
                J, f = scale_for_robust_loss_function(J, f, rho)

            g = compute_grad(J, f)

            if jac_scale:
                scale, scale_inv = compute_jac_scale(J, scale_inv)
        else:
            step_norm = 0
            actual_reduction = 0

        iteration += 1

    if termination_status is None:
        termination_status = 0

    active_mask = np.zeros_like(x)
    return OptimizeResult(
        x=x, cost=cost, fun=f_true, jac=J, grad=g, optimality=g_norm,
        active_mask=active_mask, nfev=nfev, njev=njev,
        status=termination_status)