sigpy.alg.GradientMethod

class sigpy.alg.GradientMethod(gradf, x, alpha, proxg=None, accelerate=False, max_iter=100, tol=0)[source]

First-order gradient method.

For the simplest setting when proxg is not specified, the method considers the objective function:

\[\min_x f(x)\]

where \(f\) is (sub)-differentiable and performs the update:

\[x_\text{new} = x - \alpha \nabla f(x)\]
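As an illustration of this update rule (a minimal NumPy sketch, not sigpy code; the matrix `A` and vector `b` below are made up for the example), consider gradient descent on the least-squares objective \(f(x) = \frac{1}{2}\|Ax - b\|_2^2\):

```python
import numpy as np

# Sketch of the update x_new = x - alpha * grad f(x)
# for f(x) = 0.5 * ||A x - b||_2^2, so grad f(x) = A.T @ (A x - b).
# (Illustration only; not sigpy code.)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])

def gradf(x):
    return A.T @ (A @ x - b)

# A step size at most 1 / L, where L is the largest eigenvalue of
# A.T @ A (the Lipschitz constant of grad f), guarantees convergence.
L = np.linalg.eigvalsh(A.T @ A).max()
alpha = 1.0 / L

x = np.zeros(2)
for _ in range(500):
    x = x - alpha * gradf(x)

# x approaches the exact minimizer solve(A, b) = [2, 3]
```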

When proxg is specified, the method considers the composite objective function:

\[\min_x f(x) + g(x)\]

where \(f\) is (sub)-differentiable and \(g\) is simple, and performs the update:

\[x_\text{new} = \text{prox}_{\alpha g}(x - \alpha \nabla f(x))\]
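For a concrete instance of the proximal update (again a NumPy sketch, not sigpy code; the problem data below are invented), take \(g(x) = \lambda \|x\|_1\), whose proximal operator is elementwise soft-thresholding. The resulting iteration is the classical ISTA algorithm:

```python
import numpy as np

# Sketch of x_new = prox_{alpha g}(x - alpha * grad f(x))
# with f(x) = 0.5 * ||A x - b||_2^2 and g(x) = lam * ||x||_1.
# (Illustration only; not sigpy code.)

def soft_threshold(z, t):
    # prox of t * ||.||_1: shrink each entry toward zero by t
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = A @ np.array([1.0, -2.0] + [0.0] * 8)  # sparse ground truth
lam = 0.1

alpha = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of grad f
x = np.zeros(10)
for _ in range(2000):
    x = soft_threshold(x - alpha * A.T @ (A @ x - b), alpha * lam)
```

At convergence, `x` is a fixed point of the update and attains a lower composite objective than the zero initialization.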

Nesterov’s acceleration is enabled by setting the accelerate option to True.
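The following NumPy sketch shows what acceleration adds to the plain gradient step, using the FISTA momentum schedule of Beck & Teboulle (this is an illustration of the technique, not sigpy’s implementation; the problem data are invented):

```python
import numpy as np

# Accelerated gradient descent on f(x) = 0.5 * ||A x - b||_2^2:
# take the gradient step at an extrapolated point z, then update the
# momentum parameter t per the FISTA schedule.
# (Illustration only; not sigpy code.)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])

def gradf(x):
    return A.T @ (A @ x - b)

alpha = 1.0 / np.linalg.eigvalsh(A.T @ A).max()
x = np.zeros(2)
z = x.copy()          # extrapolated point
t = 1.0               # momentum parameter
for _ in range(500):
    x_old = x
    x = z - alpha * gradf(z)                   # gradient step at z
    t_old, t = t, (1 + np.sqrt(1 + 4 * t * t)) / 2
    z = x + ((t_old - 1) / t) * (x - x_old)    # momentum extrapolation
```

The momentum step costs one extra vector operation per iteration but improves the worst-case convergence rate from O(1/k) to O(1/k^2).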

Parameters:
  • gradf (function) – function to compute \(\nabla f\).
  • x (array) – variable to optimize over.
  • alpha (float or None) – step size, or initial step size if backtracking line-search is on.
  • proxg (Prox, function or None) – Prox or function to compute proximal operator of \(g\).
  • accelerate (bool) – toggle Nesterov acceleration.
  • max_iter (int) – maximum number of iterations.
  • tol (float) – tolerance for stopping condition.

References

Nesterov, Y. E. (1983). A method for solving the convex programming problem with convergence rate O(1/k^2). In Dokl. Akad. Nauk SSSR (Vol. 269, pp. 543-547).

Beck, A., & Teboulle, M. (2009). A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1), 183-202.

__init__(gradf, x, alpha, proxg=None, accelerate=False, max_iter=100, tol=0)[source]

Initialize self. See help(type(self)) for accurate signature.

Methods

__init__(gradf, x, alpha[, proxg, …]) Initialize self.
done() Return whether the algorithm is done.
update() Perform one update step.
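The done()/update() pair suggests an iterative driver pattern. The stand-in class below is a hypothetical minimal re-implementation of that interface for the unaccelerated, prox-free case (the residual-based stopping rule and all names are assumptions for illustration, not sigpy’s actual code):

```python
import numpy as np

# Hypothetical stand-in illustrating the done()/update() driver pattern
# documented above. Not sigpy's implementation; the stopping criterion
# (step length divided by alpha falling below tol) is an assumption.
class TinyGradientMethod:
    def __init__(self, gradf, x, alpha, max_iter=100, tol=0):
        self.gradf, self.x, self.alpha = gradf, x, alpha
        self.max_iter, self.tol = max_iter, tol
        self.iter, self.resid = 0, np.inf

    def update(self):
        # One gradient step: x_new = x - alpha * grad f(x)
        x_new = self.x - self.alpha * self.gradf(self.x)
        self.resid = np.linalg.norm(x_new - self.x) / self.alpha
        self.x, self.iter = x_new, self.iter + 1

    def done(self):
        return self.iter >= self.max_iter or self.resid <= self.tol

# Driver loop: iterate until max_iter or the residual drops below tol.
# Here gradf corresponds to f(x) = 0.5 * (x - 3)^2, minimized at x = 3.
alg = TinyGradientMethod(lambda x: x - 3.0, np.array([0.0]),
                         alpha=0.5, max_iter=1000, tol=1e-8)
while not alg.done():
    alg.update()
```

With this quadratic, the loop terminates well before max_iter because the residual decays geometrically.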