sigpy.app.LinearLeastSquares

class sigpy.app.LinearLeastSquares(A, y, x=None, proxg=None, lamda=0, G=None, g=None, z=None, solver=None, max_iter=100, P=None, alpha=None, max_power_iter=30, accelerate=True, tau=None, sigma=None, rho=1, max_cg_iter=10, tol=0, save_objective_values=False, show_pbar=True, leave_pbar=True)[source]

Linear least squares application.

Solves the following problem, with optional regularizations:

\[\min_x \frac{1}{2} \| A x - y \|_2^2 + g(G x) + \frac{\lambda}{2} \| x - z \|_2^2\]

Four solvers can be used: sigpy.alg.ConjugateGradient, sigpy.alg.GradientMethod, sigpy.alg.ADMM, and sigpy.alg.PrimalDualHybridGradient. If solver is None, sigpy.alg.ConjugateGradient is used when proxg is not specified. If proxg is specified, then sigpy.alg.GradientMethod is used when G is None, and sigpy.alg.PrimalDualHybridGradient is used otherwise.
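The default selection above can be summarized as a small dispatch function. This is an illustration of the documented rules only, not sigpy's actual implementation:

```python
def default_solver(proxg=None, G=None):
    """Illustrative sketch of the default solver selection
    described above (not sigpy's internal code)."""
    if proxg is None:
        # No proximal operator: plain (possibly l2-regularized) least squares.
        return "ConjugateGradient"
    if G is None:
        # proxg given, no regularization linop: proximal gradient method.
        return "GradientMethod"
    # proxg composed with a linop G: primal-dual splitting.
    return "PrimalDualHybridGradient"
```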

Parameters:
  • A (Linop) – Forward linear operator.
  • y (array) – Observation.
  • x (array) – Solution variable; if provided, it is used as the initial guess.
  • proxg (Prox) – Proximal operator of g.
  • lamda (float) – l2 regularization parameter.
  • G (None or Linop) – Regularization linear operator.
  • g (None or function) – Regularization function. Only used when save_objective_values is True.
  • z (float or array) – Bias for l2 regularization.
  • solver (str) – {‘ConjugateGradient’, ‘GradientMethod’, ‘PrimalDualHybridGradient’, ‘ADMM’}.
  • max_iter (int) – Maximum number of iterations.
  • P (Linop) – Preconditioner for ConjugateGradient.
  • alpha (None or float) – Step size for GradientMethod.
  • accelerate (bool) – Toggle Nesterov acceleration for GradientMethod.
  • max_power_iter (int) – Maximum number of iterations for power method. Used for GradientMethod when alpha is not specified, and for PrimalDualHybridGradient when tau or sigma is not specified.
  • tau (float) – Primal step-size for PrimalDualHybridGradient.
  • sigma (float) – Dual step-size for PrimalDualHybridGradient.
  • rho (float) – Augmented Lagrangian parameter for ADMM.
  • max_cg_iter (int) – Maximum number of iterations for conjugate gradient in ADMM.
  • tol (float) – Tolerance for the stopping condition.
  • save_objective_values (bool) – Toggle saving objective values.
  • show_pbar (bool) – Toggle whether to show the progress bar.
  • leave_pbar (bool) – Toggle whether to leave the progress bar displayed after finishing.
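When proxg and G are omitted, the problem reduces to the pure l2-regularized case, whose minimizer satisfies the normal equations \((A^T A + \lambda I) x = A^T y + \lambda z\) that ConjugateGradient solves. A minimal NumPy sketch with small, hypothetical dense data (a direct solve standing in for the iterative method):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))   # forward operator as a dense matrix
x_true = rng.standard_normal(5)
y = A @ x_true                     # noiseless observation
lamda = 0.1
z = np.zeros(5)                    # bias for the l2 term

# Regularized normal equations: (A^T A + lamda * I) x = A^T y + lamda * z
lhs = A.T @ A + lamda * np.eye(5)
rhs = A.T @ y + lamda * z
x = np.linalg.solve(lhs, rhs)
```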
__init__(A, y, x=None, proxg=None, lamda=0, G=None, g=None, z=None, solver=None, max_iter=100, P=None, alpha=None, max_power_iter=30, accelerate=True, tau=None, sigma=None, rho=1, max_cg_iter=10, tol=0, save_objective_values=False, show_pbar=True, leave_pbar=True)[source]

Initialize self. See help(type(self)) for accurate signature.

Methods

__init__(A, y[, x, proxg, lamda, G, g, z, …]) Initialize self.
objective() Compute the objective value.
run() Run the App.
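When proxg encodes, for example, an l1 penalty, the underlying iteration is a proximal gradient step \(x \leftarrow \mathrm{prox}_{\alpha g}(x - \alpha A^T(Ax - y))\). A self-contained NumPy sketch of this scheme with soft-thresholding as the prox (an illustration of the algorithm, not sigpy's implementation; all data here is hypothetical):

```python
import numpy as np

def soft_threshold(v, t):
    """Prox of t * ||.||_1: elementwise shrinkage toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0)

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.0, -2.0]           # sparse ground truth
y = A @ x_true

lam = 0.1
alpha = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / ||A||_2^2, cf. max_power_iter
x = np.zeros(10)
for _ in range(500):
    grad = A.T @ (A @ x - y)             # gradient of the data-fidelity term
    x = soft_threshold(x - alpha * grad, alpha * lam)
```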