MOCCA: Mirrored Convex/Concave Optimization for Nonconvex Composite Functions

Authors: Rina Foygel Barber, Emil Y. Sidky

JMLR 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical results show fast convergence for several structured signal recovery problems. Keywords: MOCCA, ADMM, nonconvex, penalized likelihood, total variation, computed tomography
Researcher Affiliation | Academia | Rina Foygel Barber (EMAIL), Department of Statistics, University of Chicago, 5747 South Ellis Avenue, Chicago, IL 60637, USA; Emil Y. Sidky (EMAIL), Department of Radiology, University of Chicago, 5841 South Maryland Avenue, Chicago, IL 60637, USA
Pseudocode | Yes | Algorithm 1 (mocca algorithm). Input: functions F, G with local convex approximations F_v, G_z, linear operator K, positive diagonal step size matrices Σ, T, extrapolation parameter θ ∈ [0, 1]. Initialize: primal point x^0 ∈ R^d, dual point w^0 ∈ R^m, expansion points (z^0, v^0) ∈ R^d × R^m. For t = 0, 1, 2, . . . do: update the x and w variables; writing x̄^{t+1} = x^{t+1} + θ(x^{t+1} − x^t), define ... Algorithm 2 (mocca algorithm, stable version with inner loop) ... Algorithm 3 (mocca algorithm, ADMM version)
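For intuition, the primal–dual skeleton quoted above can be sketched in the convex special case, where the expansion points (z, v) can be held fixed and the iteration reduces to a Chambolle–Pock-type scheme. The instance below (quadratic F and G with closed-form proximal maps, scalar step sizes in place of the diagonal matrices Σ, T, and a random K) is an illustrative assumption, not the paper's experiment:

```python
import numpy as np

def primal_dual_sketch(K, b, mu, n_iters=2000):
    """Convex special case of the mocca-style iteration: solves
        min_x  0.5*||K x - b||^2 + 0.5*mu*||x||^2
    with F(y) = 0.5*||y - b||^2 applied to K x and G(x) = 0.5*mu*||x||^2."""
    m, d = K.shape
    L = np.linalg.norm(K, 2)        # operator norm of K
    sigma = tau = 0.9 / L           # ensures sigma * tau * ||K||^2 < 1
    theta = 1.0                     # extrapolation parameter in [0, 1]
    x = np.zeros(d)
    w = np.zeros(m)
    x_bar = x.copy()
    for _ in range(n_iters):
        # dual step: w <- prox_{sigma F*}(w + sigma * K x_bar),
        # which for this quadratic F is (u - sigma*b) / (1 + sigma)
        w = (w + sigma * (K @ x_bar) - sigma * b) / (1.0 + sigma)
        # primal step: x <- prox_{tau G}(x - tau * K^T w) = u / (1 + tau*mu)
        x_new = (x - tau * (K.T @ w)) / (1.0 + tau * mu)
        # extrapolation: x_bar <- x_new + theta * (x_new - x)
        x_bar = x_new + theta * (x_new - x)
        x = x_new
    return x
```

Because the objective here is a strongly convex quadratic, the iterates can be checked against the exact ridge solution (K^T K + μI)^{-1} K^T b, which makes the sketch easy to sanity-test.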
Open Source Code | Yes | Code for fully reproducing these simulations is available at http://www.stat.uchicago.edu/~rina/mocca.html.
Open Datasets | No | We generate data as follows: first, we define the true signal x_true ∈ R^d with dimension d = 625, obtained by vectorizing the two-dimensional locally constant array... We generate the signal x_true, the design matrix A, and the response vector b as in Simulation 1.
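A synthetic setup of this kind (a locally constant 25 × 25 image, so d = 625 after vectorizing, observed through a Gaussian linear design) can be generated as below. The block pattern, sample size, design scaling, and noise level are placeholders chosen for illustration, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical locally constant 25 x 25 image; vectorizing gives d = 625.
side = 25
img = np.zeros((side, side))
img[5:15, 5:15] = 1.0        # one constant block (illustrative values)
img[14:22, 12:20] = -0.5     # a second constant block
x_true = img.ravel()         # x_true in R^625

# Gaussian design and noisy linear responses (assumed model b = A x + noise).
n, d = 400, x_true.size
A = rng.standard_normal((n, d)) / np.sqrt(n)
b = A @ x_true + 0.1 * rng.standard_normal(n)
```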
Dataset Splits | No | The paper describes generating synthetic data and applying the algorithm to it, but does not specify any training/test/validation splits, which are typically used for evaluating machine learning models on pre-existing datasets.
Hardware Specification | No | No specific hardware details (like GPU/CPU models, memory, or processor types) are mentioned in the paper for the experimental setup.
Software Dependencies | Yes | All computations were performed in MATLAB (MATLAB, 2015).
Experiment Setup | Yes | For all simulations, we choose not to place a bound on x restrict... In the first simulation... we choose penalty parameter ν = 20 and nonconvexity parameter β = 3... Finally, we choose step size parameters Σ = λ^{−1} I_m and T = λ^{−1} I_d... We test the algorithm across a range of λ values, λ ∈ {4, 8, 16, 32, 64}... In the second simulation... with penalty parameter ν = 20... setting η = λ = 100 or η = λ = 200... We consider stopping rules for the inner loop as follows: either we run the inner loop for a fixed number of steps, n_step ∈ {1, 5}, or we use a convergence criterion ϵ_thresh ∈ {0.1, 0.05, 0.01}.
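The two inner-loop stopping rules quoted above (a fixed number of steps n_step, or a convergence threshold ϵ_thresh) can be sketched as follows. The helper run_inner_loop and the absolute-change convergence criterion are assumptions for illustration; the paper's exact criterion may differ:

```python
import numpy as np

def run_inner_loop(step, x0, n_step=None, eps_thresh=None):
    """Run the inner iteration x <- step(x) until either a fixed
    number of steps n_step is reached, or the change between
    consecutive iterates falls below eps_thresh (an assumed
    absolute-change criterion)."""
    x = np.asarray(x0, dtype=float)
    t = 0
    while True:
        x_new = step(x)
        t += 1
        if n_step is not None and t >= n_step:
            return x_new
        if eps_thresh is not None and np.linalg.norm(x_new - x) <= eps_thresh:
            return x_new
        x = x_new

# Diagonal step-size matrices Sigma = lam^{-1} I_m and T = lam^{-1} I_d
# for one value of lam on the tested grid (m and d chosen for illustration):
lam, m, d = 16.0, 100, 625
Sigma = np.eye(m) / lam
T = np.eye(d) / lam
```

With a fixed budget (n_step) the loop always does exactly that many steps; with a threshold (ϵ_thresh) the number of inner steps adapts to how quickly the inner iteration settles.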