Modular Proximal Optimization for Multidimensional Total-Variation Regularization
Authors: Alvaro Barbero, Suvrit Sra
JMLR 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We review, analyze, implement, and experiment with a variety of fast algorithms. The ensuing contributions of this paper are summarized below. ...To complement our algorithms, we illustrate several applications of TV prox-operators to: (i) image and video denoising; (ii) image deconvolution; and (iii) four variants of fused-lasso. ...We will now demonstrate the effectiveness of the various solvers covered in a wide array of experiments, as well as show many of their practical applications. |
| Researcher Affiliation | Academia | Alvaro Barbero EMAIL Instituto de Ingeniería del Conocimiento and Universidad Autónoma de Madrid, Francisco Tomás y Valiente 11, Madrid, Spain. Suvrit Sra EMAIL Laboratory for Information and Decision Systems, Massachusetts Institute of Technology (MIT), Cambridge, MA |
| Pseudocode | Yes | Algorithm 1: Taut string algorithm for TV-L1 proximity; Algorithm 2: Linearized taut string algorithm for TV-L1 proximity; Algorithm 3: Modified lines for weighted version of Algorithm 2; Algorithm 4: MSN-based TV-L2 proximity; Algorithm 5: GP algorithm for TV-L2 proximity; Algorithm 6: Frank-Wolfe (FW); Algorithm 7: Proximal Dykstra; Algorithm 8: Parallel-Proximal Dykstra; Algorithm 9: Alternating reflections Douglas-Rachford (DR); Algorithm 10: Alternating Direction Method of Multipliers (ADMM); Algorithm 11: Stepsize selection for Projected Newton; Algorithm 12: PN algorithm for TV-L1 proximity |
| Open Source Code | Yes | The final most important contribution of our paper is a well-tuned, multi-threaded open-source C++, Matlab and Python implementation of all the reviewed and developed methods. Our publicly available library proxTV includes all these implementations, plus bindings for easy usage in Matlab or Python: https://github.com/albarji/proxTV |
| Open Datasets | Yes | INRIA holidays (Jegou et al., 2008), ILSVRC 2010 validation set (Russakovsky et al., 2015), Array CGH (Stransky et al., 2006), Leukemias (Golub et al., 1999), Colon (Alon et al., 1999), Ovarian (Rogers et al., 2005) and Rat (Hua et al., 2009). The salesman, coastguard and bicycle sequences were used, which are publicly available at BM3D (2013). |
| Dataset Splits | Yes | Each data set was split into three equal parts (ensuring similar proportion of classes in every split) for training, validation and test. |
| Hardware Specification | No | In addition to the previous experiments, and to illustrate the parallelization potential of the presented anisotropic filtering method, Figure 20 plots running times for the PD algorithm as the number of processor cores ranges from 1 through 16. We see that for the smaller images, the gains due to more processors essentially flatten out by 8 cores, where synchronization and memory contention offset potential computational gains (first row). For the larger images, there is a steadier speedup as the number of cores increases (in each plot there seems to be a bump at 14 processors; we attribute this to a quirk of the multicore machine that we used). |
| Software Dependencies | No | All the solvers implemented for this paper were coded in C++ for efficiency. Our publicly available library proxTV includes all these implementations, plus bindings for easy usage in Matlab or Python: https://github.com/albarji/proxTV. Matrix operations have been implemented by exploiting the LAPACK (Fortran) library (Anderson et al., 1999). |
| Experiment Setup | Yes | A penalty λ ∈ [0, 50] is chosen at random for each run, and the data vector y with uniformly random entries y_i ∈ [−2λ, 2λ] (proportionally scaled to λ). For Projected Newton and SLEP a duality gap of 10^-5 is used as the stopping criterion. For the hybrid taut-string method the switch parameter is set as S = 1.05. In practice we use tolerance values ε_λ = 10^-6 and ε_gap = 10^-5. We set the penalties to λ1 = λ2 = 10. The penalty parameters were found by exhaustive grid search in the range λ1, λ2 ∈ [10^-4, 10^2] to maximize classification accuracy on the validation splits. Values for the regularization parameter λ were found by maximizing the quality of the reconstruction, measured using the Improved Signal-to-Noise Ratio (ISNR). |
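The randomized benchmark described in the Experiment Setup row is straightforward to reproduce. Below is a minimal pure-Python sketch (the helper names `make_instance` and `tv1_objective` are ours, not part of the proxTV library) that draws λ uniformly from [0, 50], samples y with entries in [−2λ, 2λ], and evaluates the TV-L1 prox objective 0.5·||x − y||² + λ·Σ|x_{i+1} − x_i| that the surveyed solvers minimize.

```python
import random


def make_instance(n, seed=0):
    """Sketch of the paper's randomized benchmark: a uniform penalty
    lam in [0, 50] and a data vector y with entries in [-2*lam, 2*lam]."""
    rng = random.Random(seed)
    lam = rng.uniform(0.0, 50.0)
    y = [rng.uniform(-2.0 * lam, 2.0 * lam) for _ in range(n)]
    return lam, y


def tv1_objective(x, y, lam):
    """TV-L1 prox objective: 0.5 * ||x - y||^2 + lam * sum_i |x[i+1] - x[i]|.
    Any candidate solution from a TV prox solver can be scored with this."""
    fidelity = 0.5 * sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    tv_term = sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))
    return fidelity + lam * tv_term
```

Since the prox problem is strongly convex, any two solvers can be compared directly by evaluating `tv1_objective` on their outputs for the same (λ, y) instance, which is essentially how the paper's duality-gap stopping criteria are benchmarked.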