Harmonic Mean Iteratively Reweighted Least Squares for Low-Rank Matrix Recovery

Authors: Christian Kümmerle, Juliane Sigl

JMLR 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We propose a new iteratively reweighted least squares (IRLS) algorithm for the recovery of a matrix X ∈ ℂ^{d1×d2} of rank r ≪ min(d1, d2) from incomplete linear observations, solving a sequence of low complexity linear problems. The easily implementable algorithm, which we call harmonic mean iteratively reweighted least squares (HM-IRLS), optimizes a non-convex Schatten-p quasi-norm penalization to promote low-rankness and carries three major strengths, in particular for the matrix completion setting. First, we observe a remarkable global convergence behavior of the algorithm's iterates to the low-rank matrix for relevant, interesting cases, for which any other state-of-the-art optimization approach fails the recovery. Secondly, HM-IRLS exhibits an empirical recovery probability close to 1 even for a number of measurements very close to the theoretical lower bound r(d1 + d2 − r), i.e., already for significantly fewer linear observations than any other tractable approach in the literature. Thirdly, HM-IRLS exhibits a locally superlinear rate of convergence (of order 2 − p) if the linear observations fulfill a suitable null space property. While for the first two properties we have so far only strong empirical evidence, we prove the third property as our main theoretical result. [...] Numerical experiments and comparisons to state-of-the-art methods for low-rank matrix recovery are carried out in Section 5.
Researcher Affiliation | Academia | Christian Kümmerle EMAIL Department of Mathematics, Technische Universität München, Boltzmannstr. 3, 85748 Garching/Munich, Germany; Juliane Sigl EMAIL Department of Mathematics, Technische Universität München, Boltzmannstr. 3, 85748 Garching/Munich, Germany
Pseudocode | Yes | Algorithm 1: Harmonic Mean IRLS for low-rank matrix recovery (HM-IRLS)
Input: a linear map Φ : M_{d1×d2} → ℂ^m, image Y = Φ(X0) of the ground truth matrix X0 ∈ M_{d1×d2}, rank estimate r̃, non-convexity parameter 0 < p ≤ 1.
Output: sequence (X^(n))_{n=1}^{n0} ⊂ M_{d1×d2}.
Initialize n = 0, ε^(0) = 1 and W̃^(0) = I_{d1·d2} ∈ M_{d1d2×d1d2}.
repeat
  X^(n+1) = argmin_{Φ(X)=Y} ‖X_vec‖²_{ℓ2(W̃^(n))} = (W̃^(n))^{−1} Φ* (Φ (W̃^(n))^{−1} Φ*)^{−1}(Y),   (13)
  ε^(n+1) = min(ε^(n), σ_{r̃+1}(X^(n+1))),   (14)
  W̃^(n+1) = 2 [ U^(n+1)(Σ̃^(n+1)_{d1})^{2−p} U^(n+1)* ⊗ I_{d2} + I_{d1} ⊗ V^(n+1)(Σ̃^(n+1)_{d2})^{2−p} V^(n+1)* ]^{−1},   (15)
where U^(n+1) ∈ U_{d1} and V^(n+1) ∈ U_{d2} are matrices containing the left and right singular vectors of X^(n+1) in their columns, and the Σ̃^(n+1)_{dt} are defined for t ∈ {1, 2} according to (11).
  n = n + 1,
until stopping criterion is met. Set n0 = n.
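The update rules (13)–(15) can be illustrated for the matrix completion case, where Φ samples entries. The following is a hedged Python/NumPy sketch, not the authors' implementation: it forms the d1d2 × d1d2 weight matrix densely (feasible only for small problems, whereas the paper's code exploits structure), all names are illustrative, and the column-major vectorization / Kronecker convention is an assumption.

```python
import numpy as np

def hm_irls(M, mask, r, p=1.0, n_iter=50):
    """Hedged sketch of HM-IRLS (Algorithm 1) for matrix completion.

    M    : (d1, d2) array holding the observed entries (zeros elsewhere).
    mask : boolean (d1, d2) array, True where entries of X0 were observed.
    r    : rank estimate r~ from the algorithm's input.
    """
    d1, d2 = M.shape
    D = d1 * d2
    # Phi as an (m, D) selection matrix acting on vec(X) (column-major vec).
    idx = np.flatnonzero(mask.ravel(order="F"))
    m = idx.size
    Phi = np.zeros((m, D))
    Phi[np.arange(m), idx] = 1.0
    y = M.ravel(order="F")[idx]

    W_inv = np.eye(D)   # (W~^(0))^{-1} = identity
    eps = 1.0           # eps^(0) = 1
    X = np.zeros((d1, d2))
    for _ in range(n_iter):
        # (13): weighted least squares solution consistent with Phi(X) = Y.
        G = Phi @ W_inv @ Phi.T
        X = (W_inv @ Phi.T @ np.linalg.solve(G, y)).reshape(d1, d2, order="F")
        U, s, Vt = np.linalg.svd(X)          # full U (d1 x d1), Vt (d2 x d2)
        # (14): smoothing parameter from the (r~+1)-st singular value.
        eps = min(eps, s[r])
        # Smoothed spectra (cf. (11)): sigma_i -> max(sigma_i, eps), zero-padded.
        s1 = np.maximum(np.pad(s, (0, d1 - s.size)), eps)
        s2 = np.maximum(np.pad(s, (0, d2 - s.size)), eps)
        HL = (U * s1 ** (2 - p)) @ U.T       # U diag(s1^{2-p}) U^*
        HR = (Vt.T * s2 ** (2 - p)) @ Vt     # V diag(s2^{2-p}) V^*
        # (15): harmonic-mean weight, W~^{-1} = (W_L^{-1} + W_R^{-1}) / 2,
        # lifted to vec-space (column-major convention, an assumption here).
        W_inv = (np.kron(np.eye(d2), HL) + np.kron(HR, np.eye(d1))) / 2.0
    return X
```

For a small rank-1 completion problem with a generous sampling rate, this sketch typically recovers the ground truth to high accuracy within a few dozen iterations.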
Open Source Code | Yes | An implementation of HM-IRLS for matrix completion, including code reproducing many of the conducted experiments, is available at https://github.com/ckuemmerle/hm_irls.
Open Datasets | No | In the experiments, we sample (d1 × d2)-dimensional ground truth matrices X0 of rank r such that X0 = U Σ V*, where U ∈ ℝ^{d1×r} and V ∈ ℝ^{d2×r} are independent matrices with i.i.d. standard Gaussian entries and Σ ∈ ℝ^{r×r} is a diagonal matrix with i.i.d. standard Gaussian diagonal entries, independent of U and V.
Dataset Splits | No | The paper describes generating synthetic data for experiments and how measurements are sampled from this generated data (e.g., 'sampling m = ⌊ρ df⌋ entries of X0 uniformly'). However, it does not provide details on fixed train/test/validation splits for a pre-existing dataset.
Hardware Specification | No | The numerical experiments are conducted on Linux and Mac systems with MATLAB R2017b. [...] For the matrix completion case, this allows us to recover low-rank matrices up to, e.g., d1 = d2 = 3000 on a single machine given very few entries with HM-IRLS.
Software Dependencies | Yes | The numerical experiments are conducted on Linux and Mac systems with MATLAB R2017b.
Experiment Setup | Yes | In the experiments, we sample (d1 × d2)-dimensional ground truth matrices X0 of rank r such that X0 = U Σ V*, where U ∈ ℝ^{d1×r} and V ∈ ℝ^{d2×r} are independent matrices with i.i.d. standard Gaussian entries and Σ ∈ ℝ^{r×r} is a diagonal matrix with i.i.d. standard Gaussian diagonal entries, independent of U and V. We choose d1 = d2 = 40, r = 10 and distinguish easy, hard and very hard problems corresponding to oversampling factors ρ of 2.0, 1.2 and 1.0, respectively. The algorithms are provided with the ground truth rank r and are stopped whenever the relative change in Frobenius norm ‖X^(n) − X^(n−1)‖_F / ‖X^(n−1)‖_F drops below the threshold of 10^{−10} or a maximal number of iterations n_max is reached.
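The quoted setup can be sketched end to end: rank-r Gaussian ground truth, number of samples m = ⌊ρ · df⌋ with df = r(d1 + d2 − r) degrees of freedom, uniform entry sampling, and the relative-change stopping criterion. This is an illustrative Python/NumPy sketch; the variable names and the use of sampling without replacement are assumptions, not taken from the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
d1 = d2 = 40
r = 10
rho = 2.0                               # "easy" regime; 1.2 hard, 1.0 very hard

# Ground truth X0 = U Sigma V^* with i.i.d. standard Gaussian factors.
U = rng.standard_normal((d1, r))
V = rng.standard_normal((d2, r))
Sigma = np.diag(rng.standard_normal(r))
X0 = U @ Sigma @ V.T

df = r * (d1 + d2 - r)                  # degrees of freedom of a rank-r matrix
m = int(np.floor(rho * df))             # m = floor(rho * df) sampled entries
# Uniform sampling of m entry positions (without replacement, an assumption).
idx = rng.choice(d1 * d2, size=m, replace=False)

def stop(X_new, X_old, tol=1e-10):
    """Relative Frobenius-norm change criterion from the quoted setup."""
    return np.linalg.norm(X_new - X_old) / np.linalg.norm(X_new) < tol
```

With ρ = 1.0 this samples exactly df = 700 entries, the information-theoretic minimum for a 40 × 40 rank-10 matrix, which is what makes the "very hard" regime hard.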