Near-Optimal Weighted Matrix Completion

Authors: Oscar López

JMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical experiments are presented that validate the theoretical behavior derived for several example weighted programs.
Researcher Affiliation | Academia | Oscar López, EMAIL, Harbor Branch Oceanographic Institute, Florida Atlantic University, Fort Pierce, FL 34946, USA
Pseudocode | No | The paper describes programs like (4), (1), and (5) using mathematical formulations and equations, but it does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block with structured, step-by-step instructions.
Open Source Code | No | The paper mentions using the 'LR-BPDN implementation introduced by Aravkin et al. (2014)', which is a third-party tool. There is no explicit statement or link indicating that the authors' own code for the methodology described in this paper is publicly available.
Open Datasets | No | The paper states, 'Let D = U_r Σ_r V_r^T ∈ ℝ^(n1×n2), where U_r ∈ ℝ^(n1×r) and V_r ∈ ℝ^(n2×r) are constructed by orthogonalizing the columns of a standard random Gaussian matrix with r columns and normalizing so that ‖D‖_F = 1.' This indicates the use of custom-generated synthetic data rather than a publicly available dataset.
Dataset Splits | No | The paper describes generating synthetic data and sampling observed entries: 'The set of observed matrix entries is selected uniformly at random from all subsets of the same cardinality |Ω| = λ·n1·n2, where λ ∈ [0, 1] will be varied to specify a desired sampling percentage.' It does not specify traditional training, validation, or test dataset splits for a fixed dataset.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, or memory) used for running the numerical experiments.
Software Dependencies | No | The paper mentions using the 'LR-BPDN implementation introduced by Aravkin et al. (2014)' but does not specify its version or any other software dependencies with version numbers.
Experiment Setup | Yes | The setup of Eftekhari et al. (2018b) is adopted to generate a data matrix and subspace information. Let D = U_r Σ_r V_r^T ∈ ℝ^(n1×n2), where U_r ∈ ℝ^(n1×r) and V_r ∈ ℝ^(n2×r) are constructed by orthogonalizing the columns of a standard random Gaussian matrix with r columns and normalizing so that ‖D‖_F = 1. To obtain prior knowledge, a perturbed matrix D̃ = D + N is generated, where the entries of N ∈ ℝ^(n1×n2) are i.i.d. Gaussian random variables with variance σ², toggled to select a desired PABS. Then Ũ ∈ ℝ^(n1×r) and Ṽ ∈ ℝ^(n2×r) are the leading r left and right singular vectors of D̃. The dimensions are set to n1 = n2 = 500 and r = 50. The set of observed matrix entries is selected uniformly at random from all subsets of the same cardinality |Ω| = λ·n1·n2, where λ ∈ [0, 1] is varied to specify a desired sampling percentage. In each experiment, D, N, and Ω are generated independently, and programs (4) and (5) are solved with ω = ω1ω2 varying in (0, 1] (setting ω1 = ω2). The plots present the average relative errors over 100 independent trials under trustworthy and relatively inaccurate subspace estimates, respectively.
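The experiment setup above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' code (none is released): the function name `generate_instance` is our own, and since the paper's quoted setup does not specify the singular values Σ_r, a random positive diagonal is assumed here.

```python
import numpy as np

def generate_instance(n1=500, n2=500, r=50, sigma=0.05, lam=0.3, seed=0):
    """Sketch of the synthetic setup: a rank-r ground truth D with ||D||_F = 1,
    a perturbed copy supplying the prior subspaces, and a uniform sampling set."""
    rng = np.random.default_rng(seed)
    # Orthogonalize the columns of standard Gaussian matrices (QR factorization).
    Ur, _ = np.linalg.qr(rng.standard_normal((n1, r)))
    Vr, _ = np.linalg.qr(rng.standard_normal((n2, r)))
    # Assumption: Sigma_r is unspecified in the setup; use a random positive diagonal.
    Sigma = np.diag(rng.uniform(0.5, 1.5, size=r))
    D = Ur @ Sigma @ Vr.T
    D /= np.linalg.norm(D, "fro")  # normalize so that ||D||_F = 1
    # Perturbed matrix D~ = D + N, N with i.i.d. Gaussian entries of variance sigma^2.
    D_tilde = D + sigma * rng.standard_normal((n1, n2))
    # Prior subspaces: leading r left/right singular vectors of the perturbed matrix.
    U, _, Vt = np.linalg.svd(D_tilde, full_matrices=False)
    U_hat, V_hat = U[:, :r], Vt[:r, :].T
    # Observed entries: |Omega| = lam * n1 * n2 flat indices, uniform without replacement.
    m = int(round(lam * n1 * n2))
    omega = rng.choice(n1 * n2, size=m, replace=False)
    return D, U_hat, V_hat, omega
```

Solving the weighted programs (4) and (5) on `(D, U_hat, V_hat, omega)` would then require a low-rank solver such as the LR-BPDN implementation of Aravkin et al. (2014), which the paper reports using.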