Affine Rank Minimization via Asymptotic Log-Det Iteratively Reweighted Least Squares

Authors: Sebastian Krämer

JMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Lastly, we analyze several presented aspects empirically in a series of numerical experiments. In particular, allowing for instance sufficiently many iterations, one may even observe a phase transition for generic recoverability at the absolute theoretical minimum."
Researcher Affiliation | Academia | Sebastian Krämer (EMAIL), Institut für Geometrie und Praktische Mathematik, RWTH Aachen University, Aachen, 52062, Germany
Pseudocode | Yes |

Algorithm 1: Asymptotic Minimization
  1: set X^(0) ∈ L^(-1)(y) and γ^(0) > 0
  2: for i = 1, 2, ... do
  3:   X^(i) := Ψ_{γ^(i-1)}(X^(i-1))
  4:   set γ^(i) ≤ γ^(i-1) according to chosen strategy

Algorithm 2: (one-sided, matrix) IRLS-p
  1: set p ∈ [0, 1], X^(0) ∈ L^(-1)(y) and γ^(0) > 0 (cf. Proposition 14)
  2: for i = 1, 2, ... do
  3:   W^(i-1) := (X^(i-1) (X^(i-1))^T + γ^(i-1) I)^(p/2 - 1)
  4:   X^(i) := argmin_{X ∈ L^(-1)(y)} ‖(W^(i-1))^(1/2) X‖_F (cf. (13))
  5:   set γ^(i) ≤ γ^(i-1) according to chosen strategy
Open Source Code | Yes | The MATLAB code behind all results is available as a public repository under the name a-irls.
Open Datasets | No | Each measurement vector is constructed via a (not necessarily sought for) rank-r_rs ∈ ℕ reference solution, which in turn relies on a randomly generated low-rank decomposition: y = L(X^(rs)) ∈ ℝ^ℓ, X^(rs) = Y^(rs) Z^(rs) ∈ ℝ^(n×m), with Y^(rs) ∈ ℝ^(n×r_rs) and Z^(rs) ∈ ℝ^(r_rs×m). All entries of the two components Y^(rs) and Z^(rs) are independent and normally distributed. The paper generates synthetic data for its experiments rather than using publicly available datasets, so no access information for a pre-existing dataset is provided.
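The synthetic instance generation described above can be sketched as follows; the names are illustrative, and the operator L is again assumed to be stored as a dense Gaussian matrix on vec(X).

```python
import numpy as np

def make_instance(n, m, r_rs, ell, seed=0):
    """Random rank-r_rs reference solution X_rs = Y_rs @ Z_rs with i.i.d.
    normal factor entries, and ell measurements y = L(X_rs)."""
    rng = np.random.default_rng(seed)
    Y_rs = rng.standard_normal((n, r_rs))       # left factor
    Z_rs = rng.standard_normal((r_rs, m))       # right factor
    X_rs = Y_rs @ Z_rs                          # reference solution, n x m
    A = rng.standard_normal((ell, n * m))       # L represented on vec(X)
    y = A @ X_rs.reshape(-1, order="F")         # measurement vector in R^ell
    return X_rs, A, y
```

Since X_rs is a product of generic n×r_rs and r_rs×m factors, its rank equals r_rs with probability one.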
Dataset Splits | No | The paper describes generating synthetic data for each experiment rather than using a pre-existing dataset with fixed train/test/validation splits. For example: "Each measurement vector is constructed via a (not necessarily sought for) rank-r_rs ∈ ℕ reference solution, which in turn relies on a randomly generated low-rank decomposition". Dataset split information in the traditional sense is therefore not applicable.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory used for running the experiments.
Software Dependencies | No | The paper mentions "The Matlab code behind all results" but does not specify the version of MATLAB or any other software libraries with their version numbers.
Experiment Setup | Yes | "Based on a sufficiently large starting value γ^(0) > 0, we by default choose γ^(i) = ν γ^(i-1), i ∈ ℕ, where ν < 1 remains constant throughout each single run of an algorithm. If not otherwise specified, the default weight strength, as it is our main interest, is given through p = 0." Experiment 16: "Each constellation is repeated 1000 times for kmax = 12." Experiment 17: "Each constellation is repeated 1000 times for the increased value kmax = 14 (with ν_14 ≈ 1.00001^(-1))." Sensitivity analysis: "For each instance, we lower the meta parameter ν = ν_k < ν_{k-1} (cf. Section 5.1.2), starting with ν_0 = 1.2, and rerun the respective algorithm from the start until the result is not a failure, or, if after too many reruns k > kmax, we give up and thus either achieve a weak or strong failure depending on the result for k = kmax."
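The rerun protocol from the sensitivity analysis can be sketched as below. Both `run` and the lowering rule `shrink` are hypothetical stand-ins, since the exact ν_k sequence is not reproduced here; only the overall shape (lower ν, restart, give up after kmax reruns) follows the quoted description.

```python
def rerun_with_lowered_nu(run, nu0=1.2, shrink=0.9, k_max=12):
    """Lower the meta parameter nu and restart until run(nu) reports
    success, or give up after k_max reruns (a weak/strong failure).
    run(nu) -> bool is a stand-in for one full algorithm run."""
    nu = nu0
    for k in range(k_max + 1):
        if run(nu):
            return k, nu   # number of reruns needed and the final nu
        nu *= shrink       # assumed lowering rule for nu_k
    return None            # failure even at k = k_max
```

As a toy usage, `rerun_with_lowered_nu(lambda nu: nu < 1.0)` lowers ν from 1.2 until it first drops below 1 and reports how many reruns that took.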