Efficient Alternating Minimization with Applications to Weighted Low Rank Approximation
Authors: Zhao Song, Mingquan Ye, Junze Yin, Lichen Zhang
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conducted two experiments showing the performance of our main algorithm (Algorithm 1) and one experiment specifically for our novel high-precision regression algorithm (Algorithm 2). We first present the two experiments for our main algorithm. ... Below, we present our experimental results for the matrix completion problem. ... Table 4: Experimental results of Algorithm 1 for the matrix completion problem. ... Table 5: Experimental results of Algorithm 1 for weighted low-rank approximation. ... Table 6: Experimental results of Algorithm 2. |
| Researcher Affiliation | Academia | Zhao Song Simons Institute for the Theory of Computing, UC Berkeley EMAIL Mingquan Ye University of Illinois Chicago EMAIL Junze Yin Rice University EMAIL Lichen Zhang MIT CSAIL EMAIL |
| Pseudocode | Yes | Algorithm 1 Main Algorithm. The CLIP procedure zeros out rows whose ℓ2 norms are large, and the QR procedure computes the QR decomposition of the matrix and outputs the orthonormal factor Q. Algorithm 2 High precision solver. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code nor provides a link to a code repository. It does not mention code in supplementary materials or via an anonymous review link. |
| Open Datasets | No | The paper describes generating synthetic datasets for experiments: "For each row of the weight matrix W ∈ R^{n×n}_{≥0}, we randomly select 400 entries to be equal to 1 and the remaining 400 entries to be 0. The second experiment is the general weighted low rank approximation where the weight matrix W is constructed via 1_n 1_n^⊤ + G for G being a random matrix with standard Gaussian entries. ... We generate the ground truth M* = XY^⊤ by a pair X, Y ∈ R^{n×k} with random i.i.d. entries scaled by 1/k, and the distributions are Laplace, Gaussian, and uniform, respectively." No publicly available datasets are explicitly used or referenced. |
| Dataset Splits | No | The paper describes the generation of synthetic data but does not mention explicit training, test, or validation splits. It states: "We set n = 800 and k = 100. We generate the noise matrix N as an n × n random matrix with i.i.d. Gaussian entries of zero mean and variance 1/k." The data is generated rather than split from a pre-existing dataset. |
| Hardware Specification | No | The paper mentions running experiments but does not provide specific hardware details such as GPU models, CPU types, or memory specifications. For example, it does not state "NVIDIA A100" or "Intel Xeon E5". |
| Software Dependencies | No | The paper does not specify any software dependencies or their version numbers. For instance, it does not mention "Python 3.8" or "PyTorch 1.9". |
| Experiment Setup | Yes | In both experiments, we set M = M* + N ∈ R^{n×n}, where M* is the rank-k ground truth and N is a higher-rank noise matrix. We set n = 800 and k = 100. We generate the noise matrix N as an n × n random matrix with i.i.d. Gaussian entries of zero mean and variance 1/k. We apply the sketching matrix S ∈ R^{m×n} with m = 150 (we choose the Count Sketch matrix (Charikar et al., 2002)) when solving the regression problems (see Lines 7 and 10 of Algorithm 1). We iterate the alternating minimization steps (see Line 6) for T = 20 iterations. |
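The setup quoted above can be sketched in NumPy. This is a simplified reconstruction, not the authors' code: it uses smaller dimensions than the paper's n = 800, k = 100, m = 150, T = 20, assumes the identity-weight special case (general weighted low-rank approximation requires per-row weighted regressions), and scales the factor entries by 1/√k rather than 1/k for a clearer signal. Only the overall shape — ground truth plus Gaussian noise, CountSketch-compressed least squares, alternating updates of the two factors — follows the description.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m, T = 200, 10, 40, 5  # scaled-down stand-ins for n=800, k=100, m=150, T=20

# Ground truth M* = X Y^T (rank k) plus i.i.d. Gaussian noise of variance 1/k.
X_true = rng.normal(size=(n, k)) / np.sqrt(k)
Y_true = rng.normal(size=(n, k)) / np.sqrt(k)
M = X_true @ Y_true.T + rng.normal(scale=1.0 / np.sqrt(k), size=(n, n))

def count_sketch(m, n, rng):
    """CountSketch matrix S in R^{m x n}: each column has one random +/-1 entry."""
    S = np.zeros((m, n))
    rows = rng.integers(0, m, size=n)
    signs = rng.choice([-1.0, 1.0], size=n)
    S[rows, np.arange(n)] = signs
    return S

# Alternating minimization with sketched least squares:
# fix X, solve min_Y ||S(X Y^T - M)||_F, then fix Y and solve for X.
Xc = rng.normal(size=(n, k))
for _ in range(T):
    S = count_sketch(m, n, rng)
    Yc = np.linalg.lstsq(S @ Xc, S @ M, rcond=None)[0].T
    S = count_sketch(m, n, rng)
    Xc = np.linalg.lstsq(S @ Yc, S @ M.T, rcond=None)[0].T

# Relative reconstruction error of the rank-k iterate against the noisy observation.
err = np.linalg.norm(Xc @ Yc.T - M, "fro") / np.linalg.norm(M, "fro")
```

Each sketched regression costs O(nnz(M) + poly(m, k)) instead of a full n × n solve, which is the efficiency the paper's Algorithm 1 targets; the residual stays bounded away from zero because the rank-k iterate cannot fit the full-rank noise N.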