Fast Tensor Completion via Approximate Richardson Iteration

Authors: Mehrdad Ghadiri, Matthew Fahrbach, Yunbum Kook, Ali Jadbabaie

ICML 2025

Reproducibility Variable Result LLM Response
Research Type Experimental We analyze the convergence rate of our approximate Richardson iteration-based algorithm, and our empirical study shows that it can be 100x faster than direct methods for CP completion on real-world tensors.
Researcher Affiliation Collaboration ¹MIT, ²Google Research, ³Georgia Tech. Correspondence to: Mehrdad Ghadiri <EMAIL>.
Pseudocode Yes Algorithm 1: approx-mini-als
Open Source Code Yes The code is available at https://github.com/fahrbach/fast-tensor-completion.
Open Datasets Yes HYPERSPECTRAL is a 1024 × 1344 × 33 tensor of time-lapse hyperspectral radiance images (Nascimento et al., 2016).
Dataset Splits Yes For a given tensor X and sample ratio p ∈ [0, 1], let $X_\Omega$ be a partially observed tensor with a random p fraction of entries revealed. We fit $X_\Omega$ with a rank-R CP decomposition by minimizing the training relative reconstruction error (RRE) $\|(\widehat{X} - X)_\Omega\|_F / \|X_\Omega\|_F$ using different ALS algorithms. ...the test RRE $\|\widehat{X} - X\|_F / \|X\|_F$ (dashed line) is the loss on the entire (mostly unobserved) tensor
Hardware Specification Yes All experiments are implemented with NumPy (Harris et al., 2020) and Tensorly (Kossaifi et al., 2019) on an Apple M2 chip with 8 GB of RAM.
Software Dependencies Yes All experiments are implemented with NumPy (Harris et al., 2020) and Tensorly (Kossaifi et al., 2019) on an Apple M2 chip with 8 GB of RAM.
Experiment Setup Yes For these experiments, we initialize X = Y = I. ...we set p = 0.1 and sweep over the rank R using direct to demonstrate how the train RRE (solid line) and test RRE $\|\widehat{X} - X\|_F / \|X\|_F$ (dashed line) decrease as a function of R and the ALS step count. In the second column, we fix the parameters (p, R) = (0.1, 16)
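The paper's algorithm is built on an approximate Richardson iteration. As background, a minimal NumPy sketch of the classical (exact) Richardson update for a linear system Ax = b is shown below; this is the textbook method, not the paper's approx-mini-als, and the matrix, step size, and iteration count are illustrative.

```python
import numpy as np

def richardson(A, b, omega, num_iters=100):
    """Classical Richardson iteration for A x = b.

    Update rule: x_{k+1} = x_k + omega * (b - A x_k).
    Converges when the spectral radius of (I - omega * A) is below 1,
    e.g. for SPD A with 0 < omega < 2 / lambda_max(A).
    """
    x = np.zeros_like(b, dtype=float)
    for _ in range(num_iters):
        x = x + omega * (b - A @ x)  # step along the current residual
    return x

# Small SPD example (illustrative, not from the paper).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = richardson(A, b, omega=0.4, num_iters=200)
```

The paper's contribution is to solve each inner least-squares problem only approximately while still controlling the overall convergence rate; the exact update above is the baseline such schemes approximate.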
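The train/test RRE formulas quoted in the table can be computed directly with NumPy. The sketch below is illustrative: the tensor shape, sample ratio, and stand-in reconstruction `X_hat` are placeholders, not the paper's data or model.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 12, 8))       # ground-truth tensor (toy data)
p = 0.1                                     # sample ratio
mask = rng.random(X.shape) < p              # revealed entries Omega
X_hat = X + 0.01 * rng.standard_normal(X.shape)  # stand-in reconstruction

# Train RRE: Frobenius norm of the error restricted to revealed entries,
# relative to the norm of the revealed entries.
train_rre = np.linalg.norm((X_hat - X)[mask]) / np.linalg.norm(X[mask])

# Test RRE: loss over the entire (mostly unobserved) tensor.
test_rre = np.linalg.norm(X_hat - X) / np.linalg.norm(X)
```

With `ord=None`, `np.linalg.norm` returns the 2-norm of the flattened array, which for a full tensor coincides with the Frobenius norm used in the paper.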