Preconditioned Riemannian Gradient Descent Algorithm for Low-Multilinear-Rank Tensor Completion

Authors: Yuanwei Zhang, Fengmiao Bian, Xiaoqun Zhang, Jian-Feng Cai

ICML 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Numerical results highlight the efficiency of PRGD, outperforming state-of-the-art methods on both synthetic data and real-world video inpainting tasks. Code is available at https://github.com/Jiushanqing0418/PRGD-Tucker. 5. Numerical Experiments: In this section, we present several numerical comparisons of our proposed PRGD algorithm with state-of-the-art algorithms, including RGD (Cai et al., 2020; Wang et al., 2023), to demonstrate the effectiveness of preconditioning, and Scaled GD (Tong et al., 2022), for comparison with factorization-based algorithms.
Researcher Affiliation Academia 1School of Mathematical Sciences and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China. 2Department of Mathematics, The Hong Kong University of Science and Technology, Hong Kong, China. 3Shanghai Artificial Intelligence Laboratory, Shanghai, China. Correspondence to: Jian-Feng Cai <EMAIL>, Xiaoqun Zhang <EMAIL>.
Pseudocode Yes Algorithm 1 Preconditioned RGD... Algorithm 2 Spectral Initialization with Trimming
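The report only names the two algorithms; their tensor-specific updates are not reproduced here. As a generic, hypothetical illustration of why a preconditioner can matter (a diagonal/Jacobi preconditioner on an ill-conditioned quadratic, not the paper's Riemannian update), one might sketch:

```python
import numpy as np

def gd(A, b, step, precond=None, tol=1e-8, max_iters=100_000):
    """Gradient descent on f(x) = 0.5 x^T A x - b^T x.

    With precond="jacobi", the gradient is rescaled by 1/diag(A),
    a simple diagonal preconditioner. Illustrative only; this is
    not the paper's PRGD update.
    """
    x = np.zeros_like(b)
    scale = step / np.diag(A) if precond == "jacobi" else step * np.ones_like(b)
    for t in range(1, max_iters + 1):
        grad = A @ x - b
        if np.linalg.norm(grad) < tol:
            return x, t
        x = x - scale * grad
    return x, max_iters

A = np.diag([1.0, 100.0])       # ill-conditioned: condition number 100
b = np.array([1.0, 1.0])
x_plain, iters_plain = gd(A, b, step=0.019)            # step must stay below 2/100
x_pc, iters_pc = gd(A, b, step=1.0, precond="jacobi")  # preconditioning allows a large step
```

On this toy problem the preconditioned run needs far fewer iterations than plain gradient descent, mirroring the kind of speedup the report attributes to PRGD over RGD.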
Open Source Code Yes Code is available at https://github.com/Jiushanqing0418/PRGD-Tucker.
Open Datasets Yes We evaluate the performance of our proposed algorithm on the Tomato video from (Liu et al., 2012), as well as the Akiyo and Hall Monitor videos from the YUV video dataset (http://trace.eas.asu.edu/yuv/).
Dataset Splits No The paper describes reconstructing a tensor from partially observed entries. While a sampling ratio of '10% of the pixels are observed' is mentioned for video inpainting, this refers to the observation mask for the completion problem, not a conventional training/validation/test split for a predictive model.
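The observation mask described above can be sketched as follows; the shape, seed, and variable names are illustrative assumptions, not the authors' code:

```python
import numpy as np

# Hypothetical sketch: a Bernoulli observation mask for tensor
# completion at the 10% sampling ratio quoted for video inpainting.
rng = np.random.default_rng(0)
shape = (144, 176, 30)        # e.g. height x width x frames of a short video
sampling_ratio = 0.10

mask = rng.random(shape) < sampling_ratio  # True where a pixel is observed
ground_truth = rng.random(shape)           # stand-in for the video tensor
observed = mask * ground_truth             # P_Omega(X): zeros where unobserved
```

Since the mask plays the role of the problem data rather than a train/validation/test partition, the "No" classification above follows directly.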
Hardware Specification Yes All simulations are performed in MATLAB R2023b with a 2.6 GHz Intel Xeon ICX Platinum 8358 CPU.
Software Dependencies Yes All simulations are performed in MATLAB R2023b with a 2.6 GHz Intel Xeon ICX Platinum 8358 CPU.
Experiment Setup Yes For our PRGD algorithm, we keep the hyperparameter ϵt constant, chosen from {10⁻³, 5×10⁻⁴, 10⁻⁴, 5×10⁻⁵}. Since no step-size strategy is proposed for the Scaled GD algorithm in (Tong et al., 2022), to make a fair comparison, we tune the optimal constant step size for all tested algorithms and report the results. All simulations are performed in MATLAB R2023b with a 2.6 GHz Intel Xeon ICX Platinum 8358 CPU. ... We count the iteration numbers and runtimes of the tested algorithms until the relative error is less than 10⁻⁴. For each parameter setting, we perform five random trials and report the average results in Figure 2 and Figure 3.
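The stopping rule in the quoted setup (iterate until the relative error drops below 10⁻⁴) can be sketched as follows; the update used here is a toy contraction toward the target, not the PRGD iteration:

```python
import numpy as np

def run_until_tolerance(x0, x_true, step, tol=1e-4, max_iters=10_000):
    """Iterate until ||x_t - x_true||_F / ||x_true||_F < tol.

    Mirrors the paper's stopping rule (relative error below 1e-4);
    `step` is a placeholder for the algorithm's update map.
    """
    x = x0
    for t in range(1, max_iters + 1):
        x = step(x)
        rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
        if rel_err < tol:
            return x, t
    return x, max_iters

x_true = np.ones((5, 5, 5))
x0 = np.zeros_like(x_true)
# Toy update halving the error each iteration, standing in for PRGD.
x_final, iters = run_until_tolerance(x0, x_true, lambda x: x + 0.5 * (x_true - x))
```

Reporting `iters` and the elapsed wall time per run, averaged over five random trials, would reproduce the iteration-count and runtime comparisons described above.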