Flexible Generalized Low-Rank Regularizer for Tensor RPCA

Authors: Zhiyang Gong, Jie Yu, Yutao Hu, Yulong Wang

IJCAI 2025

Reproducibility assessment (Variable / Result / LLM Response):
Research Type: Experimental. "In this section, we present several real-world experiments to substantiate the effectiveness of our models. Additional results are provided in the supplementary material. Evaluation metrics. The peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are used to evaluate the recovery performance."
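The PSNR metric quoted above has a simple closed form, 10·log10(peak² / MSE). A minimal NumPy sketch is below; the function name and the assumption of [0, 1]-scaled images (peak = 1.0) are ours, and SSIM would typically come from a library such as scikit-image rather than being hand-rolled:

```python
import numpy as np

def psnr(x, ref, peak=1.0):
    # PSNR = 10 * log10(peak^2 / MSE); higher means better recovery.
    mse = np.mean((np.asarray(x, float) - np.asarray(ref, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else float("inf")

ref = np.zeros((4, 4))
noisy = ref.copy()
noisy[0, 0] = 0.5           # one corrupted pixel, MSE = 0.25 / 16
print(psnr(noisy, ref))     # 10 * log10(64), about 18.06 dB
```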
Researcher Affiliation: Academia. "Zhiyang Gong (1), Jie Yu (1), Yutao Hu (1), Yulong Wang (1,2,3). (1) College of Informatics, Huazhong Agricultural University, China; (2) Engineering Research Center of Intelligent Technology for Agriculture, Ministry of Education, China; (3) Key Laboratory of Smart Farming Technology for Agricultural Animals, Ministry of Agriculture and Rural Affairs, China."
Pseudocode: Yes. Algorithm 1 (FGTRPCA):
Input: observation tensor M ∈ R^(d1 × d2 × d3) and the model parameter.
1: Initialize L0 = E0 = Z0 = 0, W0 = 1_(d3 × d), W0 = 1_(d1 × d2 × d3), μ0 = 10^(-2), ρ = 1.1, ε = 10^(-6), and t = 0.
2: while not converged do
3:   Update the low-rank tensor L by Eq. (17).
4:   Update the sparse tensor E by Eq. (21).
5:   Update the weights W and W by Eq. (24).
6:   Update the Lagrangian multiplier Z by Eq. (25).
7:   Update the penalty parameter μ by Eq. (26).
8:   Check the convergence condition in Eq. (27).
9: end while
Output: L = L_(t+1), E = E_(t+1).
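The quoted pseudocode follows the standard ADMM/IALM pattern: alternate proximal updates of the low-rank and sparse parts, dual ascent on the multiplier Z, and geometric growth of the penalty μ. The exact updates in Eqs. (17)-(27) appear only in the paper, so the sketch below substitutes the classic matrix-RPCA proximal steps (singular value thresholding for the low-rank part, soft thresholding for the sparse part) purely to illustrate the loop structure and the initialization quoted above; it is not the FGTRPCA update itself:

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0)) @ Vt

def soft(X, tau):
    # Elementwise soft thresholding: proximal operator of the l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0)

def rpca_admm(M, lam=None, mu=1e-2, rho=1.1, tol=1e-6, max_iter=500):
    # Same loop shape as Algorithm 1: alternate L and E updates, dual
    # ascent on Z, geometric growth of the penalty mu, then a stopping test.
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))
    L, E, Z = (np.zeros_like(M) for _ in range(3))
    for _ in range(max_iter):
        L = svt(M - E + Z / mu, 1.0 / mu)    # low-rank update (stand-in for Eq. 17)
        E = soft(M - L + Z / mu, lam / mu)   # sparse update (stand-in for Eq. 21)
        Z = Z + mu * (M - L - E)             # multiplier update
        mu = rho * mu                        # penalty update
        if np.max(np.abs(M - L - E)) < tol:  # convergence check
            break
    return L, E

# Toy check: rank-1 signal plus a few large sparse spikes.
rng = np.random.default_rng(0)
L0 = rng.standard_normal((30, 1)) @ rng.standard_normal((1, 30))
S0 = np.zeros_like(L0)
S0[rng.integers(0, 30, 20), rng.integers(0, 30, 20)] = 5.0
L, E = rpca_admm(L0 + S0)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))  # small relative error
```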
Open Source Code: No. The paper does not provide an explicit statement about the availability of source code or a link to a code repository for the described methodology.
Open Datasets: Yes. "Datasets. For comprehensive comparison, we use 4 widely used tensor data types including color images, grayscale videos, hyperspectral images (HSIs), and multispectral images (MSIs). For color images, we choose 3 widely used datasets including the Berkeley Segmentation Dataset (BSD) [Martin et al., 2001], the Kodak dataset [Kodak, 1993], and the Zhejiang University (ZJU) dataset [Hu et al., 2012]. For grayscale videos, we use 14 grayscale video sequences from the YUV dataset and select the first 100 frames of each sequence. For HSIs, we utilize Cuprite, DCMall, Urban, Indian Pines, and Pavia University (Pavia U), taking the first 50 bands of each HSI dataset for experiments. For MSIs, we randomly select 4 MSIs from the CAVE dataset [Yasuma et al., 2008]."
Dataset Splits: No. "Noising Data Construction. For each color channel of the color image, each frame of the grayscale video, and each band of the HSI and MSI, we add random salt-and-pepper noise at varying noise ratios of 10%, 20%, and 30%." The paper describes the noisy-data construction process but does not specify traditional training/validation/test dataset splits.
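The corruption described above, per-channel (or per-frame/per-band) salt-and-pepper noise at a fixed ratio, can be sketched as follows. The function name and the convention that salt = 1.0 and pepper = 0.0 on [0, 1]-scaled data are our assumptions, not details stated in the paper:

```python
import numpy as np

def add_salt_pepper(band, ratio, rng):
    # Corrupt a fraction `ratio` of the pixels in one channel/frame/band,
    # each corrupted pixel becoming salt (1.0) or pepper (0.0) at random.
    noisy = band.copy()
    mask = rng.random(band.shape) < ratio
    noisy[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))
    return noisy

rng = np.random.default_rng(0)
cube = rng.random((8, 8, 3))  # stand-in for one RGB image scaled to [0, 1]
noisy = np.stack([add_salt_pepper(cube[:, :, k], 0.2, rng)
                  for k in range(cube.shape[-1])], axis=-1)
print(np.mean(noisy != cube))  # fraction of corrupted entries, close to the 0.2 ratio
```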
Hardware Specification: No. The paper does not provide specific hardware details such as GPU/CPU models, memory, or other processing units used for conducting the experiments.
Software Dependencies: No. The paper does not explicitly state any software dependencies with version numbers used for the implementation or experiments.
Experiment Setup: Yes. "For the key parameter in our models, we search over a candidate set and employ a value of 0.5. More detailed parameter settings are described in the supplementary material. Baselines. Our baselines are divided into two categories based on different priors. (1) Low-rankness: TRPCA [Lu et al., 2020], ETRPCA [Gao et al., 2020], and PTRPCA [Yan and Guo, 2024]; (2) Joint low-rankness & smoothness: t-CTV [Wang et al., 2023a] and RTCTV [Huang et al., 2024]. We utilize the parameters recommended by the authors."
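Searching a key parameter over a candidate set, as described above, usually means running the recovery for each candidate value and keeping the one with the best PSNR. A minimal sketch of that selection loop follows; the `recover` routine is a purely hypothetical stand-in (the paper's actual model is FGTRPCA), and the candidate values are assumed for illustration:

```python
import numpy as np

def psnr(x, ref, peak=1.0):
    mse = np.mean((x - ref) ** 2)
    return 10 * np.log10(peak ** 2 / mse) if mse > 0 else float("inf")

def recover(noisy, alpha):
    # Hypothetical stand-in for the recovery model: shrink each pixel
    # toward the global mean with strength alpha.
    return (1 - alpha) * noisy + alpha * noisy.mean()

rng = np.random.default_rng(0)
clean = np.full((16, 16), 0.5)
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

candidates = [0.1, 0.3, 0.5, 0.7, 0.9]  # an assumed candidate set
best = max(candidates, key=lambda a: psnr(recover(noisy, a), clean))
print(best)
```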