Beyond Low-rankness: Guaranteed Matrix Recovery via Modified Nuclear Norm

Authors: Jiangjun Peng, Yisi Luo, Xiangyong Cao, Shuang Xu, Deyu Meng

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate the effectiveness of our method. Code and supplementary material are available at https://github.com/andrew-pengjj/modified_nuclear_norm."
Researcher Affiliation | Academia | Jiangjun Peng¹,², Yisi Luo³, Xiangyong Cao⁴, Shuang Xu¹,², Deyu Meng³. ¹School of Mathematics and Statistics, Northwestern Polytechnical University, Xi'an 710129, China; ²Shenzhen Research Institute of Northwestern Polytechnical University, Shenzhen 518057, China; ³School of Mathematics and Statistics and Ministry of Education Key Lab of Intelligent Networks and Network Security, Xi'an Jiaotong University, Xi'an 710049, China; ⁴School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China. EMAIL, EMAIL, EMAIL
Pseudocode | No | The paper describes the optimization process using gradient descent in Section 2.3 but does not provide a structured pseudocode or algorithm block.
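Since the paper provides no pseudocode, the following is purely a generic illustration of gradient-based nuclear-norm minimization (proximal gradient with singular value thresholding), not the paper's MNN algorithm; all function names and parameter values here are assumptions for the sketch:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def nuclear_norm_descent(M, lam=0.1, lr=1.0, iters=100):
    """Minimize 0.5*||X - M||_F^2 + lam*||X||_* by proximal gradient descent.

    lr = 1.0 matches the Lipschitz constant of the quadratic data-fit term.
    """
    X = np.zeros_like(M)
    for _ in range(iters):
        grad = X - M                       # gradient of the smooth term
        X = svt(X - lr * grad, lr * lam)   # proximal step on the nuclear norm
    return X
```

For this simple data-fit term the iteration collapses to a single thresholding of M; with the paper's composite MNN objective the gradient step would instead involve the transformed matrix.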
Open Source Code | Yes | "Code and supplementary material are available at https://github.com/andrew-pengjj/modified_nuclear_norm."
Open Datasets | Yes | "We selected four commonly used low-rank image datasets, which are HSI data used in [Wang et al., 2023], MSI (https://www.cs.columbia.edu/CAVE/databases/multispectral/), color video sequences (http://trace.eas.asu.edu/yuv/), and MRI and CT images (https://www.cancerimagingarchive.net/). Among them, hyperspectral images, multispectral images, color video sequences, and MRI and CT images contain 5, 11, 10, and 4 images, respectively."
Dataset Splits | No | The paper describes how observed data (M) is generated from ground truth (X0, S0) with random noise or missing values for Robust PCA and Matrix Completion tasks (e.g., "the support set Ω is chosen randomly"). It also varies parameters such as rank, sparsity, and missing ratio for evaluation. However, it does not specify explicit training/validation/test splits of the datasets themselves in the conventional sense required for supervised-learning reproducibility.
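A sketch of how such observations are typically generated for RPCA and MC (the specific noise magnitudes, sparsity level, and missing ratio below are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, r = 2500, 100, 10   # matrix sizes from the paper's setup; rank r is illustrative
rho_s, rho = 0.1, 0.5       # assumed sparsity (RPCA) and missing ratio (MC)

# Ground-truth low-rank matrix X0 of rank r
X0 = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))

# RPCA observation: X0 corrupted by sparse noise S0 on a random support
S0 = np.zeros((n1, n2))
support = rng.random((n1, n2)) < rho_s
S0[support] = rng.uniform(-1.0, 1.0, support.sum())
M_rpca = X0 + S0

# MC observation: only entries on a random support Omega are kept
Omega = rng.random((n1, n2)) > rho
M_mc = X0 * Omega
```

Varying r, rho_s, and rho over grids, as the paper does, then amounts to looping this generator over the corresponding parameter ranges.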
Hardware Specification | Yes | "All simulations are run on a PC with an Intel Core i5-10600KF CPU (4.10 GHz), 32 GB RAM, and a GeForce RTX 3080 GPU."
Software Dependencies | No | The paper mentions the ADMM algorithm but does not list any specific software libraries, frameworks, or their version numbers used in the implementation.
Experiment Setup | Yes | "In all experiments, we set h = w = 50, n1 = hw = 2500, and n2 = 100. We evaluate how the rank r, sparsity ρs (RPCA), and missing ratio ρ (MC) affect performance by varying ρs in (0.01, 0.5) (step 0.01), ρ in (0.01, 0.99) (step 0.02), and r in (1, 50) (step 1). Following Corollary 2.3, we set λ = 1/√(max{n1, n2}) for the TRPCA task and µ = (√n1 + √n2)√σ with σ = 1e-4 for the MC noiseless task. ... under a learning rate of 1e-4, NN and all MNN variants converge stably."
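With the paper's stated sizes n1 = 2500 and n2 = 100, these parameter formulas yield concrete values; a minimal check:

```python
import math

n1, n2 = 2500, 100
lam = 1.0 / math.sqrt(max(n1, n2))                       # lambda = 1/sqrt(max{n1, n2}) for TRPCA
sigma = 1e-4
mu = (math.sqrt(n1) + math.sqrt(n2)) * math.sqrt(sigma)  # mu = (sqrt(n1)+sqrt(n2))*sqrt(sigma) for MC
```

Here lam evaluates to 0.02 and mu to 0.6, so the regularization strengths follow directly from the matrix dimensions and the noise level σ.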