Efficient Differentiable Approximation of Generalized Low-rank Regularization
Authors: Naiqi Li, Yuqiu Xie, Peiyuan Liu, Tao Dai, Yong Jiang, Shu-Tao Xia
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In the experimental study, the proposed method is applied to various tasks, which demonstrates its versatility and efficiency. Code is available at https://github.com/naiqili/EDLRR. 4 Experimental Results In this section, we perform various experiments to demonstrate the versatility, convenience, as well as efficiency of our method. We first examine two classic LRR tasks, i.e., matrix completion and video fore-background separation. One advantage of our proposed method is to conveniently introduce LRR terms into any loss function, particularly deep neural networks. So we further exploit this property in DNN-based image denoising. Experiments about convergence and parameter sensitivity are deferred to the Appendix due to space constraints. All experiments were conducted on a machine equipped with a 3080Ti GPU. |
| Researcher Affiliation | Academia | Naiqi Li¹, Yuqiu Xie¹, Peiyuan Liu¹, Tao Dai², Yong Jiang¹, and Shu-Tao Xia¹ — ¹Tsinghua Shenzhen International Graduate School, ²Shenzhen University. EMAIL, EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: Differentiable approximation of ‖S‖_p^p. Require: S ∈ ℝ^{m×m}, p, sample size N, iteration steps for projection and matrix pseudo-inverse (k1 and k2). Ensure: approximation of ‖S‖_p^p. 1: res ← 0; 2: for i = 1, 2, ..., N do; 3: sample g_i ~ N(0, I); 4: v ← ApproxProject(S, g_i, k1) {approx. P_S[g_i]}; 5: M ← ApproxRoot(SSᵀ, p, k2) {approx. (SSᵀ)^{p/2}}; 6: res ← res + vᵀ M g_i; 7: end for; 8: return res/N |
| Open Source Code | Yes | Code is available at https://github.com/naiqili/EDLRR. |
| Open Datasets | Yes | The datasets in our experiments were the Berkeley segmentation dataset (BSD68) and the Set12 dataset, which were consistent with previous studies. |
| Dataset Splits | No | The paper mentions using a "training set of 400 images" and dividing images into "patches of size 40x40" for reconstruction loss. It also describes different types of data corruption for matrix completion experiments (e.g., "drop 20%", "block", "text"). However, it does not provide explicit train/validation/test splits (e.g., percentages, sample counts, or specific predefined splits with citations) for the mentioned datasets (BSD68, Set12) that would allow for reproducible data partitioning. |
| Hardware Specification | Yes | All experiments were conducted on a machine equipped with a 3080Ti GPU. |
| Software Dependencies | No | The paper mentions "popular deep learning libraries (e.g., PyTorch and TensorFlow)" in the introduction and "deep learning libraries" in the conclusion. However, it does not specify any version numbers for these or any other software components used in their experiments, which is required for reproducible software dependencies. |
| Experiment Setup | Yes | Following the experimental configuration of [Zhang et al., 2017], we conducted our experiments using a training set of 400 images with dimensions of 180×180. Gaussian noise levels were set at σ = 15, 25, 50. Since the low-rank structure is only applicable to the entire image, we utilized the full image as input when calculating the regularization loss. For the reconstruction loss, the settings remained unchanged, with the images divided into patches of size 40×40. |
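The pseudocode quoted in the table (Algorithm 1) estimates the Schatten-p quasi-norm ‖S‖_p^p = tr((SSᵀ)^{p/2}) stochastically, averaging quadratic forms over Gaussian probes. A minimal NumPy sketch is below; note it is a simplification, not the paper's implementation: the paper's iterative `ApproxProject` and `ApproxRoot` subroutines (controlled by k1 and k2) are replaced here with an exact symmetric eigendecomposition, so only the trace estimation itself is stochastic.

```python
import numpy as np

def schatten_p_hutchinson(S, p, N=1000, seed=None):
    """Hutchinson-style estimate of ||S||_p^p = tr((S S^T)^{p/2}).

    Simplified sketch of Algorithm 1: the matrix root (S S^T)^{p/2}
    is computed exactly via eigendecomposition, standing in for the
    paper's iterative ApproxRoot/ApproxProject steps (k1, k2).
    """
    rng = np.random.default_rng(seed)
    m = S.shape[0]
    # exact (S S^T)^{p/2} via symmetric eigendecomposition
    w, V = np.linalg.eigh(S @ S.T)
    M = (V * np.clip(w, 0.0, None) ** (p / 2)) @ V.T
    res = 0.0
    for _ in range(N):
        g = rng.standard_normal(m)   # g_i ~ N(0, I)
        res += g @ M @ g             # E[g^T M g] = tr(M)
    return res / N
```

For p = 2 the estimate should approach ‖S‖_F², and for p = 1 the nuclear norm, which gives a quick sanity check of the estimator's consistency as N grows.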