APR-RD: Complemental Two Steps for Self-Supervised Real Image Denoising
Authors: Hyunjun Kim, Nam Ik Cho
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results demonstrate that our proposed method outperforms the existing state-of-the-art self-supervised denoising methods in real sRGB space. Our experiment was conducted in two phases: (1) training of APR and (2) training of RD. For quantitative results, we compared our method with various non-learning, supervised (Yue et al. 2019; Zamir et al. 2022), unpaired (Chen et al. 2018; Jang et al. 2021), and self-supervised (Neshatavar et al. 2022; Jang et al. 2023; Li et al. 2023) methods based on PSNR and SSIM metrics; this comparison is presented in Table 1. For qualitative comparison, result images of each self-supervised method are shown in Figure 7. Ablation studies cover the regularization factor on the SIDD validation set and TRD on the SIDD validation set and the DND benchmark. |
| Researcher Affiliation | Academia | Hyunjun Kim1, Nam Ik Cho1,2* 1Department of ECE, INMC, Seoul National University, Seoul, Korea 2IPAI, Seoul National University, Seoul, Korea EMAIL, EMAIL |
| Pseudocode | No | The paper describes the methods textually and with figures illustrating processes (Figure 2, 3, 5, 6) but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | Project Page https://github.com/HYK2017/APRRD |
| Open Datasets | Yes | Datasets for Training and Evaluation. We trained and evaluated our method using the SIDD and DND (Plotz and Roth 2017) benchmark, which are real noise datasets obtained from actual camera pipelines. SIDD consists of SIDD medium, validation, and benchmark. |
| Dataset Splits | Yes | We trained the network on the SIDD medium dataset and evaluated it on the other datasets. SIDD validation was evaluated offline using the provided GT. The benchmark results were submitted to their official websites for evaluation. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU or CPU models used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependency details, such as library names with version numbers. |
| Experiment Setup | Yes | Training Settings of APR (BSN). We optimize APR using the Adam optimizer, with the values of β1 and β2 set to 0.9 and 0.999, respectively. The initial learning rate of 0.0003 decreases to zero over 400,000 iterations using a cosine scheduler. Training Settings of RD (NBSN). This NBSN uses the same optimizer settings as BSN, and the initial learning rate of 0.0003 decreases to zero over 200,000 iterations using a cosine scheduler. Ablations on Regularization Factor. NBR2NBR has already reported the trade-off between accurate GT estimation and noise contamination depending on the value of λ. However, since we use a different network, sampling method, and task domain compared to theirs, we investigated the optimal value of λ for our setup. Table 2 demonstrates that the optimization of simple reconstruction (λ = 0) does not yield optimal performance due to GT distortion. It is observed that optimal performance is achieved when λ = 4. After this point, performance gradually declines due to increased noise interference. |
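The training schedule quoted above (Adam with an initial learning rate of 0.0003 decaying to zero under a cosine scheduler) can be sketched as a small stand-alone function. This is a minimal illustration of the stated schedule, not the authors' code; the function name `cosine_lr` is an assumption for this sketch.

```python
import math

def cosine_lr(step, total_steps, base_lr=3e-4):
    """Cosine-annealed learning rate decaying from base_lr to zero.

    Matches the reported setup: lr starts at 0.0003 and follows a
    cosine curve down to zero over the full training run
    (400,000 iterations for APR/BSN, 200,000 for RD/NBSN).
    """
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * step / total_steps))

# Start of APR training: full initial learning rate (0.0003).
print(cosine_lr(0, 400_000))
# End of training: learning rate has decayed to ~0.
print(cosine_lr(400_000, 400_000))
```

In practice this corresponds to pairing `torch.optim.Adam` (with the reported β1 = 0.9, β2 = 0.999) with a cosine scheduler such as `torch.optim.lr_scheduler.CosineAnnealingLR` over the stated iteration count.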