Interpretable Unsupervised Joint Denoising and Enhancement for Real-World Low-Light Scenarios

Authors: Huaqiu Li, Xiaowan Hu, Haoqian Wang

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate the superiority of our method. Code will be available at https://github.com/huaqlili/unsupervised-light-enhance-ICLR2025." [...] "Extensive experiments on multiple real-world datasets demonstrate that our method achieves superior performance across several metrics compared to SOTA approaches." [...] "We conducted tests on four benchmarks: LOLv1 Wei et al. (2018), LOLv2-real Yang et al. (2021), SICE Cai et al. (2018) and SIDD Abdelhamed et al. (2018)." [...] "The experimental results on the LOL dataset are presented in Tab. 1, where our model outperforms most of the compared unpaired and no-reference methods, achieving the highest scores across multiple metrics." [...] "The experimental results on the SICE and SIDD datasets are shown in Tab. 2." [...] "Table 3: Ablation study of the contribution of the three physical priors." [...] "Table 4: Ablation study of the contribution of the denoising designs."
Researcher Affiliation | Academia | Huaqiu Li, Xiaowan Hu, Haoqian Wang; Tsinghua Shenzhen International Graduate School, Tsinghua University; EMAIL
Pseudocode | No | The paper describes methods using mathematical equations and architectural diagrams (e.g., Figure 2), but does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | No | "Code will be available at https://github.com/huaqlili/unsupervised-light-enhance-ICLR2025."
Open Datasets | Yes | "We conducted tests on four benchmarks: LOLv1 Wei et al. (2018), LOLv2-real Yang et al. (2021), SICE Cai et al. (2018) and SIDD Abdelhamed et al. (2018)."
Dataset Splits | No | "Please refer to the supplementary materials for detailed information regarding the datasets, including the corresponding training and testing splits."
Hardware Specification | Yes | "We consistently set the initial learning rate to 1×10⁻⁵ and conducted all experiments on an RTX 3090 GPU."
Software Dependencies | No | The paper mentions implementing the method and training, but does not provide specific version numbers for software dependencies such as Python, PyTorch, or other libraries used.
Experiment Setup | Yes | "To ensure fairness, all experiments were terminated after 100 training epochs. We consistently set the initial learning rate to 1×10⁻⁵ and conducted all experiments on an RTX 3090 GPU. During training, images were randomly cropped into 256×256 patches, with pixel values normalized to the range of (0, 1), and a batch size of 1 was employed." [...] "Therefore, during each iteration, we randomly sample enhancement factors within the range of (1.3, 1.7) to provide the model with a broader range of feature processing options."
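The preprocessing steps quoted in the Experiment Setup row (random 256×256 crops, pixel normalization to (0, 1), and a per-iteration enhancement factor drawn from (1.3, 1.7)) can be sketched as below. This is a minimal illustration, not the authors' code: the function names are hypothetical, and uniform sampling of the enhancement factor is an assumption, since the paper only says the factor is sampled "randomly" within that range.

```python
import numpy as np

def random_crop_normalize(image, patch_size=256, rng=None):
    """Randomly crop a patch_size x patch_size patch from a uint8 image
    and normalize its pixel values to the range (0, 1)."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape[:2]
    top = rng.integers(0, h - patch_size + 1)
    left = rng.integers(0, w - patch_size + 1)
    patch = image[top:top + patch_size, left:left + patch_size]
    return patch.astype(np.float32) / 255.0

def sample_enhancement_factor(rng=None, low=1.3, high=1.7):
    """Draw one enhancement factor per training iteration.
    Uniform sampling over (low, high) is an assumption here."""
    if rng is None:
        rng = np.random.default_rng()
    return float(rng.uniform(low, high))

# Example: one training-style iteration on a synthetic dark image.
rng = np.random.default_rng(0)
img = rng.integers(0, 40, size=(400, 600, 3), dtype=np.uint8)  # low-light stand-in
patch = random_crop_normalize(img, rng=rng)
factor = sample_enhancement_factor(rng=rng)
print(patch.shape, float(patch.min()), float(patch.max()), factor)
```

With batch size 1, as reported, each iteration would process exactly one such patch together with one freshly sampled factor.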