Enhancing Low-Light Images: A Synthetic Data Perspective on Practical and Generalizable Solutions

Authors: Yu Long, Qinghua Lin, Zhihua Wang, Kai Zhang, Jianguo Zhang, Yuming Fang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments across various datasets demonstrate that our synthetic data can indeed effectively enhance existing LLIE deep models, improving both their practicality and generalizability." ... Experiment and Results ... Quantitative Comparison. ... Qualitative Comparison. ... Ablation Study.
Researcher Affiliation | Academia | Yu Long (1,3), Qinghua Lin (2,3), Zhihua Wang (3*), Kai Zhang (4), Jianguo Zhang (5,6), Yuming Fang (7). Affiliations: 1) School of Computer Science, Beijing Institute of Technology; 2) School of Computer Science, Guangdong University of Technology; 3) Department of Engineering, Shenzhen MSU-BIT University; 4) School of Intelligence Science and Technology, Nanjing University, Suzhou; 5) Department of Computer Science and Engineering, Southern University of Science and Technology; 6) Pengcheng Laboratory; 7) School of Information Management, Jiangxi University of Finance and Economics.
Pseudocode | No | The paper describes its methods textually (e.g., in the 'Noise Degradation Synthesis' and 'Low-light Synthesis' sections) but does not include any clearly labeled pseudocode or algorithm blocks with structured, step-by-step formatting.
Open Source Code | Yes | Code: https://github.com/LongYu-LY/SynLLIE
Open Datasets | Yes | "We utilize a combination of ImageNet (Deng et al. 2009) and COCO (Lin et al. 2014), comprising more than one million images, as normal-light images to synthesize low-light images. The performance assessment is carried out using three well-established paired datasets: LOL-v1 (Wei et al. 2018), LOL-v2 (Yang, Nie, and Liu 2019), and the MIT-Adobe FiveK dataset (Bychkovsky et al. 2011), as well as four real-captured unpaired benchmarks: i.e., NPE (Wang et al. 2013), DICM (Lee, Lee, and Kim 2013), LIME (Guo, Li, and Ling 2017) and VV datasets." ... "We utilize the DARK FACE dataset (Yang et al. 2020a) to evaluate the high-level benefits of LLIE methods trained on LOL-v2 and our synthetic data."
Dataset Splits | Yes | "We present visual comparisons of our method against competitive approaches on paired datasets in Figure 3. The images in the top and bottom rows are selected from the LOL-v1 and LOL-v2 test sets, respectively." ... "Additionally, we provide results on unpaired benchmarks in Figure 4, with images in the top and bottom rows selected from the DICM and LIME datasets, respectively." ... "We utilize the DARK FACE dataset (Yang et al. 2020a) to evaluate the high-level benefits of LLIE methods trained on LOL-v2 and our synthetic data."
Hardware Specification | Yes | "All models are trained on two NVIDIA RTX 4090 GPUs."
Software Dependencies | No | The paper mentions software components like "YOLO-v5s" and types of losses ("L1 loss, SSIM, VGG perceptual loss, and UNet GAN loss"), but it does not specify version numbers for these or for any underlying frameworks (e.g., Python, PyTorch, TensorFlow, CUDA).
Experiment Setup | No | The paper specifies that models are "trained using a weighted combination of losses, including L1 loss, SSIM (Wang et al. 2004), VGG perceptual loss (Johnson, Alahi, and Fei-Fei 2016), and UNet GAN loss (Wang et al. 2022b)", that the authors "randomly crop image patches of size 156 x 156 x 3 pixels", and that they adopt "a two-stage training strategy: an initial pre-training phase followed by fine-tuning on the target sets". However, it does not provide concrete hyperparameter values such as learning rate, batch size, or number of epochs.
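Because the paper describes its low-light/noise synthesis and training objective only textually, and reports no hyperparameter values, a minimal NumPy sketch of what such a pipeline could look like is given below. Everything here is an assumption for illustration: the gamma and noise ranges, the loss weights, and the simplified global-statistics SSIM are placeholders of our own, not the authors' calibrated degradation model, and the VGG perceptual and UNet GAN terms are omitted since they require pretrained networks.

```python
import numpy as np

def synthesize_low_light(img, gamma_range=(2.0, 3.5),
                         noise_std_range=(0.01, 0.08), rng=None):
    """Darken a normal-light image and add Gaussian sensor-like noise.

    `img` is a float array in [0, 1] of shape (H, W, 3). The gamma and
    noise ranges are illustrative placeholders, not the paper's values.
    """
    rng = rng or np.random.default_rng()
    gamma = rng.uniform(*gamma_range)       # larger gamma -> darker image
    dark = np.power(img, gamma)
    sigma = rng.uniform(*noise_std_range)   # noise level varies per image
    return np.clip(dark + rng.normal(0.0, sigma, dark.shape), 0.0, 1.0)

def random_crop(img, size=156, rng=None):
    """Crop a size x size x 3 patch (the paper states 156 x 156 x 3)."""
    rng = rng or np.random.default_rng()
    h, w, _ = img.shape
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return img[top:top + size, left:left + size, :]

def combined_loss(pred, target, w_l1=1.0, w_ssim=0.5):
    """Weighted L1 + (1 - SSIM) objective.

    The weights are unreported in the paper, so these values are
    guesses. The SSIM here uses global image statistics (no sliding
    window) to keep the sketch dependency-free.
    """
    l1 = np.abs(pred - target).mean()
    c1, c2 = 0.01 ** 2, 0.03 ** 2   # standard SSIM stability constants
    mu_p, mu_t = pred.mean(), target.mean()
    cov = ((pred - mu_p) * (target - mu_t)).mean()
    ssim = ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / (
        (mu_p ** 2 + mu_t ** 2 + c1) * (pred.var() + target.var() + c2))
    return w_l1 * l1 + w_ssim * (1.0 - ssim)
```

Under the paper's stated two-stage strategy, synthetic pairs produced this way from ImageNet/COCO images would drive the pre-training phase, with fine-tuning then performed on the target sets.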