AGLLDiff: Guiding Diffusion Models Towards Unsupervised Training-free Real-world Low-light Image Enhancement
Authors: Yunlong Lin, Tian Ye, Sixiang Chen, Zhenqi Fu, Yingying Wang, Wenhao Chai, Zhaohu Xing, Wenxue Li, Lei Zhu, Xinghao Ding
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that our approach outperforms the current leading unsupervised LIE methods across benchmarks in terms of distortion-based and perceptual-based metrics, and it performs well even in sophisticated wild degradation. |
| Researcher Affiliation | Academia | 1 Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, China; 2 School of Informatics, Xiamen University, China; 3 The Hong Kong University of Science and Technology (Guangzhou), China; 4 Tsinghua University, China; 5 University of Washington, USA; 6 The Hong Kong University of Science and Technology, Hong Kong SAR, China |
| Pseudocode | Yes | Algorithm 1: Sampling with attribute guidance |
| Open Source Code | No | The paper does not provide an unambiguous statement about releasing source code, nor does it include a link to a code repository. |
| Open Datasets | Yes | Testing Datasets. We construct one synthetic dataset and seven real-world datasets for testing. The LOLv1 (Wei et al. 2018) dataset... The LOLv2-synthetic (Wei et al. 2018) dataset... The SICE benchmark... Moreover, we further assess our method on five commonly used real-world unpaired benchmarks: LIME (Li et al. 2018), NPE (Wang et al. 2013), MEF (Ma, Zeng, and Wang 2015), DICM (Lee, Lee, and Kim 2012), and VV (Vonikakis, Kouskouridas, and Gasteratos 2017). |
| Dataset Splits | Yes | The LOLv1 (Wei et al. 2018) dataset is composed of 500 low-light and normal-light image pairs and divided into 485 training pairs and 15 testing pairs. The LOLv2-synthetic (Wei et al. 2018) dataset is officially divided into two parts, i.e., 900 low-light images for training and 100 low-light images for testing. The SICE benchmark collects 224 normal-light images and 783 low-light images. Each normal-light image corresponds to 2 to 4 low-light images. We adopt the first 50 normal-light images and the corresponding 150 low-light images for testing, and the rest (633 low-light images) for training. |
| Hardware Specification | Yes | The inference process is carried out on the NVIDIA RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions using a 'pre-trained diffusion model' and 'Image Net dataset' but does not specify any software libraries or frameworks with version numbers (e.g., PyTorch 1.x, TensorFlow 2.x, CUDA 11.x). |
| Experiment Setup | Yes | L = λ₁L₁ + λ₂L₂ + λ₃L₃ (Eq. 11), where λ₁, λ₂ and λ₃ are constants controlling the relative importance of the different losses, which are empirically set to 1000, 10 and 0.03 in all experiments, respectively. ... B and A are empirically set to 0.46 and 0.25, respectively. ... s and N are empirically set to 1.8 and 3 in all experiments, respectively. ... The total number of iteration steps is defaulted to 1000. We select the final 10 steps to implement the noise addition and attribute guidance. |
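The experiment-setup cell above can be sketched as a schedule: a 1000-step diffusion sampler where the weighted attribute loss L = λ₁L₁ + λ₂L₂ + λ₃L₃ guides only the final 10 steps. This is a minimal, hypothetical sketch of that schedule using only the hyperparameters quoted from the paper; the function names and structure are illustrative stand-ins, not the authors' code.

```python
# Hyperparameters quoted from the paper's experiment setup.
LAMBDA1, LAMBDA2, LAMBDA3 = 1000.0, 10.0, 0.03  # loss weights in Eq. 11
TOTAL_STEPS = 1000      # total diffusion iteration steps
GUIDED_STEPS = 10       # attribute guidance on the final 10 steps only
GUIDANCE_SCALE = 1.8    # "s" in the paper
INNER_ITERS = 3         # "N" in the paper

def combined_loss(l1: float, l2: float, l3: float) -> float:
    """Weighted sum of the three attribute losses (Eq. 11)."""
    return LAMBDA1 * l1 + LAMBDA2 * l2 + LAMBDA3 * l3

def guidance_schedule(total_steps: int = TOTAL_STEPS,
                      guided_steps: int = GUIDED_STEPS):
    """Return (t, guided) pairs for t = total_steps .. 1.

    `guided` is True only on the final `guided_steps` steps, where the
    paper applies noise addition and attribute guidance.
    """
    return [(t, t <= guided_steps) for t in range(total_steps, 0, -1)]
```

For example, `guidance_schedule()` yields 1000 entries, of which only the last 10 (t = 10 down to 1) are marked as guided; per the setup, each of those guided steps would run `INNER_ITERS` inner updates scaled by `GUIDANCE_SCALE`.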