Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

DPLUT: Unsupervised Low-light Image Enhancement with Lookup Tables and Diffusion Priors

Authors: Yunlong Lin, Zhenqi Fu, Kairun Wen, Tian Ye, Sixiang Chen, Ge Meng, Yingying Wang, Chui Kong, Yue Huang, Xiaotong Tu, Xinghao Ding

AAAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that our approach outperforms state-of-the-art methods in terms of visual quality and efficiency. Extensive evaluations on three benchmark datasets show that DPLUT achieves state-of-the-art performance and can enhance 4K low-light images in real-time. (Table 1: Quantitative comparison on LOL, SICE and LSRW; Table 2: The runtime (ms) comparison at different resolutions; Ablation study)
Researcher Affiliation | Academia | ¹Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, China; ²School of Informatics, Xiamen University, China; ³Tsinghua University, China; ⁴The Hong Kong University of Science and Technology (Guangzhou), China; ⁵Fudan University, China
Pseudocode | No | The paper describes the methodology using text and mathematical equations (e.g., Eqs. 1, 2, and 5–14), but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not explicitly provide a link to a code repository or an affirmative statement about making its source code available. It only mentions that 'The results of all those methods are reproduced by using the official codes with recommended parameters' when referring to the state-of-the-art methods being compared against, not the authors' own code.
Open Datasets | Yes | In order to validate the effectiveness of the proposed method, we use low-light images from LOL (Wei et al. 2018) and SICE-Part2 (Cai, Gu, and Zhang 2018) to train and test the network. For a more convincing comparison, we further extend evaluations on the LSRW dataset (Hai et al. 2023).
Dataset Splits | Yes | The LOL dataset is officially divided into two parts, i.e., 485 low-light images for training and 15 low-light images for testing. SICE consists of 224 normal-light images and 783 low-light images... We use the first 50 normal-light images and corresponding 150 low-light images for testing and the remaining 633 low-light images for training. For a more convincing comparison, we further extend evaluations on the LSRW dataset (Hai et al. 2023), which includes 1000 pairs for training and 50 for testing.
Hardware Specification | Yes | All experiments are conducted on a single Titan RTX GPU.
Software Dependencies | No | All experiments are conducted on a single Titan RTX GPU, and the PyTorch framework is used to construct our networks. We employ an Adam optimizer with β1 = 0.9 and β2 = 0.99; batch size is set to 1. The training iterations of LLUT and NLUT are set to 200 and 300, respectively. The learning rates of LLUT and NLUT are 1e-4 and 1e-5, respectively. While PyTorch is mentioned, no specific version number is provided.
Experiment Setup | Yes | We employ an Adam optimizer with β1 = 0.9 and β2 = 0.99; batch size is set to 1. The training iterations of LLUT and NLUT are set to 200 and 300, respectively. The learning rates of LLUT and NLUT are 1e-4 and 1e-5, respectively. The total number of curve steps for illumination enhancement is set to n = 8. We utilize the pre-trained diffusion model on ImageNet and employ the implicit sampling strategy (DDIM). The total number of DDIM iteration steps is set to 100. We select the final 4 steps to implement the noise addition and removal process. The sizes of LLUT and NLUT are set to 9 and 17, respectively. (Equation 10) L = λ1·Le + Lp + λ2·Lc + λ3·Ls, where λ1, λ2 and λ3 are the weights of the losses, which are empirically set to 10, 5 and 1600 in all experiments. We set v = 0.65 in our experiments.
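The quoted experiment setup can be condensed into a short sketch. This is a minimal illustration in plain Python, not the authors' code: only the optimizer settings, iteration counts, learning rates, and the Eq. 10 weights (λ1 = 10, λ2 = 5, λ3 = 1600) come from the quoted text; the `total_loss` function name and the loss-term values passed to it are hypothetical placeholders.

```python
# Sketch of the quoted training configuration for DPLUT (illustrative only;
# the individual loss terms Le, Lp, Lc, Ls are computed elsewhere in the
# real training loop and are passed in as placeholder scalars here).

# Optimizer and schedule settings quoted from the paper.
ADAM_BETAS = (0.9, 0.99)
BATCH_SIZE = 1
TRAIN_ITERS = {"LLUT": 200, "NLUT": 300}
LEARNING_RATE = {"LLUT": 1e-4, "NLUT": 1e-5}

# Loss weights from Eq. 10: L = lambda1*Le + Lp + lambda2*Lc + lambda3*Ls
LAMBDA_1, LAMBDA_2, LAMBDA_3 = 10, 5, 1600

def total_loss(l_e, l_p, l_c, l_s):
    """Combine the four loss terms with the paper's empirical weights."""
    return LAMBDA_1 * l_e + l_p + LAMBDA_2 * l_c + LAMBDA_3 * l_s

if __name__ == "__main__":
    # Hypothetical loss values, chosen only to show the weighting at work.
    print(total_loss(0.1, 0.2, 0.05, 0.001))
```

The large λ3 = 1600 means the Ls term dominates the total even for small values, which is consistent with the paper reporting the weights as empirically tuned rather than uniform.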