Continuous Exposure Learning for Low-light Image Enhancement using Neural ODEs

Authors: Donggoo Jung, Daehyun Kim, Tae Hyun Kim

ICLR 2025

Reproducibility Variable Result LLM Response
Research Type Experimental First, we quantitatively compare the performance of low-light image enhancement on different datasets. Notably, in the experimental results, CLODE denotes our proposed method without requiring additional user input (the default), while a second variant denotes the result of adjusting the final state T to the user's preferred level, as introduced in Sec. 3.3. In Table 1, we compare the low-light image enhancement performance on the LSRW (Hai et al., 2023) and LOL (Chen Wei, 2018) benchmark datasets in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The term "GT Mean" refers to the evaluation method used by KinD (Zhang et al., 2019) and LLFlow (Wang et al., 2022b), which matches the average value of the output pixels to that of the ground-truth pixels.
Researcher Affiliation Academia Donggoo Jung 1, Daehyun Kim 1, Tae Hyun Kim2 Dept. of Artificial Intelligence1, Dept. of Computer Science2, Hanyang University EMAIL
Pseudocode Yes CLODE (dopri5) uses an early-stop mechanism. It tracks the error at each state, terminating when the error is within the allowable tolerance. For dopri5, k-order solutions (k = 5) are used to calculate the error tolerance Γ_t as follows: Γ_t = atol + rtol · norm(|O^k_t − O^{k−1}_t|), (18) where the k-order solution at time t is denoted as O^k_t and the (k−1)-order solution as O^{k−1}_t. atol is the absolute tolerance, rtol is the relative tolerance, and the norm used is a mixed L-infinity/RMS norm. If |O^k_t − O^{k−1}_t| > Γ_t, the step size is re-adjusted; if it is within Γ_t, the solution is deemed optimal and the process terminates. ODE solvers are designed to find optimal solutions through iterative steps.
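The accept/terminate test in Eq. 18 can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' code: it assumes an RMS norm for simplicity (the paper uses a mixed L-infinity/RMS norm), and the names `rms_norm` and `accept_step` are hypothetical.

```python
import numpy as np

def rms_norm(x):
    # Root-mean-square norm over the error vector.
    return float(np.sqrt(np.mean(np.square(x))))

def accept_step(o_k, o_km1, atol=1e-5, rtol=1e-5):
    """Step-acceptance test sketched from Eq. 18:
    Gamma_t = atol + rtol * norm(|O^k_t - O^{k-1}_t|).
    Returns (accepted, gamma_t): if the solution difference exceeds
    gamma_t the solver would re-adjust the step size; otherwise the
    solution is within tolerance and the process may terminate."""
    err = np.abs(o_k - o_km1)
    err_norm = rms_norm(err)
    gamma_t = atol + rtol * err_norm
    return err_norm <= gamma_t, gamma_t
```

With atol = rtol = 1e-5 (the paper's setting), a difference of a few times 1e-6 between the 5th- and 4th-order solutions is accepted, while a difference on the order of 1e-3 triggers a step-size re-adjustment.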
Open Source Code Yes Code is available at https://github.com/dgjung0220/CLODE.
Open Datasets Yes In this work, we use the LOL (Chen Wei, 2018) and SICE (Cai et al., 2018) Part1 datasets for training. The results of low-light image enhancement are evaluated on the LOL and LSRW (Hai et al., 2023) benchmark datasets. In addition, the SICE (Cai et al., 2018) Part2 dataset is used as a benchmark dataset for evaluation under various exposure conditions. SICE Part2 contains 229 image sequences with different exposure levels, and we use the entire sequences as the evaluation dataset.
Dataset Splits Yes In this work, we use the LOL (Chen Wei, 2018) and SICE (Cai et al., 2018) Part1 datasets for training. The results of low-light image enhancement are evaluated on the LOL and LSRW (Hai et al., 2023) benchmark datasets. In addition, the SICE (Cai et al., 2018) Part2 dataset is used as a benchmark dataset for evaluation under various exposure conditions. SICE Part2 contains 229 image sequences with different exposure levels, and we use the entire sequences as the evaluation dataset.
Hardware Specification Yes The training images are resized to 128×128, and we employ the PyTorch framework on an NVIDIA A6000 GPU with a batch size of 8. The ADAM optimizer is used with default parameters and a fixed learning rate of 1e-5 to optimize the parameters of our network. The weights for the loss function, w_col, w_param, w_spa, w_exp, and w_noise, are set to 20, 200, 1, 10, and 1, respectively, to balance the scale of the losses. Furthermore, we adopt torchdiffeq (Chen, 2018) for the Neural ODEs implementation. The training process is conducted for 100 epochs. Table 6 presents the PSNR/SSIM performance, parameter count, and execution time measured on LSRW (Hai et al., 2023) using an NVIDIA RTX 4090.
Software Dependencies No The training images are resized to 128×128, and we employ the PyTorch framework on an NVIDIA A6000 GPU with a batch size of 8. The ADAM optimizer is used with default parameters and a fixed learning rate of 1e-5 to optimize the parameters of our network. The weights for the loss function, w_col, w_param, w_spa, w_exp, and w_noise, are set to 20, 200, 1, 10, and 1, respectively, to balance the scale of the losses. Furthermore, we adopt torchdiffeq (Chen, 2018) for the Neural ODEs implementation. The training process is conducted for 100 epochs. Explanation: The paper mentions the "PyTorch framework" and "torchdiffeq (Chen, 2018)" but does not provide specific version numbers for these software components.
Experiment Setup Yes The training images are resized to 128×128, and we employ the PyTorch framework on an NVIDIA A6000 GPU with a batch size of 8. The ADAM optimizer is used with default parameters and a fixed learning rate of 1e-5 to optimize the parameters of our network. The weights for the loss function, w_col, w_param, w_spa, w_exp, and w_noise, are set to 20, 200, 1, 10, and 1, respectively, to balance the scale of the losses. Furthermore, we adopt torchdiffeq (Chen, 2018) for the Neural ODEs implementation. The training process is conducted for 100 epochs. ... The maximum allowed number of steps for the adaptive solver is set to 30. In Eq. 18, the relative and absolute tolerances (atol and rtol) for the error-rate calculation are set uniformly to 1e-5.
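The stated loss weighting can be made concrete with a short sketch. This is an illustrative combination of the five reported weights (w_col=20, w_param=200, w_spa=1, w_exp=10, w_noise=1); the `total_loss` function and its dict-based interface are assumptions for clarity, not the authors' implementation.

```python
# Loss weights as reported in the experiment setup.
LOSS_WEIGHTS = {"col": 20.0, "param": 200.0, "spa": 1.0,
                "exp": 10.0, "noise": 1.0}

def total_loss(losses, weights=LOSS_WEIGHTS):
    """Weighted sum of the individual loss terms.

    `losses` maps each term name (col, param, spa, exp, noise)
    to its scalar value; the weights balance the scale of the
    losses as described in the training setup."""
    return sum(weights[k] * losses[k] for k in weights)
```

For example, with every individual term equal to 1.0, the weighted total is 20 + 200 + 1 + 10 + 1 = 232.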