Low-Light Video Enhancement via Spatial-Temporal Consistent Decomposition
Authors: Xiaogang Xu, Kun Zhou, Tao Hu, Jiafei Wu, Ruixing Wang, Hao Peng, Bei Yu
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are conducted on widely recognized LLVE benchmarks, covering diverse scenarios. Our framework consistently outperforms existing methods, establishing new state-of-the-art (SOTA) performance, demonstrated both quantitatively and qualitatively, as shown in Fig. 1. Furthermore, we conducted a large-scale user study involving 100 participants, which showcased the superiority of our results in terms of human subjective perception. |
| Researcher Affiliation | Collaboration | Xiaogang Xu (1,2), Kun Zhou (3), Tao Hu (4), Jiafei Wu (5), Ruixing Wang (6), Hao Peng (7), Bei Yu (1). Affiliations: 1. The Chinese University of Hong Kong; 2. Zhejiang University; 3. The Chinese University of Hong Kong (Shenzhen); 4. PICO, Bytedance; 5. The University of Hong Kong; 6. DJI Technology Co., Ltd.; 7. Zhejiang Normal University |
| Pseudocode | No | The paper describes the methodology using natural language and mathematical formulations, but it does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | Our evaluation is conducted on four publicly available datasets, which encompass a wide range of real-world videos with diverse motion patterns and degradations, including SMID [Chen et al., 2019], SDSD [Wang et al., 2021], DID [Fu et al., 2023a], and DAVIS [Pont-Tuset et al., 2017]. |
| Dataset Splits | No | The paper mentions 'all baselines are trained on our unified data split for a fair comparison' but does not provide specific details on the split percentages, sample counts, or methodology for creating these splits. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions using the DKM method for computing correspondences and states that 'pre-trained weights of DKM' were used, but it does not specify version numbers for DKM or any other software dependencies like programming languages or deep learning frameworks. |
| Experiment Setup | Yes | We conducted experiments on all datasets using the same network structure. T is set to 5 in the experiments. All modules were trained end-to-end, with the learning rate initialized at 4e-4 for all layers and adapted by a cosine learning-rate scheduler. The batch size was 4. |
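The reported training setup (initial learning rate 4e-4, cosine scheduler) can be illustrated with a minimal, framework-free sketch. This is an assumption about the schedule's exact form, since the paper only names a "cosine learning scheduler"; the standard cosine-annealing formula is shown, and the `total_steps` and `min_lr` values are hypothetical placeholders.

```python
import math

def cosine_lr(step, total_steps, base_lr=4e-4, min_lr=0.0):
    """Cosine-annealed learning rate, decaying from base_lr to min_lr.

    Follows the common cosine-annealing formula:
        lr(t) = min_lr + 0.5 * (base_lr - min_lr) * (1 + cos(pi * t / T))
    """
    progress = step / max(total_steps - 1, 1)  # in [0, 1]
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

# Hypothetical 100-step schedule starting at the paper's 4e-4.
schedule = [cosine_lr(s, 100) for s in range(100)]
```

The schedule starts exactly at 4e-4 and decays smoothly to `min_lr`; in a full training script, `cosine_lr(step, ...)` would be assigned to the optimizer's learning rate each iteration.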