EvHDR-GS: Event-guided HDR Video Reconstruction with 3D Gaussian Splatting
Authors: Zehao Chen, Zhan Lu, De Ma, Huajin Tang, Xudong Jiang, Qian Zheng, Gang Pan
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experimental results on both synthetic and real-world datasets demonstrate that the proposed method achieves state-of-the-art performance." The Experiments section details the setup: "We compare the proposed EvHDR-GS to several state-of-the-art HDR imaging methods, including four frame-based HDR image reconstruction methods (Liu et al. 2020; Chung and Cho 2023; Xu et al. 2024; Cui et al. 2024a), a colored variant of an event-based HDR video reconstruction method (Rebecq et al. 2019b), and an event-guided HDR video reconstruction method (Yang et al. 2023). We report quantitative results in Table 1." Evaluations are conducted on both synthetic and real-world data. |
| Researcher Affiliation | Academia | 1 The State Key Lab of Brain-Machine Intelligence, Zhejiang University; 2 College of Computer Science and Technology, Zhejiang University; 3 School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore |
| Pseudocode | No | The paper describes the proposed method and its components using text and mathematical equations, but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing the source code for the methodology described in this paper, nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | Our synthetic data comes from two components: one part consists of static HDR videos from the DeepHDRVideo dataset (Chen et al. 2021a), and the other part is rendered using Blender files published by HDR-NeRF (Huang et al. 2022) and HDR-plenoxels (Jun-Seong et al. 2022). |
| Dataset Splits | No | The paper mentions the synthetic data comprises "10 scenes, including indoor and outdoor environments, with 2,000 HDR frames" and states "We collected real-world data using the DAVIS 346C sensor under 4 scenes." However, it does not provide specific percentages, sample counts, or a detailed methodology for splitting these datasets into training, validation, or test sets. |
| Hardware Specification | Yes | All experiments are conducted on one single NVIDIA A5000. |
| Software Dependencies | No | We implement our method based on 3DGS (Kerbl et al. 2023) and add HDR outputs in the SH module for the original 3DGS. We adopt the same hyperparameter settings as described in 3DGS. |
| Experiment Setup | Yes | We adopt the same hyperparameter settings as described in 3DGS. We set the learning rate of this term a to 0.001. ... λ is a hyperparameter which is set as 0.2 across all evaluations. ... µ is the compression level set at 5000, and E is the scaled HDR pixel value within a range of [0, 1]. |
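The quoted hyperparameters (compression level µ = 5000, scaled HDR pixel value E in [0, 1]) match the standard µ-law tone-mapping curve commonly used to compare HDR outputs in a displayable range. The sketch below is an illustrative reconstruction of that formula under this assumption, not the paper's released code; the function name `mu_law_tonemap` is our own.

```python
import numpy as np

MU = 5000.0  # compression level quoted in the paper's experiment setup


def mu_law_tonemap(E: np.ndarray, mu: float = MU) -> np.ndarray:
    """Compress a scaled HDR image E in [0, 1] with the mu-law curve.

    T(E) = log(1 + mu * E) / log(1 + mu), which maps 0 -> 0 and 1 -> 1
    while strongly expanding dark regions for display and metric computation.
    """
    return np.log(1.0 + mu * E) / np.log(1.0 + mu)


# Example: tone-map a few scaled HDR values.
hdr = np.array([0.0, 0.01, 0.1, 1.0])
ldr = mu_law_tonemap(hdr)
```

With µ = 5000 the curve is highly compressive: even a small linear value like 0.01 is lifted close to the middle of the output range, which is why this mapping is a common choice before computing PSNR/SSIM on HDR reconstructions.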