EDyGS: Event Enhanced Dynamic 3D Radiance Fields from Blurry Monocular Video
Authors: Mengxu Lu, Zehao Chen, Yan Liu, De Ma, Huajin Tang, Qian Zheng, Gang Pan
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct both quantitative and qualitative experiments on synthetic and real-world data. Experimental results demonstrate that EDyGS effectively handles blurry inputs in dynamic scenes. |
| Researcher Affiliation | Academia | Mengxu Lu^{1,2}, Zehao Chen^{1,2}, Yan Liu^{1,2}, De Ma^{1,2}, Huajin Tang^{1,2}, Qian Zheng^{1,2} and Gang Pan^{1,2}. ^1 The State Key Lab of Brain-Machine Intelligence, Zhejiang University; ^2 College of Computer Science and Technology, Zhejiang University |
| Pseudocode | No | The paper describes the methodology using textual explanations and mathematical formulas but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Source code and Supplementary Materials are available at: https://github.com/zju-bmi-lab/EDyGS |
| Open Datasets | Yes | Synthetic Data. Since no publicly available dynamic scene datasets contain both blurry images and event streams, we collect four synthetic scenarios. The data for these scenes is sourced from [Wu et al., 2024a]. Real-world Data. We use the DAVIS346C [Taverni et al., 2018] to capture six real-world dynamic scenes with both RGB frames and spatial-temporal aligned event streams. |
| Dataset Splits | No | The paper describes the datasets used (synthetic and real-world) and mentions evaluating on 'input view' and 'novel view', but does not explicitly provide details about specific training, validation, or test splits (e.g., percentages, sample counts, or predefined split references). |
| Hardware Specification | Yes | DyBluRF...its training and rendering process are relatively slow (2 days for training on an RTX A6000 GPU). We integrate event streams...with 3DGS (1.5 hours for training on an RTX 3090 GPU). |
| Software Dependencies | No | The paper mentions using COLMAP [Schonberger and Frahm, 2016] and the EDI model [Pan et al., 2019] but does not provide specific version numbers for these or any other software libraries or tools. |
| Experiment Setup | Yes | $\mathcal{L} = (1-\lambda)\mathcal{L}_1 + \lambda\mathcal{L}_{\text{D-SSIM}}$; $\mathcal{L}_{\text{event}} = \mathcal{L}\big(\ln(\hat{C}_{i+1}) - \ln(\hat{C}_i),\ \textstyle\sum_{e \in E_i^{i+1}} e\,\Theta\big)$; $\mathcal{L}_{\text{edi}} = \mathcal{L}(\hat{C}_i, \hat{B}_i) + \mathcal{L}(\hat{C}_i^{st}, \hat{B}_i(1-M_i))$; $\mathcal{L}_{\text{reg}} = \lVert M_i \rVert_1$, where $\Theta$ is the threshold of the event camera, and $\lambda$ is 0.2. |
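The loss terms quoted above can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: the distance `l1`, the helper names (`event_loss`, `edi_loss`, `reg_loss`), and the use of a mean-reduced L1 distance are all assumptions made here for clarity; in the paper $\mathcal{L}$ combines L1 and D-SSIM terms, and $\hat{C}_i^{st}$ and $M_i$ denote the static-branch rendering and motion mask.

```python
import numpy as np

LAMBDA = 0.2  # weight between L1 and D-SSIM terms, as stated in the paper

def l1(a, b):
    """Mean absolute error; stand-in for the paper's distance L (assumption)."""
    return float(np.abs(a - b).mean())

def event_loss(C_i, C_next, event_sum, theta, eps=1e-6):
    """Log-intensity change between consecutive rendered sharp frames
    should match the accumulated event count scaled by threshold theta."""
    pred = np.log(C_next + eps) - np.log(C_i + eps)
    return l1(pred, event_sum * theta)

def edi_loss(C_i, C_static, B_i, M_i):
    """Rendered sharp frame vs. blurry input, plus a static-branch term
    where the blurry image is attenuated by (1 - M_i), M_i a motion mask."""
    return l1(C_i, B_i) + l1(C_static, B_i * (1.0 - M_i))

def reg_loss(M_i):
    """L1 sparsity regularization on the motion mask."""
    return float(np.abs(M_i).mean())
```

With identical consecutive frames and zero accumulated events, `event_loss` is zero; a zero mask makes `reg_loss` vanish, matching the intent of each term.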