Online 3D Gaussian Splatting Modeling with Novel View Selection
Authors: Byeonggwon Lee, Junkyu Park, Khang Truong Giang, Soohwan Song
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that our method outperforms state-of-the-art methods, delivering exceptional performance in complex outdoor scenes. |
| Researcher Affiliation | Collaboration | ¹Department of Computer Science and Artificial Intelligence, Dongguk University, Seoul, Korea; ²42dot, Seongnam-si, Gyeonggi-do, Korea |
| Pseudocode | No | The paper describes methods in paragraph text and equations but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code, nor does it include a link to a code repository. |
| Open Datasets | Yes | The proposed method was evaluated using two benchmarks for indoor scenes [Sturm et al., 2012] [Straub et al., 2019]. To highlight its generalization capability, we also extended the evaluation to include challenging outdoor scenarios [Song et al., 2021] [Knapitsch et al., 2017]. |
| Dataset Splits | No | The paper mentions using datasets for indoor and outdoor scenes and states "following the same experimental setups as other methods" but does not explicitly provide specific training/test/validation split information within the paper. |
| Hardware Specification | Yes | All experiments were carried out on a desktop equipped with an AMD Ryzen 9 7900X 12-core processor and an NVIDIA GeForce RTX 4090 GPU. |
| Software Dependencies | No | The paper mentions "Training and evaluation were performed in PyTorch with CUDA" but does not specify version numbers for PyTorch or CUDA. |
| Experiment Setup | Yes | While most hyperparameters follow the original 3DGS setting [Kerbl et al., 2023], we empirically set λ_L1, λ_SSIM, λ_depth, and λ_smooth to 0.95, 0.2, 0.2, and 0.1, respectively, for the loss function of Gaussian training. |
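The reported loss weights can be sketched as a simple weighted combination. This is a minimal illustration, not the authors' implementation: the paper states the four λ values, but the exact combination form and the term names used below are assumptions.

```python
# Loss weights as reported in the paper (see the Experiment Setup row above).
LAMBDA_L1 = 0.95
LAMBDA_SSIM = 0.2
LAMBDA_DEPTH = 0.2
LAMBDA_SMOOTH = 0.1


def total_loss(l1_loss, ssim_loss, depth_loss, smooth_loss):
    """Hypothetical total training loss: a plain linear combination of the
    four terms is assumed here; the paper only specifies the weights."""
    return (LAMBDA_L1 * l1_loss
            + LAMBDA_SSIM * ssim_loss
            + LAMBDA_DEPTH * depth_loss
            + LAMBDA_SMOOTH * smooth_loss)
```

With all four terms equal to 1.0, the combined loss is 0.95 + 0.2 + 0.2 + 0.1 = 1.45, which makes the relative weighting of the photometric (L1) term over the regularizers easy to check.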