GURecon: Learning Detailed 3D Geometric Uncertainties for Neural Surface Reconstruction
Authors: Zesong Yang, Ru Zhang, Jiale Shi, Zixiang Ai, Boming Zhao, Hujun Bao, Luwei Yang, Zhaopeng Cui
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on various datasets demonstrate the superiority of GURecon in modeling 3D geometric uncertainty, as well as its plug-and-play extension to various neural surface representations and improvement on downstream tasks such as incremental reconstruction. ... In this section, we first assess the efficacy of GURecon in uncertainty quantification. Then we perform ablation studies to validate each component within our framework, demonstrating its versatility across different numbers of training images and various neural surface models. |
| Researcher Affiliation | Academia | 1State Key Lab of CAD & CG, Zhejiang University, 2Simon Fraser University |
| Pseudocode | No | The paper describes the method using mathematical equations and textual explanations, but no explicit pseudocode or algorithm blocks are present. |
| Open Source Code | No | The paper does not explicitly state that source code for the methodology is released, nor does it provide any links to a code repository. |
| Open Datasets | Yes | Datasets. We evaluate our method over three widely used benchmark datasets: the DTU dataset (Jensen et al. 2014), the Blended MVS dataset (Yao et al. 2020), and the Tanks and Temples (TNT) dataset (Knapitsch et al. 2017). |
| Dataset Splits | Yes | Therefore, based on the spatial distribution within each scene, we uniformly sample a sparse number of views for the training (<=6) and test (<=3) sets in the DTU dataset; for the Blended MVS and TNT datasets, we uniformly sample 25% of the images for the training set and choose 4 adjacent images as the test set. |
| Hardware Specification | Yes | For each scene, we sample 1024 rays per batch and train for 50k iterations, which takes nearly 30 minutes on an NVIDIA RTX 3090. |
| Software Dependencies | No | The paper mentions using 'NeuS (Wang et al. 2021)' and 'hash-based NeuS (Zhao et al. 2022)' as neural surface representations, but it does not specify any version numbers for these or other software libraries used in their implementation. |
| Experiment Setup | Yes | Implementation Details. ...For each scene, we sample 1024 rays per batch and train for 50k iterations... We set α1 = 0.1, α2 = 1.0, α3 = 0.1 and α4 = 0.1. |