Matérn Kernels for Tunable Implicit Surface Reconstruction

Authors: Maximilian Weiherer, Bernhard Egger

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental evaluation reveals that Matérn 1/2 and 3/2 are extremely competitive, outperforming the arc-cosine kernel while being significantly easier to implement (essentially two lines of standard PyTorch code), faster to compute, and scalable. In addition to geometry, we show that Matérn kernels surpass the arc-cosine kernel in reconstructing other high-frequency scene attributes, such as texture. Finally, we demonstrate that learnable Matérn kernels (1) outperform the data-dependent arc-cosine kernel (as implemented in the original NKF framework) while being more than four times faster to train, and (2) perform almost on par with highly sophisticated and well-engineered NKSR in the noise-free case while having a more than five times shorter training time.
Researcher Affiliation | Academia | Maximilian Weiherer & Bernhard Egger, Department of Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, EMAIL
Pseudocode | No | The paper describes mathematical derivations, theoretical analysis, and experimental results, but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code available at: https://github.com/mweiherer/matern-surface-reconstruction.
Open Datasets | Yes | We systematically evaluated the effectiveness of Matérn kernels in the context of implicit surface reconstruction, presenting results on ShapeNet (Chang et al., 2015) and the Surface Reconstruction Benchmark (SRB; Berger et al. (2013)) in Section 4.1. Moreover, Section 4.2 demonstrates Matérn kernels' ability to reconstruct high-frequency textures on the Google Scanned Objects (GSO; Downs et al. (2022)) dataset and Objaverse (Deitke et al., 2023). ... They can be downloaded from: https://app.gazebosim.org/dashboard.
Dataset Splits | Yes | We compare Matérn kernels in a sparse setting against the arc-cosine kernel on ShapeNet (using the train/val/test split provided by Mescheder et al. (2019)). To do so, we randomly sample m = 1,000 surface points with corresponding normals for each shape. ... We evaluate Matérn kernels' robustness against different noise levels, σ ∈ {0, 0.0025, 0.005}, on a subset of the ShapeNet dataset which includes approximately 1,700 shapes. To construct the dataset, we downsampled each ShapeNet category to include only 5% of the shapes.
Hardware Specification | Yes | Runtime is measured on a single NVIDIA V100. ... Runtime is measured on a single NVIDIA A100 with a batch size of one to ensure fair comparison.
Software Dependencies | No | We implemented Matérn kernels in PyTorch and took the official CUDA implementation of the arc-cosine kernel from NS, eventually integrated into a unified framework to ensure fair comparison. ... We used the official implementation provided here: https://github.com/fwilliams/neural-splines. ... No specific version numbers for PyTorch, CUDA, or other libraries/solvers are explicitly mentioned in the text.
Experiment Setup | Yes | To do so, we randomly sample m = 1,000 surface points with corresponding normals for each shape. ... We employ PyTorch's built-in Cholesky solver to numerically stably solve for α and set ϵ = 0.005 for all experiments. ... We used 15,000 Nyström samples and did the same parameter sweep over the regularization parameter, λ ∈ {0, 10^-13, 10^-12, 10^-11, 10^-10}. For the rest of the parameters (not mentioned in the paper), default values provided in the repository have been used, except for the grid size which we set to 512. ... We set q = 32 for our experiments. ... We sample m = 1,000 surface points and corresponding normals for each shape and set h = 1 for all Matérn kernels. ... Runtime is measured on a single NVIDIA A100 with a batch size of one to ensure fair comparison.
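The response above quotes the paper's claim that Matérn 1/2 and 3/2 kernels are "essentially two lines of standard PyTorch code". As a hedged illustration of why, here is a plain-Python sketch of the standard closed forms for these two kernels on a scalar distance r with length scale h (matching the h = 1 setting quoted in the setup); the function names are ours, not the paper's, and a PyTorch version would simply apply the same formulas elementwise to a pairwise-distance tensor.

```python
import math

def matern_1_2(r, h=1.0):
    # Matern nu = 1/2 (exponential) kernel on distance r >= 0, length scale h:
    # k(r) = exp(-r / h)
    return math.exp(-r / h)

def matern_3_2(r, h=1.0):
    # Matern nu = 3/2 kernel:
    # k(r) = (1 + sqrt(3) * r / h) * exp(-sqrt(3) * r / h)
    s = math.sqrt(3.0) * r / h
    return (1.0 + s) * math.exp(-s)
```

In PyTorch the same formulas would read, e.g., `torch.exp(-dists / h)` applied to a precomputed pairwise-distance matrix, which is consistent with the "two lines" characterization; the exact implementation in the authors' repository may differ.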
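The setup also quotes solving for the interpolation coefficients α with "PyTorch's built-in Cholesky solver" and regularization ϵ = 0.005. A minimal dependency-free sketch of that step, assuming the usual kernel-interpolation form (K + ϵI)α = y: the hand-rolled Cholesky below is for illustration on small dense systems only, and in practice one would use PyTorch's linear-algebra routines instead.

```python
import math

def cholesky(A):
    # Lower-triangular Cholesky factor L of a symmetric positive-definite A.
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def solve_alpha(K, y, eps=0.005):
    # Solve (K + eps * I) alpha = y via Cholesky factorization,
    # then forward and backward substitution.
    n = len(K)
    A = [[K[i][j] + (eps if i == j else 0.0) for j in range(n)] for i in range(n)]
    L = cholesky(A)
    # Forward substitution: L z = y.
    z = [0.0] * n
    for i in range(n):
        z[i] = (y[i] - sum(L[i][k] * z[k] for k in range(i))) / L[i][i]
    # Backward substitution: L^T alpha = z.
    alpha = [0.0] * n
    for i in reversed(range(n)):
        alpha[i] = (z[i] - sum(L[k][i] * alpha[k] for k in range(i + 1, n))) / L[i][i]
    return alpha
```

Here ϵ plays the role of the quoted regularizer (default 0.005); the kernel matrix K would be built from the chosen Matérn kernel evaluated on pairwise distances between the sampled surface points.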