AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction

Authors: Jingnan Gao, Zhuo Chen, Xiaokang Yang, Yichao Yan

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that our method boosts the quality of SDF-based methods by a great scale in both geometry reconstruction and novel-view synthesis. ... 4 EXPERIMENTS ... 4.1 EXPERIMENT SETUPS ... 4.2 COMPARISONS ... 4.3 ABLATION STUDIES
Researcher Affiliation | Academia | Jingnan Gao, Zhuo Chen, Xiaokang Yang, Yichao Yan — MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University.
Pseudocode | No | The paper describes methods in prose and mathematical equations (e.g., equations 1-18) but does not contain explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper provides a project website link (https://g-1nonly.github.io/AniSDF_Website/) but does not contain an explicit statement about the release of source code or a direct link to a code repository for the methodology described.
Open Datasets | Yes | In our experiment, we use NeRF Synthetic dataset Mildenhall et al. (2020), DTU dataset Wang et al. (2021a), Shiny Blender dataset Verbin et al. (2022), Shelly dataset Wang et al. (2023d) for training and evaluation.
Dataset Splits | No | The paper states that the NeRF Synthetic, DTU, Shiny Blender, and Shelly datasets were used for 'training and evaluation' but does not provide specific details on the training, validation, or test splits used for these datasets.
Hardware Specification | Yes | Our model is trained using a single Tesla V100 for around 2-3 hours
Software Dependencies | No | The paper mentions using 'marching cubes as the mesh extraction tools' but does not specify any software names with version numbers or specific library dependencies used for the experiments.
Experiment Setup | Yes | Our model is trained using a single Tesla V100 for around 2-3 hours and the hyperparameters for the loss function in our method are set to be: λ1 = 0.1, λ2 = 0.001, λ3 = 0.001, λ4 = 0.01. Our coarse-grid is from level 4 to 10 (m), and fine-grid is from 10 (m) to 16 (L), both with 2 as feature dimension. ... Both the geometry network MLP and View.MLP have 2 hidden layers with 64 neurons. The Ref.MLP has 2 hidden layers with 128 neurons and the Weight.MLP has 1 hidden layer with 64 neurons.
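The quoted setup fixes the MLP depths and widths (2x64 geometry and view heads, 2x128 reflection head, 1x64 weight head) and the hash-grid levels (coarse levels 4-10, fine levels 10-16, feature dimension 2). A minimal PyTorch sketch of heads with those sizes is below; the input widths, the 15-dimensional geometry feature, and the concatenation layout are assumptions for illustration only, not taken from the paper.

```python
import torch
import torch.nn as nn

def make_mlp(in_dim: int, hidden: int, n_hidden: int, out_dim: int) -> nn.Sequential:
    """Small ReLU MLP with `n_hidden` hidden layers of width `hidden`."""
    layers = [nn.Linear(in_dim, hidden), nn.ReLU()]
    for _ in range(n_hidden - 1):
        layers += [nn.Linear(hidden, hidden), nn.ReLU()]
    layers.append(nn.Linear(hidden, out_dim))
    return nn.Sequential(*layers)

# Grid feature widths: 7 coarse levels (4..10) and 7 fine levels (10..16),
# each with feature dimension 2, per the quoted setup.
COARSE_DIM = 7 * 2
FINE_DIM = 7 * 2
GEO_FEAT = 15  # hypothetical feature width passed to the color heads

# Head sizes follow the quoted setup; input widths are assumptions.
geometry_mlp = make_mlp(3 + COARSE_DIM + FINE_DIM, 64, 2, 1 + GEO_FEAT)  # SDF + feature
view_mlp = make_mlp(GEO_FEAT + 3, 64, 2, 3)    # feature + view direction -> RGB
ref_mlp = make_mlp(GEO_FEAT + 3, 128, 2, 3)    # feature + reflected direction -> RGB
weight_mlp = make_mlp(GEO_FEAT, 64, 1, 1)      # feature -> blend weight

# Forward a batch of 8 points through the geometry head.
x = torch.randn(8, 3 + COARSE_DIM + FINE_DIM)
sdf_and_feat = geometry_mlp(x)
print(sdf_and_feat.shape)  # torch.Size([8, 16])
```

The hash-grid encodings themselves are omitted here; in practice they would come from a multi-resolution hash encoder such as the one in Instant-NGP, which the feature dimensions above are sized for.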