Sharpening Neural Implicit Functions with Frequency Consolidation Priors
Authors: Chao Chen, Yu-Shen Liu, Zhizhong Han
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our evaluations on widely used benchmarks and real scenes show that our method can recover high-frequency components and produce more accurate surfaces than the latest methods. We evaluate on the ShapeNet (Chang et al. 2015), ABC (Koch et al. 2019), and ScanNet (Dai et al. 2017) datasets. |
| Researcher Affiliation | Academia | 1School of Software, Tsinghua University, Beijing, China 2Department of Computer Science, Wayne State University, Detroit, USA EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes the methodology using text and figures, but no specific pseudocode or algorithm block is present. |
| Open Source Code | Yes | Code https://github.com/chenchao15/FCP |
| Open Datasets | Yes | We evaluate our method on the ShapeNet (Chang et al. 2015), ABC (Koch et al. 2019), and ScanNet (Dai et al. 2017) datasets. |
| Dataset Splits | Yes | For ABC and ScanNet, we follow the train/test splits from Points2Surf (Erler et al. 2020) and Neural Part Priors (Bokhovkin and Dai 2022). |
| Hardware Specification | No | The paper does not specify any particular hardware used for running the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions the use of the Adam optimizer, ReLU activation, the marching cubes algorithm, the Fast Fourier Transform (FFT), and the CLIP image encoder, but it does not provide specific version numbers for any software libraries or frameworks (e.g., PyTorch, TensorFlow, Python versions). |
| Experiment Setup | Yes | We set σ1 to 8 for full-space sampling, allowing the network to perceive a large space and cover various shape variations. σ2 is set to 0.2, enabling queries to be sampled close to the surface. These two types of queries are sampled with a one-to-one weighting ratio, dynamically sampling 16,384 queries in each iteration. We learn e_L and e_F with 3 fully connected layers, each with 128 hidden units and a ReLU activation. We employ two SDF-decoder networks similar to DeepSDF (Park et al. 2019). The Adam optimizer is used with an initial embedding learning rate of 0.0005 and an SDF-decoder learning rate of 0.001, both decreased by a factor of 0.5 every 500 epochs. We train our model for 2,000 epochs. During test-time optimization, we overfit f_L on a low-frequency observation for 800 iterations with a learning rate of 0.005. |
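The query-sampling mix and step-decay schedule quoted above can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: the function names `sample_queries` and `lr_at_epoch` are hypothetical, and the exact sampling distributions (Gaussian full-space queries, Gaussian perturbations of surface points) are assumptions consistent with the σ1/σ2 description.

```python
import numpy as np

def sample_queries(surface_points, n_total=16384, sigma_full=8.0,
                   sigma_near=0.2, rng=None):
    """Sample queries with a one-to-one mix of full-space and
    near-surface points (sigma1 = 8, sigma2 = 0.2 in the paper).
    Distributions here are illustrative assumptions."""
    rng = np.random.default_rng() if rng is None else rng
    n_half = n_total // 2
    # Full-space queries: broad Gaussian so the network perceives a large space.
    full_space = rng.normal(0.0, sigma_full, size=(n_half, 3))
    # Near-surface queries: perturb random surface points by sigma2.
    idx = rng.integers(0, len(surface_points), size=n_half)
    near_surface = surface_points[idx] + rng.normal(0.0, sigma_near,
                                                    size=(n_half, 3))
    return np.concatenate([full_space, near_surface], axis=0)

def lr_at_epoch(base_lr, epoch, decay=0.5, step=500):
    """Step decay: halve the learning rate every 500 epochs."""
    return base_lr * decay ** (epoch // step)

pts = np.random.default_rng(0).uniform(-1, 1, size=(1000, 3))
queries = sample_queries(pts)          # (16384, 3) per iteration
lr_dec = lr_at_epoch(0.001, 1000)      # decoder LR after 1000 epochs: 0.00025
lr_emb = lr_at_epoch(0.0005, 0)        # initial embedding LR: 0.0005
```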