Implicit Neural Surface Deformation with Explicit Velocity Fields
Authors: Lu Sang, Zehranaz Canfes, Dongliang Cao, Florian Bernard, Daniel Cremers
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results demonstrate that our method significantly outperforms existing works, delivering superior results in both quality and efficiency. ... We validate our method on different datasets and demonstrate that our methods give rise to high-quality interpolations for challenging inputs, both quantitatively and qualitatively. |
| Researcher Affiliation | Academia | ¹Technical University of Munich, ²Munich Center of Machine Learning EMAIL, ³University of Bonn EMAIL |
| Pseudocode | No | The paper describes the methodology in detailed text and mathematical equations but does not present any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available: https://github.com/Sangluisme/Implicit-surf-Deformation |
| Open Datasets | Yes | We evaluated our methods using several datasets: Faust Bogo et al. (2014), SMAL Zuffi et al. (2017), SHREC16 Cosmo et al. (2016) and Deforming Things4D Li et al. (2021). ...To quantitatively evaluate the interpolated meshes, we use the fox and bear animation from the Deforming Things4D Li et al. (2021) dataset. ...To quantitatively evaluate our method, we use the 4D-Dress dataset Wang et al. (2024) |
| Dataset Splits | No | To generate training data, we sample 20,000 points on the surface of each mesh to create point clouds with partial correspondences. Each point cloud maintains ground-truth correspondences between 5% to 20% of its points. ... We sample correspondences in different proportions to the point cloud numbers: 1%, 5%, 10%, 20% to test the recovered intermediate mesh quality. |
| Hardware Specification | Yes | The run time is approximately 20 minutes on a GeForce GTX TITAN X GPU with CUDA for each pair. |
| Software Dependencies | No | We implement our code using JAX Bradbury et al. (2018) to enable fast higher-order derivative computations. (No version numbers or full dependency list are given.) |
| Experiment Setup | Yes | We train for a total of 10,000 epochs with batch size 4,000. The run time is approximately 20 minutes on a GeForce GTX TITAN X GPU with CUDA for each pair. ...We set the learning rate to 0.005 with a decay rate of 0.5 applied every 2,000 epochs. ...For the experiments shown in the paper, we set λf = 100, λm = 200, λv = 20 and λl = 10. |
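The reported setup can be sketched as code. This is a minimal illustrative sketch, not the authors' released implementation: the function names, the staircase form of the decay, and the loss-term keys are assumptions; only the numeric values (base learning rate 0.005, decay 0.5 every 2,000 epochs, and the λ weights) come from the quoted excerpt.

```python
def learning_rate(epoch, base_lr=0.005, decay_rate=0.5, interval=2000):
    """Staircase schedule: multiply by decay_rate once per `interval` epochs.

    Matches the reported "learning rate 0.005 with decay rate 0.5 within
    interval 2000", assuming a staircase (stepwise) decay.
    """
    return base_lr * decay_rate ** (epoch // interval)


# Reported loss weights lambda_f, lambda_m, lambda_v, lambda_l.
WEIGHTS = {"f": 100.0, "m": 200.0, "v": 20.0, "l": 10.0}


def total_loss(terms):
    """Weighted sum of individual loss terms keyed by the same letters.

    `terms` maps "f"/"m"/"v"/"l" to scalar loss values (hypothetical API).
    """
    return sum(WEIGHTS[k] * v for k, v in terms.items())


print(learning_rate(0))     # 0.005
print(learning_rate(2000))  # 0.0025 (first halving)
print(learning_rate(9999))  # after four halvings within the 10,000 epochs
```

Under this staircase assumption, the learning rate is halved four times over the 10,000 training epochs, ending at 0.005 × 0.5⁴ = 0.0003125.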