Generative Point Cloud Registration
Authors: Haobo Jiang, Jin Xie, Jian Yang, Liang Yu, Jianmin Zheng
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on 3DMatch and ScanNet datasets verify the effectiveness of our approach. |
| Researcher Affiliation | Collaboration | 1Nanyang Technological University, Singapore 2Nanjing University, China 3Nankai University, China 4Alibaba Group, China. |
| Pseudocode | No | The paper describes methods in prose and equations but does not contain explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our generative 3D registration paradigm is general and could be seamlessly integrated into various registration methods to enhance their performance. Extensive experiments on 3DMatch and ScanNet datasets verify the effectiveness of our approach. [Code] |
| Open Datasets | Yes | Experiments on 3DMatch and ScanNet datasets validate the effectiveness of our proposed method. ... Evaluation on ScanNet. We first perform model evaluation on a widely-used, large-scale indoor benchmark dataset, ScanNet (Dai et al., 2017). ... Evaluation on 3DMatch. We next evaluate our method on 3DMatch (Zeng et al., 2017), another widely-used benchmark dataset for 3D registration. |
| Dataset Splits | Yes | We follow the official data split to divide this dataset into the training, validation, and testing subsets, and construct view pairs by sampling image pairs that are 50 frames apart. ... We follow (El Banani et al., 2021; Yuan et al., 2023) as in ScanNet to produce the pairwise samples. Also, we increase the view separation from 20 to 40, resulting in point cloud pairs with lower overlap to increase the registration challenge. |
| Hardware Specification | Yes | The code for this project is implemented in PyTorch, and all experiments are conducted on a server equipped with an Intel i5 2.2 GHz CPU and a TITAN RTX GPU. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not specify a version number or list other key software components with their versions. |
| Experiment Setup | Yes | During the few-shot fine-tuning stage, we randomly select 3,000 sample pairs from the ScanNet training set (Dai et al., 2017) for model fine-tuning. Following the default fine-tuning configuration of ControlNet (Zhang et al., 2023), we adopt the AdamW optimizer (Loshchilov, 2017) with a learning rate of 1e-5 and set the training epoch to 10. ... to balance inference efficiency with registration precision, we set drgb = 64 as our default setting. ... a balanced weight (e.g., ω = 0.50) achieves higher performance. As a result, we adopt ω = 0.50 as our default hyperparameter configuration. |
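The dataset-split evidence describes constructing view pairs by sampling frames a fixed separation apart (50 frames for ScanNet). A minimal sketch of that pairing scheme, assuming a simple sequential frame list; the function name is illustrative and not taken from the paper's code:

```python
def build_view_pairs(frame_ids, separation=50):
    """Pair each frame with the frame `separation` steps later in the sequence.

    Illustrative only: the paper samples image pairs that are 50 frames
    apart on ScanNet (and widens the separation from 20 to 40 on 3DMatch
    to lower overlap); actual sampling may also filter by overlap.
    """
    return [(frame_ids[i], frame_ids[i + separation])
            for i in range(len(frame_ids) - separation)]

pairs = build_view_pairs(list(range(200)), separation=50)
print(len(pairs))   # 150 pairs from a 200-frame sequence
print(pairs[0])     # (0, 50)
```

Widening `separation` directly reduces view overlap, which is how the paper makes the 3DMatch pairs more challenging.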
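The experiment-setup row reports the few-shot fine-tuning recipe: AdamW with a learning rate of 1e-5 for 10 epochs, following ControlNet's default configuration. A hedged PyTorch sketch of just that optimizer setup, with a stand-in linear model and random placeholder batches in place of the paper's registration network and ScanNet sample pairs:

```python
import torch

# Stand-in model: the paper's actual architecture is not reproduced here.
model = torch.nn.Linear(64, 64)

# Reported configuration: AdamW optimizer, learning rate 1e-5.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

for epoch in range(10):  # "set the training epoch to 10"
    for batch in (torch.randn(8, 64) for _ in range(3)):  # placeholder data
        loss = model(batch).pow(2).mean()  # placeholder objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Only the optimizer choice, learning rate, and epoch count come from the paper; the model, data, and loss are placeholders.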