HybridReg: Robust 3D Point Cloud Registration with Hybrid Motions
Authors: Keyu Du, Hao Xu, Haipeng Li, Hong Qu, Chi-Wing Fu, Shuaicheng Liu
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive qualitative and quantitative comparisons on both widely-used indoor and outdoor datasets demonstrate the state-of-the-art performance of HybridReg. Supporting sections: Evaluation on HybridMatch; Ablation Study. |
| Researcher Affiliation | Collaboration | 1. University of Electronic Science and Technology of China; 2. Department of Computer Science and Engineering, CUHK; 3. Institute of Medical Intelligence and XR, CUHK; 4. Megvii Technology |
| Pseudocode | No | The paper describes methods through narrative text and mathematical equations in sections like 'Method Overview' and 'Loss Functions', but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code: https://github.com/hxwork/HybridRegPyTorch |
| Open Datasets | Yes | datasets like 3DMatch (Zeng et al. 2017), SIRA-PCR (Chen et al. 2023a), and PointRegGPT (Chen et al. 2024)... beyond the rigid backgrounds provided by 3D-FRONT (Fu et al. 2021a), we simulate non-rigid motions by applying instance-level rigid motions to objects from ShapeNet (Chang et al. 2015)... deforming objects from DeformingThings4D (Li et al. 2021). To evaluate the generalizability in outdoor scenes, we transfer models trained on 3DMatch to the ETH dataset (Pomerleau et al. 2012) |
| Dataset Splits | Yes | For each split, we create a validation/test set, consisting of 100/1,000 pairs. To assess robustness in indoor scenes, we employ the 3DMatch dataset (Zeng et al. 2017), comprising 62 scenes, with 46/8/8 scenes used for training/validation/testing. |
| Hardware Specification | Yes | All experiments run on 8 NVIDIA Tesla P40 GPUs. |
| Software Dependencies | No | The paper mentions 'PyTorch' implicitly via the code repository link and 'Adam optimizer' but does not specify version numbers for any software dependencies like PyTorch, Python, or CUDA. |
| Experiment Setup | Yes | To stabilize training, the uncertainty mask generator is first trained with L1 loss for 30 epochs, then fine-tuned with uncertainty mask loss for 20 epochs. Adam optimizer (Kingma and Ba 2015) is used with a batch size of 8 and a learning rate of 1e-4. |
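The experiment setup above can be sketched as a two-stage PyTorch training schedule. This is a minimal illustration, not the authors' code: the `nn.Linear` model and `uncertainty_mask_loss` are placeholders (the paper's uncertainty mask generator and mask loss are not reproduced here); only the Adam optimizer, learning rate 1e-4, batch size 8, and the 30-epoch L1 / 20-epoch fine-tuning split come from the paper.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder stand-in for the paper's uncertainty mask generator.
model = nn.Linear(3, 1)

def uncertainty_mask_loss(pred, target):
    # Placeholder: the paper's actual uncertainty mask loss is not specified here.
    return nn.functional.smooth_l1_loss(pred, target)

# Hyperparameters stated in the paper: Adam, lr 1e-4, batch size 8.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
data = DataLoader(TensorDataset(torch.randn(32, 3), torch.randn(32, 1)),
                  batch_size=8)

# Stage 1: 30 epochs with L1 loss; stage 2: 20 epochs with the mask loss.
schedule = [(30, nn.functional.l1_loss), (20, uncertainty_mask_loss)]
for num_epochs, loss_fn in schedule:
    for _ in range(num_epochs):
        for x, y in data:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
```

The schedule list makes the stabilization strategy explicit: swapping the loss function between stages reuses the same optimizer state, which is one plausible reading of "fine-tuned" in the paper's description.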