Partial Point Cloud Registration with Multi-view 2D Image Learning
Authors: Yue Zhang, Yue Wu, Wenping Ma, Maoguo Gong, Hao Li, Biao Hou
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that our method outperforms the state-of-the-art methods without additional 2D training data. We conduct experiments on extensive benchmark datasets, and the experimental results demonstrate the effectiveness of our method. |
| Researcher Affiliation | Academia | Yue Zhang, Yue Wu*, Wenping Ma*, Maoguo Gong, Hao Li, Biao Hou Xidian University, China EMAIL, {ywu@, wpma@mail., haoli@}xidian.edu.cn, EMAIL, EMAIL |
| Pseudocode | No | The paper describes the methodology in prose and through architectural diagrams (Figure 1 and Figure 2), but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement regarding the release of source code, nor does it include a link to a code repository. |
| Open Datasets | Yes | ModelNet40: The ModelNet40 (Wu et al. 2015) consists of 12,311 meshed CAD models from 40 categories, where 9,843 models are used for training and 2,468 models are used for testing. ... 3DMatch (Zeng et al. 2017) contains 62 scenes, among which 46 are used for training, 8 for validation, and 8 for testing. |
| Dataset Splits | Yes | The ModelNet40 (Wu et al. 2015) consists of 12,311 meshed CAD models from 40 categories, where 9,843 models are used for training and 2,468 models are used for testing. ... 3DMatch (Zeng et al. 2017) contains 62 scenes, among which 46 are used for training, 8 for validation, and 8 for testing. |
| Hardware Specification | Yes | All experiments run on an AMD Ryzen 9 5950X CPU at 3.4 GHz and a single Nvidia RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions using the AdamW optimizer and refers to models like ResNet, ViT, and KPConv, but does not provide specific version numbers for any software libraries, frameworks, or programming languages. |
| Experiment Setup | Yes | We use the AdamW (Loshchilov and Hutter 2017) optimizer to train our network, starting with a 0.0001 learning rate and 0.0001 weight decay. For ModelNet40, we train for 200 epochs with a batch size of 4, and multiply the learning rate by 0.5 at epoch 70. For 3DMatch, we train for 70 epochs with a batch size of 4, halving the learning rate every 20 epochs. Our 2D and 3D encoders output final high-dimensional features with dimension c = 256, and H and W in the multi-view projection are set to 224. In the loss function, α and β are set to 0.1 and γ is set to 1.0 for all experiments. |
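The optimizer and learning-rate schedule quoted in the setup row can be sketched in PyTorch. This is a minimal illustration of the stated hyperparameters only, not the authors' released code: the `nn.Linear` model is a placeholder for the paper's 2D/3D encoders, and the batch size, loss weights, and data pipeline are omitted.

```python
import torch
from torch import nn, optim

# Placeholder network standing in for the paper's encoders (feature dim c = 256).
model = nn.Linear(256, 256)

# AdamW with learning rate 1e-4 and weight decay 1e-4, as stated in the paper.
optimizer = optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-4)

# ModelNet40 schedule: 200 epochs, learning rate multiplied by 0.5 at epoch 70.
scheduler_modelnet40 = optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[70], gamma=0.5
)

# 3DMatch schedule would instead halve the learning rate every 20 epochs
# over 70 epochs, e.g.:
#   optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)

for epoch in range(200):
    # ... one training epoch (forward pass, loss, optimizer.step()) ...
    scheduler_modelnet40.step()

print(optimizer.param_groups[0]["lr"])  # 5e-05 after the epoch-70 decay
```

Stepping the scheduler without an intervening `optimizer.step()` emits a PyTorch warning but still updates the learning rate, which is all this sketch needs to show.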