Top-I2P: Explore Open-Domain Image-to-Point Cloud Registration Using Topology Relationship
Authors: Pei An, Jiaqi Yang, Muyao Peng, You Yang, Qiong Liu, Jie Ma, Liangliang Nan
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on 7Scenes, RGBD-V2, ScanNet, and self-collected I2P datasets demonstrate that Top-I2P achieves superior registration performance in open-domain scenarios. |
| Researcher Affiliation | Academia | 1. Huazhong University of Science and Technology, China; 2. Northwestern Polytechnical University, China; 3. Delft University of Technology, Netherlands |
| Pseudocode | No | The paper describes the methodology using textual explanations and mathematical equations (e.g., Eqs. 1-10) but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access information for source code, such as a repository link, an explicit code release statement, or mention of code in supplementary materials for the methodology described. |
| Open Datasets | Yes | Extensive experiments on 7Scenes, RGBD-V2, ScanNet, and self-collected I2P datasets demonstrate that Top-I2P achieves superior registration performance in open-domain scenarios. ... 7-Scenes [Glocker et al., 2013], RGBD-V2 [Lai et al., 2014], ScanNet [Dai et al., 2017] |
| Dataset Splits | Yes | One scene is chosen for training, while the remaining six scenes are used for testing. Image and point cloud pairs are created from the RGB-D data. |
| Hardware Specification | No | The paper discusses memory usage in Table 8 (e.g., 'Memory usage/MB'), but it does not specify exact GPU/CPU models, processor types, or detailed computer specifications used for running its experiments. |
| Software Dependencies | No | The paper mentions specific models and layers used (e.g., ResNet, KPConv, GCN layers, Sinkhorn layer, SAM) with their corresponding citations, but it does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions) needed to replicate the experiment. |
| Experiment Setup | Yes | The stack number of the message passing submodule in FTI is 2. The first training stage consists of 5 epochs, while the second stage consists of 20 epochs. To generate the ground truth (GT) A_top, we set an intersection-over-union (IoU) threshold γIoU = 0.1. ... α is set to 0.86. ... The learning rate and optimizer of Top-I2P are the same as in [Li et al., 2023]. |
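The quoted setup states that the ground-truth topology adjacency A_top is built by thresholding pairwise intersection-over-union at γIoU = 0.1. A minimal sketch of that thresholding step is below; the region representation (sets of point indices), the helper names, and the example data are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: building a GT topology adjacency by IoU thresholding,
# using the threshold gamma_IoU = 0.1 reported in the paper.
# Regions are modeled here as sets of point indices (an assumption).

GAMMA_IOU = 0.1  # IoU threshold from the paper's experiment setup


def iou(a: set, b: set) -> float:
    """Intersection-over-union of two index sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)


def build_gt_adjacency(regions: list) -> list:
    """Binary adjacency: 1 where two distinct regions overlap with IoU > gamma."""
    n = len(regions)
    return [
        [1 if i != j and iou(regions[i], regions[j]) > GAMMA_IOU else 0
         for j in range(n)]
        for i in range(n)
    ]


# Toy example: regions 0 and 1 share points {3, 4}; region 2 is disjoint.
regions = [{1, 2, 3, 4}, {3, 4, 5, 6}, {10, 11}]
A_top = build_gt_adjacency(regions)
print(A_top)  # [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
```

Here IoU(region 0, region 1) = 2/6 ≈ 0.33 > 0.1, so those two nodes are connected in A_top, while the disjoint region stays isolated.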