Cross-PCR: A Robust Cross-Source Point Cloud Registration Framework

Authors: Guiyu Zhao, Zhentao Guo, Zewen Du, Hongbin Ma

AAAI 2025 | Venue PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental We evaluate our Cross-PCR on both cross-source and same-source datasets. On the 3DCSR dataset (Huang et al. 2021b), we achieve the best performance with great improvement. Our method successfully achieves the challenging registration from depth camera to LiDAR, with a 57.6 percentage point (pp) improvement in registration recall (RR) and 63.5 pp in feature matching recall (FMR). It also achieves the best performance on 3DMatch, while maintaining robustness under diverse downsampling densities. ... The results on Kinect-SFM are shown in Table 1. ... Table 2: Quantitative results on 3DMatch and 3DLoMatch. ... Table 3: Quantitative results on the 3DMatch-DD benchmark. ... Ablation Study.
Researcher Affiliation Academia Guiyu Zhao, Zhentao Guo, Zewen Du, Hongbin Ma* School of Automation, Beijing Institute of Technology EMAIL, EMAIL, EMAIL
Pseudocode No The paper describes the methodology using textual explanations and mathematical formulas, but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor structured code-like procedures.
Open Source Code No The paper does not contain any explicit statements about releasing source code for the described methodology, nor does it provide any links to a code repository.
Open Datasets Yes We evaluate our Cross-PCR on both cross-source dataset 3DCSR (Huang et al. 2021b) and same-source dataset 3DMatch (Zeng et al. 2017). ... 3DCSR dataset (Huang et al. 2021b) is an indoor cross-source dataset ... The 3DMatch dataset (Zeng et al. 2017) is a large indoor dataset...
Dataset Splits Yes The 3DMatch dataset (Zeng et al. 2017) is a large indoor dataset containing 62 scenarios, of which 46 are used for training, 8 for validation, and 8 for testing. Following (Huang et al. 2021a), point cloud pairs with overlap > 30% are split as 3DMatch, and those with 10%-30% overlap are split as 3DLoMatch (Huang et al. 2021a). ... Following (Huang et al. 2021a), we preprocess the original 3DMatch dataset. Then, we downsample only the target point cloud via voxel downsampling to simulate the large density difference between cross-source point cloud pairs. We conduct three experiments: no voxel downsampling, voxel downsampling with a 0.05 m voxel side length, and voxel downsampling with a 0.1 m side length.
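The voxel-downsampling protocol quoted above (none, 0.05 m, and 0.1 m voxels, applied to the target cloud only) is easy to reproduce independently. The following pure-Python sketch keeps the first point encountered in each cubic voxel; it is an illustration of the described preprocessing, not the paper's (unreleased) code, and the function name is ours:

```python
def voxel_downsample(points, voxel_size):
    """Keep one representative point (the first seen) per cubic voxel.

    points: iterable of (x, y, z) tuples, coordinates in meters.
    voxel_size: voxel side length in meters (e.g. 0.05 or 0.1 as in the paper).
    """
    kept = {}
    for p in points:
        # Integer voxel index along each axis identifies the containing voxel.
        key = tuple(int(c // voxel_size) for c in p)
        if key not in kept:
            kept[key] = p
    return list(kept.values())
```

A coarser voxel (0.1 m vs. 0.05 m) retains fewer target points, widening the density gap between source and target and making the simulated cross-source setting harder.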
Hardware Specification No The paper does not specify any hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies No The paper mentions various methods and backbones (e.g., KPConv-FPN, Transformer), but does not provide specific software dependencies with version numbers (e.g., Python version, library versions like PyTorch 1.x or CUDA 11.x) that would be needed to replicate the experiments.
Experiment Setup No Loss Function The loss function L = Ls + Ld is composed of sparse matching loss and dense matching loss. To improve robustness to low overlap, the sparse matching loss uses the overlap-aware circle loss (Qin et al. 2022). For efficiency, we use only circle loss (Sun et al. 2020) to supervise metric learning of dense point features. ... where τ0 is a distance threshold (i.e., 0.1 m). ... For other methods, we employ a 50k-iteration RANSAC to estimate the transformation. While the paper describes the loss function, evaluation metrics, and a distance threshold (τ0), it does not provide concrete training hyperparameters such as learning rates, batch sizes, number of epochs, or optimizer configurations in the main text.
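The distance threshold τ0 = 0.1 m quoted above is the standard inlier test behind metrics such as inlier ratio: a putative correspondence counts as correct if the ground-truth transform maps the source point to within τ0 of its matched target point. A minimal sketch, with illustrative names since the paper releases no code:

```python
import math

def inlier_ratio(correspondences, transform, tau0=0.1):
    """Fraction of correspondences whose residual under the ground-truth
    transform is below tau0 (0.1 m in the paper).

    correspondences: list of ((px, py, pz), (qx, qy, qz)) point pairs.
    transform: callable mapping a source point to target coordinates.
    """
    if not correspondences:
        return 0.0
    inliers = sum(
        1 for p, q in correspondences
        if math.dist(transform(p), q) < tau0  # residual test against tau0
    )
    return inliers / len(correspondences)
```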