PSReg: Prior-guided Sparse Mixture of Experts for Point Cloud Registration

Authors: Xiaoshui Huang, Zhou Huang, Yifan Zuo, Yongshun Gong, Chengdong Zhang, Deyang Liu, Yuming Fang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our extensive experiments demonstrate the effectiveness of our method, achieving state-of-the-art registration recall (95.7%/79.3%) on the 3DMatch/3DLoMatch benchmark. Moreover, we also test the performance on ModelNet40 and demonstrate excellent performance."
Researcher Affiliation | Academia | 1 Shanghai Jiao Tong University, 2 Jiangxi University of Finance and Economics, 3 Shandong University, 4 Anqing Normal University, 5 Huaihua University, 6 Jiangxi Provincial Key Laboratory of Multimedia Intelligent Processing
Pseudocode | No | The paper describes methods and processes (e.g., PSMoE Module, Prior Superpoint Correspondence Prediction, Prior-guided Routing, PSReg Algorithm Design) but does not present them in a structured pseudocode or algorithm block format.
Open Source Code | No | The paper does not explicitly state that source code is provided, nor does it include a link to a code repository. The phrase "We release our code..." or similar is absent.
Open Datasets | Yes | "We validate the efficacy of our approach on real-world datasets such as 3DMatch/3DLoMatch and synthetic datasets like ModelNet/ModelLoNet. The 3DMatch (Zeng et al. 2017) comprises data collected from 62 indoor scenes... ModelNet40 (Wu et al. 2015) consists of CAD models of 12,311 objects from 40 different categories."
Dataset Splits | Yes | "The 3DMatch (Zeng et al. 2017) comprises data collected from 62 indoor scenes, with 46 scenes designated for training, 8 for validation, and 8 for testing. ... A total of 5,112 samples are used for training, 1,202 samples for validation, and 1,266 samples for testing."
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not specify software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9).
Experiment Setup | Yes | "The optimization of PSReg is to minimize two correspondence supervision losses (Lc, Lf) and a load balancing loss (Lg). The coarse correspondence loss Lc uses the overlap-aware circle loss (Qin et al. 2022)... The fine correspondence loss Lf uses the negative log-likelihood loss. To encourage a balanced load across experts, we add a load balancing loss Lg, which is the same as in (Fedus, Zoph, and Shazeer 2022). ... We trained PEAL using a 3D prior as the input from scratch for a fair comparison. In addition, we adopted the same iterative update strategy, with 6 iterations."
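The load balancing loss Lg that the paper borrows from Fedus, Zoph, and Shazeer (2022) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function name, the default scaling coefficient `alpha`, and the array shapes are illustrative assumptions. The loss is alpha * N * sum_i(f_i * P_i), where f_i is the fraction of tokens routed to expert i and P_i is the mean router probability for expert i; it is minimized when routing is uniform across the N experts.

```python
import numpy as np

def load_balancing_loss(router_probs, expert_assignments, num_experts, alpha=0.01):
    """Switch-Transformer-style auxiliary load-balancing loss (sketch).

    router_probs: (num_tokens, num_experts) softmax outputs of the router.
    expert_assignments: (num_tokens,) index of the expert each token is sent to.
    Returns alpha * N * sum_i f_i * P_i, equal to alpha when routing is uniform.
    """
    num_tokens = router_probs.shape[0]
    # f_i: fraction of tokens dispatched to expert i
    f = np.bincount(expert_assignments, minlength=num_experts) / num_tokens
    # P_i: mean router probability mass assigned to expert i
    P = router_probs.mean(axis=0)
    return alpha * num_experts * float(np.dot(f, P))
```

With perfectly balanced routing the loss equals `alpha`; skewed routing (e.g., every token sent to one expert that also receives most of the probability mass) yields a strictly larger value, which is what pushes the router toward an even load across experts.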