SymmCompletion: High-Fidelity and High-Consistency Point Cloud Completion with Symmetry Guidance

Authors: Hongyu Yan, Zijun Li, Kunming Luo, Li Lu, Ping Tan

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Qualitative and quantitative evaluations on several benchmark datasets demonstrate that our method outperforms state-of-the-art completion networks."
Researcher Affiliation | Academia | Hongyu Yan1*, Zijun Li2*, Kunming Luo1, Li Lu2, Ping Tan1. 1Hong Kong University of Science and Technology; 2Sichuan University. (Author email addresses redacted.)
Pseudocode | No | The paper describes the Local Symmetry Transformation Network (LSTNet) and Symmetry-Guidance Transformer (SGFormer) through descriptive text and architectural diagrams (Figures 1, 3, 4) but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code: https://github.com/HongyuYann/SymmCompletion.git
Open Datasets | Yes | "In our experiment, we use three widely adopted synthetic datasets for training and evaluation, including the PCN dataset (Yuan et al. 2018), MVP dataset (Pan et al. 2021), and ShapeNet55/34 dataset (Yu et al. 2021). Additionally, we test our method on the KITTI (Geiger et al. 2013) dataset to evaluate the network's generalization ability in real-world scenarios."
Dataset Splits | Yes | "In our experiment, we use three widely adopted synthetic datasets for training and evaluation, including the PCN dataset (Yuan et al. 2018), MVP dataset (Pan et al. 2021), and ShapeNet55/34 dataset (Yu et al. 2021). Additionally, we test our method on the KITTI (Geiger et al. 2013) dataset to evaluate the network's generalization ability in real-world scenarios. Following previous methods (Yu et al. 2021; Zhu et al. 2023), we study the generalization capability of SymmCompletion on the 34 seen categories and 21 unseen categories."
Hardware Specification | No | The paper does not explicitly provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers (e.g., Python, PyTorch, CUDA versions).
Experiment Setup | No | The paper does not explicitly detail specific experimental setup parameters such as hyperparameters (learning rate, batch size, number of epochs), optimizer settings, or training schedules in the main text.