Multi-Modal Point Cloud Completion with Interleaved Attention Enhanced Transformer

Authors: Chenghao Fang, Jianqing Liang, Jiye Liang, Hangkun Wang, Kaixuan Yao, Feilong Cao

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper includes sections such as "4 Experiments and Analyses", "4.1 Datasets", "4.2 Experimental Settings", "4.3 Results on ShapeNet-ViPC", "4.4 Generalization Ability Evaluation", and "4.5 Ablation Study". It presents quantitative results in tables (e.g., Tables 1, 2, 3) and qualitative comparisons in figures (e.g., Figures 5, 6, 7), comparing the proposed method with various single-modal and multi-modal point cloud completion methods on benchmark datasets such as ShapeNet-ViPC and KITTI. This indicates empirical studies and data analysis.
Researcher Affiliation | Academia | All authors are affiliated with universities: Shanxi University, Zhejiang Normal University, and Northwest University. This indicates an academic affiliation.
Pseudocode | No | The paper describes its methodology using figures (Figures 2, 3, 4) and mathematical formulations (equations 1–13) but does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | The source code is freely accessible at https://github.com/doldolOuO/IAET.
Open Datasets | Yes | The paper uses the "ShapeNet-ViPC [Zhang et al., 2021a]" and "KITTI [Geiger et al., 2012]" datasets, both of which are well-known public datasets cited with authors and years, providing concrete access information.
Dataset Splits | No | The paper states: "Same as with previous methods [Zhang et al., 2021a; Zhu et al., 2024; Aiello et al., 2022; Xu et al., 2024], we conduct known category experiments on 8 categories and generalization experiments on the remaining categories." and "training on eight known categories and testing on four unknown categories." While it details category splits, it does not provide specific training/test/validation split percentages or counts within these categories, nor does it cite a standard train/test/validation split.
Hardware Specification | Yes | All experiments are performed on an NVIDIA RTX A6000.
Software Dependencies | No | The paper mentions using the "Adam [Kingma and Ba, 2015] optimizer" but does not specify version numbers for any key software components, libraries, or programming languages used.
Experiment Setup | Yes | The paper explicitly states: "We use the Adam [Kingma and Ba, 2015] optimizer with an initial learning rate of 0.001. The learning rate decayed every 20 epochs with a decay rate of 0.7. Our method converges after 200 epochs with a batch size of 64."
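The reported schedule (initial learning rate 0.001, multiplied by 0.7 every 20 epochs over a 200-epoch run) is a standard step decay. A minimal sketch of that schedule follows; the function name and defaults are illustrative, not from the paper, which in a PyTorch setup would correspond to `torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.7)`:

```python
def step_decay_lr(epoch, base_lr=1e-3, gamma=0.7, step=20):
    """Learning rate at a given epoch under a step-decay schedule:
    the base rate is multiplied by `gamma` once every `step` epochs."""
    return base_lr * gamma ** (epoch // step)

# Rates at each decay boundary of the paper's 200-epoch run
schedule = [step_decay_lr(e) for e in range(0, 200, 20)]
```

Under this sketch the final learning rate (epochs 180–199) is 0.001 x 0.7^9, roughly 4.0e-5.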