Rotation Invariant Spatial Networks for Single-View Point Cloud Classification

Authors: Feng Luan, Jiarui Hu, Changshi Zhou, Zhipeng Wang, Jiguang Yue, Yanmin Zhou, Bin He

IJCAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Experimental results show that this network performs better than other state-of-the-art methods, evaluated on four public datasets. We achieved an overall accuracy of 94.7% (+2.0%) on ModelNet40, 93.4% (+5.9%) on MVP, 94.7% (+6.3%) on PCN and 94.8% (+1.7%) on ScanObjectNN. Our project website is https://luxurylf.github.io/RISpaNetproject/.
Researcher Affiliation Academia Shanghai Research Institute for Intelligent Autonomous Systems; National Key Laboratory of Autonomous Intelligent Unmanned Systems, Tongji University; Frontiers Science Center for Intelligent Autonomous Systems; College of Electronics and Information Engineering, Tongji University
Pseudocode No The paper describes the methodology using textual explanations and diagrams (Figure 2, 3, 4, 5) but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code Yes Our project website is https://luxurylf.github.io/RISpaNetproject/.
Open Datasets Yes We achieved an overall accuracy of 94.7% (+2.0%) on ModelNet40, 93.4% (+5.9%) on MVP, 94.7% (+6.3%) on PCN and 94.8% (+1.7%) on ScanObjectNN. ... The network was tested on an NVIDIA RTX 3080 Ti GPU with four datasets: ModelNet40 [Wu et al., 2015], MVP [Pan et al., 2023], PCN [Yuan et al., 2018] and ScanObjectNN [Uy et al., 2019].
Dataset Splits No The paper states that experiments were performed on ModelNet40, MVP, PCN, and ScanObjectNN datasets, and mentions different rotation scenarios for training and testing (z/z, z/SO3, SO3/SO3). However, it does not provide specific numerical percentages or sample counts for training, validation, or test splits for any of these datasets in the main text.
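The z/z, z/SO3 and SO3/SO3 protocols mentioned above refer to rotating shapes about the gravity (z) axis versus by arbitrary 3D rotations during training and testing. A minimal sketch of how such rotations are commonly generated (the function names and the Shoemake quaternion sampling method are assumptions, not taken from the paper):

```python
import math
import random


def rot_z(theta):
    """3x3 rotation about the z axis (the 'z' train/test protocol)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]


def rand_so3(rng=random):
    """Uniform random rotation in SO(3) via Shoemake's quaternion method
    (the 'SO3' train/test protocol)."""
    u1, u2, u3 = rng.random(), rng.random(), rng.random()
    x = math.sqrt(1 - u1) * math.sin(2 * math.pi * u2)
    y = math.sqrt(1 - u1) * math.cos(2 * math.pi * u2)
    z = math.sqrt(u1) * math.sin(2 * math.pi * u3)
    w = math.sqrt(u1) * math.cos(2 * math.pi * u3)
    # Convert unit quaternion (x, y, z, w) to a rotation matrix.
    return [
        [1 - 2 * (y * y + z * z), 2 * (x * y - z * w),     2 * (x * z + y * w)],
        [2 * (x * y + z * w),     1 - 2 * (x * x + z * z), 2 * (y * z - x * w)],
        [2 * (x * z - y * w),     2 * (y * z + x * w),     1 - 2 * (x * x + y * y)],
    ]
```

In the z/SO3 setting, for example, training shapes would only pass through `rot_z` while test shapes pass through `rand_so3`, which is what makes the protocol a stress test for rotation invariance.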
Hardware Specification Yes The network was tested on an NVIDIA RTX 3080 Ti GPU with four datasets: ModelNet40 [Wu et al., 2015], MVP [Pan et al., 2023], PCN [Yuan et al., 2018] and ScanObjectNN [Uy et al., 2019].
Software Dependencies No The paper mentions using the Adam optimizer and notes that results from other works were trained on their official code. However, it does not specify versions for any programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow, CUDA).
Experiment Setup Yes The Adam optimizer was used. The learning rate started at 0.00033 and was reduced by 20% every 20 epochs. The batch for training was 56. ... For training in the two-branch way, the Adam optimizer was used. The learning rate started at 0.00044 and was reduced by 15% every 10 epochs. The batch for training was 40. As for λ, it was set to 0.001 for the first 10 epochs, 0.0001 for epochs 11 to 20 and 0.00001 for subsequent epochs.
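The optimizer settings quoted above describe step-decay schedules for the learning rate and a piecewise-constant schedule for λ. A minimal sketch of those schedules (function names are hypothetical, and whether the paper applies the decay multiplicatively as below, with 1-indexed epochs, is an assumption):

```python
def lr_single_branch(epoch):
    """Start at 3.3e-4, reduce by 20% every 20 epochs (1-indexed epochs)."""
    return 0.00033 * 0.80 ** ((epoch - 1) // 20)


def lr_two_branch(epoch):
    """Two-branch training: start at 4.4e-4, reduce by 15% every 10 epochs."""
    return 0.00044 * 0.85 ** ((epoch - 1) // 10)


def lambda_weight(epoch):
    """Loss weight lambda: 1e-3 for epochs 1-10, 1e-4 for 11-20, 1e-5 after."""
    if epoch <= 10:
        return 1e-3
    if epoch <= 20:
        return 1e-4
    return 1e-5
```

In a PyTorch setup, the two learning-rate schedules would correspond to `torch.optim.lr_scheduler.StepLR` with `step_size=20, gamma=0.8` and `step_size=10, gamma=0.85` respectively, wrapped around Adam.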