SMART-PC: Skeletal Model Adaptation for Robust Test-Time Training in Point Clouds

Authors: Ali Bahri, Moslem Yazdanpanah, Sahar Dastani, Mehrdad Noori, Gustavo Adolfo Vargas Hakim, David Osowiechi, Farzad Beizaee, Ismail Ben Ayed, Christian Desrosiers

ICML 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments on benchmark datasets, including ModelNet40-C, ShapeNet-C, and ScanObjectNN-C, demonstrate that SMART-PC achieves state-of-the-art results, outperforming existing methods such as MATE in terms of both accuracy and computational efficiency.
Researcher Affiliation Academia 1LIVIA, ÉTS Montréal, Canada. 2International Laboratory on Learning Systems (ILLS). Correspondence to: Ali Bahri <EMAIL>.
Pseudocode No The paper describes methods using mathematical equations and textual explanations but does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code Yes The implementation is available at: https://github.com/AliBahri94/SMART-PC.
Open Datasets Yes Extensive experiments on benchmark datasets, including ModelNet40-C, ShapeNet-C, and ScanObjectNN-C, demonstrate that SMART-PC achieves state-of-the-art results... ModelNet40-C (Sun et al., 2022) serves as a robustness benchmark... ShapeNetCore-v2 (Chang et al., 2015) is a widely used dataset... ScanObjectNN (Uy et al., 2019) is a real-world dataset...
Dataset Splits Yes ShapeNetCore-v2 (Chang et al., 2015) is a widely used dataset for point cloud classification, containing 51,127 3D shapes spanning 55 categories. It is partitioned into three subsets: 70% for training, 10% for validation, and 20% for testing... ScanObjectNN (Uy et al., 2019) is a real-world dataset for point cloud classification, comprising 2,309 training samples and 581 testing samples across 15 categories.
Hardware Specification Yes All experiments were conducted using a single NVIDIA A6000 GPU, ensuring consistency across all tested configurations.
Software Dependencies No Our approach was implemented using PyTorch, with the codebase organized into two main components: Pretrain and Adaptation. However, no specific version numbers for PyTorch or other software dependencies are provided.
Experiment Setup Yes The batch size is set to 1 for both adaptation modes, with 1 iteration for the online mode and 20 iterations for the standard mode, identical to the MATE paper for a fair comparison. The optimizer and learning rate are also identical to those used in the MATE paper. For augmentation, scale-transfer is used during pre-training, similar to MATE.
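The two adaptation modes quoted above can be sketched as a simple test-time loop. This is a hypothetical illustration only, not the paper's actual code: the `adapt` step stands in for SMART-PC's skeletal self-supervised update, and `run_test_time_training` only shows how batch size 1 and the 1-vs-20 iteration counts shape the loop.

```python
def adapt(sample, num_iterations):
    """Run num_iterations of a (dummy) self-supervised update on one sample.

    In the real method this would be a skeletal-reconstruction update;
    here we only count the update steps to illustrate loop structure.
    """
    updates = 0
    for _ in range(num_iterations):
        updates += 1
    return updates


def run_test_time_training(test_stream, mode="online"):
    """Adapt to a stream of test samples, one at a time (batch size 1)."""
    # Per the setup quoted above: 1 iteration online, 20 in standard mode.
    iters = 1 if mode == "online" else 20
    total_updates = 0
    for sample in test_stream:
        total_updates += adapt(sample, iters)
    return total_updates
```

For example, streaming 5 test samples in online mode performs 5 updates in total, while standard mode performs 100.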