Rethinking Point Cloud Data Augmentation: Topologically Consistent Deformation
Authors: Jian Bi, Qianliang Wu, Xiang Li, Shuo Chen, Jianjun Qian, Lei Luo, Jian Yang
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our extensive experiments demonstrate that our method consistently outperforms existing Mixup and Deformation methods on various benchmark point cloud datasets, improving performance for shape classification and part segmentation tasks. Specifically, when used with PointNet++ and DGCNN, our method achieves a state-of-the-art accuracy of 90.2% in shape classification on the real-world ScanObjectNN dataset. We conducted extensive experiments and demonstrated how SinPoint improves the performance of three representative networks on multiple datasets. Our findings show that the augmentations we produce are visually realistic and beneficial to the models, further validating the importance of our approach to understanding the local structure of point clouds. In this section, we demonstrate the effectiveness of our proposed method, SinPoint, with various benchmark datasets and baselines. First, for 3D shape classification, we evaluate the generalization performance and robustness using SinPoint-SSF in classification. Next, we compare our SinPoint-MSF with existing data augmentation methods in part segmentation. More ablation studies and implementation details are provided in the Appendix. |
| Researcher Affiliation | Academia | 1PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, School of Computer Science and Engineering, Nanjing University of Science and Technology, China 2VCIP, CS, Nankai University, China 3School of Intelligence Science and Technology, Nanjing University, China. Correspondence to: Lei Luo <EMAIL>, Jian Yang <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 SinPoint Without Markov. Input: original point clouds P = {p1, p2, ..., pn}; condition key for SinPoint-SSF or SinPoint-MSF; number of anchor points k; amplitude a; angular velocity ω. Output: P |
| Open Source Code | Yes | We release the code at https://github.com/CSBJian/SinPoint. |
| Open Datasets | Yes | Datasets. For the classification task, we use two synthetic datasets, ModelNet40 (Wu et al., 2015) and Reduced MN40, and two real-world datasets from ScanObjectNN (Uy et al., 2019): OBJ_ONLY and PB_T50_RS. For the part segmentation task, we adopt a synthetic dataset, ShapeNetPart (Yi et al., 2016). |
| Dataset Splits | Yes | ModelNet40 is a widely used synthetic benchmark dataset containing 9,840 CAD models in the training set and 2,468 CAD models in the validation set, covering 40 common object categories. ScanObjectNN is a real-world dataset split into 80% for training and 20% for evaluation. Among its variants, we adopt the simplest version, OBJ_ONLY, which has 2,309 and 581 scanned objects in the training and validation sets, respectively, and the most challenging version, PB_T50_RS, a perturbed variant with 11,416 and 2,882 scanned objects in the training and validation sets, respectively. Both have 15 classes. For the part segmentation task, we adopt a synthetic dataset, ShapeNetPart (Yi et al., 2016), which contains 14,007 training samples and 2,874 validation samples. |
| Hardware Specification | Yes | We conduct our point cloud experiments using Python and PyTorch on two NVIDIA TITAN RTX GPUs. |
| Software Dependencies | No | The paper mentions using "Python and PyTorch" for experiments but does not provide specific version numbers for these or any other software libraries or dependencies, so it does not meet the requirement of version-pinned dependencies for reproducibility. |
| Experiment Setup | Yes | Following the original configurations in (Qi et al., 2017a;b; Wang et al., 2019b), we use the SGD optimizer with an initial learning rate of 10^-1 and weight decay of 10^-3 for PointNet (Qi et al., 2017a) and PointNet++ (Qi et al., 2017b), and SGD with an initial learning rate of 10^-2 and weight decay of 10^-4 for DGCNN (Wang et al., 2019b). We train models with a batch size of 32 for 300 epochs. For the hyperparameters of SinPoint-SSF and SinPoint-MSF, we set A = 0.6, ω = 2.5, k = 4 in all experiments. In the Markov chain augmentation process, we choose scaling, shifting, rotation, and jittering as the base transformations, with m = 4 and a transition probability of 25%. |
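The Algorithm 1 entry above only sketches SinPoint's inputs (amplitude a, angular velocity ω, anchor points k). The paper's actual deformation is not reproduced here, but the general idea of a sinusoidal, topology-preserving point cloud deformation can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the function name `sin_deform`, the choice to displace points along one random axis by a sine of their coordinate along another, and the default values mirror the reported A = 0.6 and ω = 2.5 but are not the authors' implementation.

```python
import numpy as np

def sin_deform(points, a=0.6, omega=2.5, rng=None):
    """Hypothetical sketch of a sinusoidal point cloud deformation.

    Displaces each point along one randomly chosen axis by a sine of its
    coordinate along a different randomly chosen axis. Because the map is
    smooth and continuous, nearby points stay nearby, so the local
    structure (topology) of the cloud is preserved.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Pick two distinct axes: one to read from, one to displace along.
    axis_in, axis_out = rng.choice(3, size=2, replace=False)
    deformed = points.copy()
    deformed[:, axis_out] += a * np.sin(omega * points[:, axis_in])
    return deformed

# Example: deform a 1024-point cloud sampled from the unit cube.
pc = np.random.default_rng(0).uniform(-1.0, 1.0, size=(1024, 3))
aug = sin_deform(pc, a=0.6, omega=2.5, rng=np.random.default_rng(1))
```

Since the sine displacement is bounded by the amplitude a, the deformed cloud remains a plausible variant of the original shape rather than an unrecognizable distortion.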
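The experiment setup above mentions a Markov chain augmentation with four base transformations (scaling, shifting, rotation, jittering), m = 4 steps, and a 25% transition probability. The following is a minimal, hypothetical sketch of such a chain; the function name, the per-transform parameter ranges, and the interpretation of "transition probability" as the chance of applying a transform at each step are all assumptions, not the paper's implementation.

```python
import numpy as np

def markov_augment(points, m=4, p_apply=0.25, rng=None):
    """Hypothetical sketch of a Markov-chain point cloud augmentation.

    Runs m steps; at each step, with probability p_apply, one of four
    base transforms (scaling, shifting, z-rotation, jittering) is chosen
    uniformly at random and applied to the cloud.
    """
    rng = np.random.default_rng() if rng is None else rng
    out = points.copy()
    for _ in range(m):
        if rng.random() >= p_apply:
            continue  # no transition at this step
        t = rng.integers(4)
        if t == 0:
            out *= rng.uniform(0.8, 1.25)                 # uniform scaling
        elif t == 1:
            out += rng.uniform(-0.1, 0.1, size=3)         # global shift
        elif t == 2:
            theta = rng.uniform(0.0, 2.0 * np.pi)         # rotate about z
            c, s = np.cos(theta), np.sin(theta)
            R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
            out = out @ R.T
        else:
            out += rng.normal(0.0, 0.01, size=out.shape)  # per-point jitter
    return out
```

Chaining small, independently sampled transforms this way yields a combinatorially large space of augmentations while each individual step stays mild enough to keep the shape recognizable.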