Point Cloud Dataset Distillation

Authors: Deyu Bo, Xinchao Wang

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments validate the effectiveness of DD3D in shape classification and part segmentation tasks across diverse scenarios, such as cross-architecture and cross-resolution settings. [...] 5. Experiments: We benchmark our method on two fundamental tasks of point cloud analysis: shape classification (Section 5.1) and part segmentation (Section 5.2), followed by a series of analyses, including generalization (Section 5.3), ablation (Section 5.4), and visualization (Section 5.5).
Researcher Affiliation | Academia | National University of Singapore. Correspondence to: Xinchao Wang <EMAIL>.
Pseudocode | Yes | Algorithm 1: DD3D for part segmentation
Open Source Code | No | A recent work¹ also applies GM to point cloud data. However, neither of them considers the orientation and resolution issues. [...] ¹https://github.com/kghandour/dd3d Explanation: The provided link refers to the cited "recent work", which is distinct from the authors' own method (DD3D) described in this paper, making it ambiguous whether this is the source code for the current paper's methodology.
Open Datasets | Yes | Datasets. We employ three datasets of different scales for the shape classification task: (i) ScanObjectNN (OBJ_BG) (Uy et al., 2019) is the smallest dataset but consists of real-world data, which is challenging to distill. (ii) ModelNet40 (Wu et al., 2015) is a larger synthetic dataset generated from CAD models. (iii) MVPNet (Yu et al., 2023) is the largest dataset, containing 87K point clouds scanned from real-world videos. We use its subset MVPNet100, which includes data from the 100 most populous categories, to alleviate the influence of the long-tail distribution, similar to the CIFAR-100 dataset. For the part segmentation task, we follow Qi et al. (2017a) and choose the ShapeNet-part (Yi et al., 2016) dataset for evaluation. All the datasets use the standard data splits, and their detailed statistics can be found in Appendix C. [...] Appendix C. Details of Datasets [...] ScanObjectNN: https://github.com/feiran-l/rotation-invariant-pointcloud-analysis ModelNet40: http://modelnet.cs.princeton.edu/ModelNet40.zip MVPNet: https://github.com/GAP-LAB-CUHK-SZ/MVImgNet ShapeNet: https://github.com/feiran-l/rotation-invariant-pointcloud-analysis
Dataset Splits | Yes | All the datasets use the standard data splits, and their detailed statistics can be found in Appendix C. [...] Table 6: Details of datasets [...] # Training Samples [...] # Validation Samples
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) were mentioned in the paper.
Software Dependencies | No | The PyTorch code is shown in Algorithm 2, where some details are highlighted. [...] import torch [...] import SIREN. Explanation: While PyTorch and SIREN are mentioned, specific version numbers for these software components are not provided.
Experiment Setup | Yes | Appendix D. Hyperparameters: The hyperparameters of baselines and DD3D are listed in Tables 7 and 8, respectively. [...] Table 7: Hyperparameters used for Data Synthesis. Optimizer: Adam; Initial LR: 0.001; Batch Size: 32; Iterations: 200; Weight Decay: 0.0005; Augmentation: Scale, Jitter, Rotate; Scheduler: StepLR (Decay 0.1 / 100 iter) [...] Table 8: Hyperparameters used for Validation. Optimizer: Adam; Initial LR: 0.001; Batch Size: 8; Epochs: 200; Weight Decay: 0.0005; Augmentation: Scale, Jitter, Rotate; Scheduler: StepLR (Decay 0.1 / 100 epoch)
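The data-synthesis hyperparameters quoted above (Table 7) map directly onto a standard PyTorch optimizer/scheduler setup. The sketch below is a minimal illustration of that configuration only; the synthetic point-cloud tensor and the squared-norm loss are placeholders of our own, not the paper's DD3D objective or its SIREN-based parameterization.

```python
import torch

# Hypothetical stand-in for the learnable synthetic data: a batch of
# 32 point clouds with 1024 points each (batch size from Table 7;
# point count is an assumption).
synthetic_points = torch.randn(32, 1024, 3, requires_grad=True)

# Table 7: Adam, initial LR 0.001, weight decay 0.0005,
# StepLR decaying by 0.1 every 100 iterations, 200 iterations total.
optimizer = torch.optim.Adam([synthetic_points], lr=0.001, weight_decay=0.0005)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.1)

for iteration in range(200):
    optimizer.zero_grad()
    # Placeholder objective; the paper's actual loss matches statistics
    # between real and synthetic point clouds and is not reproduced here.
    loss = synthetic_points.pow(2).mean()
    loss.backward()
    optimizer.step()
    scheduler.step()
```

With this schedule the learning rate is 0.001 for iterations 0-99, 0.0001 for 100-199, and 0.00001 after the final scheduler step.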