Dual Manifold Regularization Steered Robust Representation Learning for Point Cloud Analysis

Authors: Jian Bi, Qianliang Wu, Jianjun Qian, Lei Luo, Jian Yang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experimental results show that our method outperforms traditional supervised learning and single-manifold regularization techniques in point cloud analysis. Specifically, for shape classification, DMR achieves a new state-of-the-art (SOTA) performance with 94.8% Overall Accuracy (OA) on ModelNet40 and 90.7% OA on ScanObjectNN, surpassing the recent SOTA model without increasing the baseline parameters."
Researcher Affiliation | Academia | Jian Bi, Qianliang Wu, Jianjun Qian, Lei Luo*, Jian Yang*; PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, School of Computer Science and Engineering, Nanjing University of Science and Technology, China
Pseudocode | No | The paper describes the method using prose and mathematical equations, Eq. (1) through Eq. (17), but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "More additional ablation experiments and code are in the supplementary materials."
Open Datasets | Yes | "Our evaluations utilize three datasets: ModelNet40 (Wu et al. 2015), ScanObjectNN (Uy et al. 2019), and ShapeNetPart (Yi et al. 2016) to demonstrate our DMR effectiveness."
Dataset Splits | Yes | "ModelNet40 (Wu et al. 2015) is a commonly used point cloud classification dataset, which has 40 object categories containing 9843 training models and 2468 test models."
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, memory amounts, or other machine specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers, such as Python 3.8 or CPLEX 12.4) needed to replicate the experiment.
Experiment Setup | No | The paper mentions using PointNet++, DGCNN, and PointMLP as backbones but does not provide specific hyperparameters (e.g., learning rate, batch size, or number of epochs) or other concrete experimental-setup details in the main text.