PointDGMamba: Domain Generalization of Point Cloud Classification via Generalized State Space Model

Authors: Hao Yang, Qianyu Zhou, Haijia Sun, Xiangtai Li, Fengqi Liu, Xuequan Lu, Lizhuang Ma, Shuicheng Yan

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate the effectiveness and state-of-the-art performance of PointDGMamba. ... Experiments / Experiment Settings / Implementation. ... Benchmark. ... PointDG-3to1 Benchmark ... Comparisons to the State-of-the-art Methods ... Ablation Study ... Visualization and Analysis / Feature Visualization.
Researcher Affiliation | Collaboration | 1. Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China; 2. School of Information Management, Nanjing University, Nanjing, China; 3. Skywork AI, Singapore; 4. Department of Computer Science and Software Engineering, The University of Western Australia, Australia; 5. Nanyang Technological University, Singapore
Pseudocode | No | The paper describes methods through textual descriptions and mathematical formulations, such as in the 'Masked Sequence Denoising', 'Sequence-wise Cross-domain Feature Aggregation', and 'Dual-level Domain Scanning' sections, but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks or code-like formatted procedures.
Open Source Code | Yes | Code: https://github.com/yxltya/PointDGMamba
Open Datasets | Yes | To evaluate our method, we use the widely-used PointDA-10 (Qin et al. 2019) benchmark, which consists of ModelNet-10 (M), ShapeNet-10 (S), and ScanNet-10 (S*)... The PointDG-3to1 benchmark includes four sub-datasets: ModelNet-5 (A), ScanNet-5 (B), ShapeNet-5 (C), and 3D-FUTURE-Completion (D). ... The 3D-FUTURE-Completion (Liu et al. 2024a) was generated from the 3D-FUTURE (Fu et al. 2021) dataset...
Dataset Splits | Yes | In all experiments, both the training and testing sets of the source domains are used, while the target domain only uses the testing set. ... Following the common practice of DG, the whole source domains are used for training (both their training and testing sets), and the target domain's testing set is used for evaluation.
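The leave-one-domain-out protocol quoted above can be sketched as follows. This is an illustrative sketch, not the authors' code: the function name and the (domain, split) tuple layout are assumptions; only the rule itself (sources contribute train + test sets, the target contributes its test set only) comes from the paper.

```python
def build_dg_splits(domains, target):
    """Leave-one-domain-out DG split: every source domain contributes
    both its training and testing sets to the training pool, while the
    held-out target domain is evaluated on its testing set only."""
    sources = [d for d in domains if d != target]
    train_pool = [(d, split) for d in sources for split in ("train", "test")]
    eval_pool = [(target, "test")]
    return train_pool, eval_pool
```

For PointDA-10, with ScanNet-10 (S*) as the target, this would place the ModelNet-10 and ShapeNet-10 train and test sets in the training pool and evaluate on the ScanNet-10 test set.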
Hardware Specification | No | The paper discusses training process details such as the optimizer (AdamW), learning rate schedule (cosine decay, warmup), epochs (200), and data augmentation (PointMix). However, it does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions the use of the AdamW optimizer and PointMix for data augmentation, but it does not specify any software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions) used in the implementation.
Experiment Setup | Yes | In the training process, we used the AdamW (Loshchilov and Hutter 2017) optimizer with an initial learning rate of 1e-4, a cosine decay schedule, and a weight decay of 1e-4. The number of epochs was set to 200. During the first 5 epochs, we employed a warmup mechanism to gradually increase the learning rate... Cross-Entropy loss to measure the difference between the model's predictions and the ground truths. ... We employ random resampling techniques during data loading to ensure that the number of same-class point clouds in different source domains is consistent.
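The reported schedule (initial learning rate 1e-4, warmup over the first 5 of 200 epochs, then cosine decay) can be sketched as a standalone function. The exact warmup and decay shapes below (linear warmup, cosine annealing to zero) are assumptions, since the quote only names "warmup" and "cosine decay"; in a PyTorch run these would typically be realized with `torch.optim.AdamW` plus `LinearLR` and `CosineAnnealingLR` schedulers.

```python
import math

# Hyperparameters as reported in the paper.
BASE_LR = 1e-4
WEIGHT_DECAY = 1e-4   # passed to AdamW in an actual training run
EPOCHS = 200
WARMUP_EPOCHS = 5

def lr_at_epoch(epoch: int) -> float:
    """Learning rate at a given epoch: linear warmup for the first
    WARMUP_EPOCHS epochs, then cosine decay to zero over the rest."""
    if epoch < WARMUP_EPOCHS:
        # Ramp from BASE_LR/WARMUP_EPOCHS up to BASE_LR.
        return BASE_LR * (epoch + 1) / WARMUP_EPOCHS
    progress = (epoch - WARMUP_EPOCHS) / (EPOCHS - WARMUP_EPOCHS)
    return 0.5 * BASE_LR * (1.0 + math.cos(math.pi * progress))
```

The schedule peaks at 1e-4 at the end of warmup and approaches zero by epoch 200, matching the stated cosine-decay behavior.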