Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
DAPoinTr: Domain Adaptive Point Transformer for Point Cloud Completion
Authors: Yinghui Li, Qianyu Zhou, Jingyu Gong, Ye Zhu, Richard Dazeley, Xinkui Zhao, Xuequan Lu
AAAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments with visualization on several domain adaptation benchmarks demonstrate the effectiveness and superiority of our DAPoinTr compared with state-of-the-art methods. |
| Researcher Affiliation | Academia | 1School of Information Technology, Deakin University, Australia; 2Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China; 3School of Computer Science and Technology, East China Normal University, Shanghai, China; 4School of Software Technology, Zhejiang University, Ningbo, China; 5Department of Computer Science and Software Engineering, The University of Western Australia, Australia. |
| Pseudocode | No | The paper describes the methodology using textual explanations and mathematical equations (e.g., Lencq, Ldecq, Lcons, Ltotal) but does not present any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code: https://github.com/Yinghui-Li-New/DAPoinTr |
| Open Datasets | Yes | Following protocols of previous UDA PCC methods (Chen, Chen, and Mitra 2019; Gong et al. 2022), we use 3D data from CRN (Wang, Ang Jr, and Lee 2020) as the source domain, and the datasets, including Real-World Scans, 3D-FUTURE (Fu et al. 2021) and ModelNet (Wu et al. 2015), as target domains. ... For the benchmark of Real-World Scans, we evaluate the performance on ScanNet (Dai et al. 2017), MatterPort3D (Chang et al. 2017), and KITTI (Geiger, Lenz, and Urtasun 2012) |
| Dataset Splits | No | The paper mentions resampling input scans to 2,048 points and using specific categories from datasets, as well as 'unpaired training and inference'. However, it does not provide explicit training/validation/test split percentages, sample counts for each split, or references to predefined splits with specific citations for reproduction of data partitioning beyond stating which datasets and categories are used for evaluation. |
| Hardware Specification | Yes | All experiments were conducted on an RTX 4090 with 64GB RAM. |
| Software Dependencies | No | The paper mentions using PoinTr as a backbone and FoldingNet/SPD decoder but does not specify version numbers for any software, libraries, or frameworks (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | Yes | For the training, we employ the initial learning rate of 2×10⁻⁴ and a weight decay of 5×10⁻⁵. The batch size is set to 2. To balance losses, weights of α, β, and γ are set as 0.025, 0.25, and 0.01 respectively. |
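The extracted setup above fixes the loss-balancing weights but not the exact form of the total objective. A minimal sketch of the weighted combination, assuming the total loss sums a base completion term with the three auxiliary terms (Lencq, Ldecq, Lcons) mentioned in the paper, each scaled by its reported weight:

```python
# Hypothetical sketch: loss weighting as reported in the paper.
# Weights alpha, beta, gamma come from the Experiment Setup row; the exact
# way the terms are combined is an assumption, not taken from the paper.
ALPHA, BETA, GAMMA = 0.025, 0.25, 0.01

def total_loss(l_completion: float, l_encq: float,
               l_decq: float, l_cons: float,
               alpha: float = ALPHA, beta: float = BETA,
               gamma: float = GAMMA) -> float:
    """Weighted sum of the completion loss and three auxiliary terms."""
    return l_completion + alpha * l_encq + beta * l_decq + gamma * l_cons
```

With all auxiliary terms at 1.0 and the completion loss at 0, the weighted contribution is 0.025 + 0.25 + 0.01 = 0.285, illustrating how small these terms are relative to the main objective.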