Unpaired Point Cloud Completion via Unbalanced Optimal Transport
Authors: Taekyung Lee, Jaemoo Choi, Jaewoong Choi, Myungjoo Kang
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments demonstrate that UOT-UPC achieves state-of-the-art performance on unpaired point cloud completion benchmarks across both single-category and multi-category settings. Furthermore, UOT-UPC exhibits particularly robust performance under class imbalance, where incomplete and complete distributions consist of multiple categories in different proportions. The UOT framework provides our model with inherent robustness against class imbalance, further enhancing its effectiveness in real-world scenarios. |
| Researcher Affiliation | Academia | ¹IPAI (Interdisciplinary Program in Artificial Intelligence, Seoul National University), ²Georgia Institute of Technology, ³Sungkyunkwan University, ⁴Department of Mathematical Sciences and RIMS, Seoul National University. |
| Pseudocode | Yes | Algorithm 1 Training algorithm of UOT-UPC |
| Open Source Code | Yes | The code is available at https://github.com/LEETK99/UOT-UPC. |
| Open Datasets | Yes | We assess our UOT-UPC model on the unpaired point cloud completion benchmarks under two settings: (1) Real Data Completion (USSPA dataset (Ma et al., 2023)) and (2) Synthetic Data Completion (PCN dataset (Yuan et al., 2018)). The experiment is conducted on paired completion data from ShapeNet (Chang et al., 2015). (See Appendix C.2 for qualitative results on the real-world KITTI dataset (Geiger et al., 2012)). |
| Dataset Splits | No | The paper does not explicitly provide dataset split percentages, sample counts for train/test/validation, or references to predefined splits with the detail needed for reproduction. It mentions using 'training data' and a 'test dataset' but does not specify how these splits are defined (e.g., an 80/10/10 split). |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. It only mentions general aspects of software and training configurations. |
| Software Dependencies | No | The paper mentions using 'Infocd as the cost function c', 'Adam optimizer', and 'Softplus activation' but does not specify version numbers for these libraries, Python, PyTorch/TensorFlow, or CUDA. A reproducible description requires specific version numbers for key software components. |
| Experiment Setup | Yes | We employ InfoCD as the cost function c with a coefficient value of τ = 0.044. For the hyperparameters of InfoCD, we set τ_InfoCD to 2 and λ_InfoCD to 1.0 × 10⁻⁷. ... We utilize the Adam optimizer with β1 = 0.95, β2 = 0.999 and a learning rate of 1.0 × 10⁻⁵ for both the potential vϕ and the completion model Tθ. The training is conducted with a batch size of 4. The maximum epoch of training is 480. |
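
The hyperparameters quoted in the Experiment Setup row can be collected into a small configuration sketch. This is plain Python with the reported values filled in; the field names and the `adam_kwargs` helper are illustrative assumptions, not identifiers from the paper or its repository:

```python
# Training hyperparameters as reported in the paper's Experiment Setup
# (field names are illustrative; values are taken from the quoted text).
TRAIN_CONFIG = {
    "cost_function": "InfoCD",
    "tau": 0.044,              # coefficient for the cost function c
    "tau_infocd": 2,           # InfoCD temperature hyperparameter
    "lambda_infocd": 1.0e-7,   # InfoCD weight hyperparameter
    "optimizer": "Adam",
    "betas": (0.95, 0.999),    # Adam (beta1, beta2)
    "lr": 1.0e-5,              # shared by potential v_phi and completion model T_theta
    "batch_size": 4,
    "max_epochs": 480,
}

def adam_kwargs(cfg: dict) -> dict:
    """Build keyword arguments for an Adam-style optimizer constructor
    (e.g., torch.optim.Adam(model.parameters(), **adam_kwargs(cfg)))."""
    return {"lr": cfg["lr"], "betas": cfg["betas"]}

print(adam_kwargs(TRAIN_CONFIG))
```

Keeping the values in one dictionary makes it easy to check a reimplementation against the reported setup, though without the software versions noted as missing above, exact reproduction is not guaranteed.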