ComPC: Completing a 3D Point Cloud with 2D Diffusion Priors
Authors: Tianxin Huang, Zhiwen Yan, Yuyang Zhao, Gim Hee Lee
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on both synthetic and real-world scanned point clouds demonstrate that our approach outperforms existing methods in completing a variety of objects. |
| Researcher Affiliation | Academia | Tianxin Huang, Zhiwen Yan, Yuyang Zhao, Gim Hee Lee; School of Computing, National University of Singapore |
| Pseudocode | Yes | Algorithm 1 Gaussian Surface Extraction |
| Open Source Code | No | Our project page is at https://tianxinhuang.github.io/projects/ComPC/. The URL provided is a project demonstration page, not a specific code repository. The paper does not explicitly state that the source code is available. |
| Open Datasets | Yes | For synthetic data, we sample partial point clouds by sampling from various viewpoints around completely modeled objects from established sources (Krishnamurthy & Levoy, 1996; De Carlo et al., 2003; Praun et al., 2000; Lipman et al., 2008). For real scans, we use Redwood (Choi et al., 2016) following SDS-complete (Kasten et al., 2024). Comparisons on ShapeNet (Chang et al., 2015) and KITTI (Geiger et al., 2013) are presented in Appendix A. |
| Dataset Splits | No | The paper describes how partial point clouds were generated for testing (e.g., 'sampling from various viewpoints around completely modeled objects' for synthetic data, 'Single scans are used as partial input, while the ground truths are adopted by composing multiple scans' for Redwood, and 'By merging 1, 3, and 7 consecutive depth maps, we generate partial point clouds with different levels of incompleteness'). However, it does not specify explicit training/validation/test splits with percentages, sample counts, or citations to predefined splits for a model training process, as this is a test-time framework. |
| Hardware Specification | Yes | Our experiments are conducted on RTX A6000/A5000 GPUs, with PyTorch 1.12 and CUDA 11.6. For instance, completing a point cloud from the Redwood dataset takes approximately 15 minutes with our method on an RTX A6000 GPU. |
| Software Dependencies | Yes | Our experiments are conducted on RTX A6000/A5000 GPU, with PyTorch 1.12 and CUDA 11.6. |
| Experiment Setup | Yes | In Table 5, we provide detailed information on the hyper-parameters discussed in Sec. 3. Table 5 lists: loss weights w0–w3 = 1e-3, 1e3, 1e2, 0.1; δ, σ0, σn = 0.01, 0.005, 0.05; iterations = 1000 (ZFC), 5000 (PCE). |
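
For anyone attempting a reproduction, the reported settings from Table 5 can be gathered into a single config. This is only an illustrative sketch: the grouping of w0–w3 as loss weights and the variable names below are assumptions, since the exact role of each symbol is defined in Sec. 3 of the paper.

```python
# Hyper-parameters as reported in Table 5 of the paper (Sec. 3).
# NOTE: key names and the "loss weights" grouping are assumptions
# made for illustration; consult the paper for each symbol's role.
HYPERPARAMS = {
    "w0": 1e-3, "w1": 1e3, "w2": 1e2, "w3": 0.1,  # weights w0..w3
    "delta": 0.01,                                 # δ
    "sigma_0": 0.005,                              # σ0
    "sigma_n": 0.05,                               # σn
    "iterations": {"ZFC": 1000, "PCE": 5000},      # per-stage budgets
}

def total_iterations(cfg):
    """Total optimization steps across both stages (ZFC + PCE)."""
    return sum(cfg["iterations"].values())

print(total_iterations(HYPERPARAMS))  # 6000
```

Keeping the per-stage iteration counts in one dictionary makes it easy to check that a reproduction run matches the paper's reported budget before starting a multi-hour experiment.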