Position-Aware Guided Point Cloud Completion with CLIP Model

Authors: Feng Zhou, Qi Zhang, Ju Dai, Lei Li, Qing Fan, Junliang Xing

AAAI 2025

Reproducibility Assessment (each entry lists the variable, its result, and the LLM response):
Research Type: Experimental. "Extensive quantitative and qualitative experiments demonstrate that our method outperforms state-of-the-art point cloud completion methods." The paper includes sections titled Experiments, Datasets and Evaluation Metrics, and Ablation Study.
Researcher Affiliation: Collaboration. 1) North China University of Technology, Beijing, China; 2) Peng Cheng Laboratory, Shenzhen, China; 3) University of Copenhagen, Copenhagen, Denmark; 4) University of Washington, Washington, USA; 5) Skywork AI, Beijing, China; 6) Tsinghua University, Beijing, China.
Pseudocode: No. The paper describes its methods in paragraph form and through architectural diagrams but contains no explicitly labeled pseudocode or algorithm blocks.
Open Source Code: No. The paper contains no explicit statement about releasing source code and provides no links to code repositories.
Open Datasets: Yes. "PCN: The PCN dataset (Yuan et al. 2018) is a subset of the ShapeNet dataset (Chang et al. 2015)... MVP: The MVP dataset consists of 16 categories of high-quality pairs of partial and complete point clouds for training and testing..." The paper also uses the KITTI dataset (Geiger et al. 2013).
Dataset Splits: Yes. "The dataset is partitioned similarly to PCN to ensure a fair comparison of our method with other methods." Additionally, following prior work, the sampled points are down-sampled to a standardized size of 2,048 points for training.
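The fixed-size standardization mentioned in that quote can be sketched as follows. This is a minimal illustration, not the authors' code: the paper does not state which sampling strategy is used, so plain random sampling (with replacement-padding for small clouds) is assumed here.

```python
import numpy as np

def downsample(points: np.ndarray, n: int = 2048, seed: int = 0) -> np.ndarray:
    """Down-sample an (N, 3) point cloud to a fixed size of n points.

    Random sampling is an assumption; farthest point sampling is another
    common choice in point cloud completion pipelines.
    """
    rng = np.random.default_rng(seed)
    if points.shape[0] >= n:
        # Sample n distinct points when the cloud is large enough.
        idx = rng.choice(points.shape[0], n, replace=False)
    else:
        # Pad by sampling with replacement when the cloud is too small.
        idx = rng.choice(points.shape[0], n, replace=True)
    return points[idx]

cloud = np.random.rand(5000, 3)
print(downsample(cloud).shape)  # (2048, 3)
```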
Hardware Specification: Yes. "The text generation is implemented on a single NVIDIA RTX 4090."
Software Dependencies: No. The paper mentions models such as VIT-16 and CLIP and notes that experiments are conducted under unified settings, but it does not specify software dependencies with version numbers (e.g., Python, PyTorch, or CUDA versions).
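The kind of version information this checklist item looks for can be captured mechanically. Below is a minimal standard-library sketch; the package names in the default list are illustrative guesses for this paper's stack, not dependencies the paper actually declares.

```python
import platform
from importlib import metadata

def environment_report(packages=("torch", "numpy", "clip")):
    """Collect interpreter and package versions for a reproducibility statement."""
    report = {"python": platform.python_version()}
    for name in packages:
        try:
            report[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            report[name] = "not installed"
    return report

print(environment_report())
```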
Experiment Setup: No. The paper states, "All experiments are conducted under unified settings on the PCN dataset," and describes one detail of the position-aware module: "We randomly select one block in each training iteration to learn its parameters while setting the others to a default value of 1." However, it lacks key hyperparameters such as learning rate, batch size, and optimizer settings.