Hyperbolic-Constraint Point Cloud Reconstruction from Single RGB-D Images

Authors: Wenrui Li, Zhe Yang, Wei Han, Hengyu Man, Xingtao Wang, Xiaopeng Fan

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conducted object reconstruction experiments primarily on the CO3D-v2 dataset (Reizenstein et al. 2021). To evaluate the performance of our HcPCR model, we compared it with the latest baseline models considered the best in this field. The evaluation metrics include the L1 distances for accuracy (Acc) and completeness (Comp), their sum as the Chamfer distance (CD), precision (Prec), recall, and the F-score, which is the harmonic mean of precision and recall. Ablation studies highlight the significance of our model and its individual components.
Researcher Affiliation | Academia | ¹Harbin Institute of Technology; ²University of Electronic Science and Technology of China; ³Harbin Institute of Technology Suzhou Research Institute; ⁴Peng Cheng Laboratory. EMAIL; EMAIL; EMAIL; EMAIL; EMAIL; EMAIL
Pseudocode | No | The paper describes the methodology using text and mathematical equations, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper neither contains an explicit statement about releasing source code nor provides a link to a code repository.
Open Datasets | Yes | We conducted object reconstruction experiments primarily on the CO3D-v2 dataset (Reizenstein et al. 2021).
Dataset Splits | Yes | We followed the NU-MCC setup, using 10 categories for evaluation and the remaining 41 for training.
Hardware Specification | Yes | The HcPCR model was trained on an Nvidia A100 GPU.
Software Dependencies | No | The paper mentions using the Adam optimizer and ViT but does not specify software names with version numbers for the libraries, frameworks, or programming languages used in the implementation.
Experiment Setup | Yes | For HcPCR, we set the curvature k to -0.14, α to 2.0, γ0 to 1000, and ε to 4. The initial learning rate was set to 0.0001. We used the Adam optimizer and trained for 100 epochs.
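The evaluation metrics quoted above (one-way L1 distances for Acc and Comp, their sum as CD, and precision/recall with their harmonic mean as the F-score) can be sketched as follows. The distance threshold `tau` used for precision and recall is an assumption for illustration; the report does not quote the paper's value.

```python
import numpy as np

def chamfer_metrics(pred, gt, tau=0.05):
    """Reconstruction metrics as described in the evaluation: Acc/Comp are
    one-way mean L1 nearest-neighbour distances, CD is their sum, and
    Prec/Recall count points within threshold tau (tau is illustrative)."""
    # Pairwise L1 distances between predicted and ground-truth points, shape (P, G).
    d = np.abs(pred[:, None, :] - gt[None, :, :]).sum(axis=-1)
    pred_to_gt = d.min(axis=1)   # nearest GT point for each predicted point
    gt_to_pred = d.min(axis=0)   # nearest predicted point for each GT point

    acc = pred_to_gt.mean()      # accuracy: prediction -> ground truth
    comp = gt_to_pred.mean()     # completeness: ground truth -> prediction
    cd = acc + comp              # Chamfer distance (L1 variant)

    prec = (pred_to_gt < tau).mean()   # fraction of predicted points near GT
    rec = (gt_to_pred < tau).mean()    # fraction of GT points near prediction
    f = 2 * prec * rec / (prec + rec) if (prec + rec) > 0 else 0.0
    return {"Acc": acc, "Comp": comp, "CD": cd,
            "Prec": prec, "Recall": rec, "F-score": f}
```

For identical point clouds this yields Acc = Comp = CD = 0 and Prec = Recall = F-score = 1, which is a quick sanity check on an implementation.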
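The experiment setup fixes a curvature k = -0.14. The paper's actual hyperbolic constraint is not reproduced in this report; purely as background, the standard Poincaré-ball geodesic distance under a negative curvature k (with c = -k) is sketched below. All function names here are illustrative, not taken from the paper.

```python
import numpy as np

K = -0.14   # curvature from the reported experiment setup
C = -K      # c = |k| used in the Poincare-ball formulas

def mobius_add(x, y, c=C):
    """Mobius addition on the Poincare ball of curvature -c."""
    xy, x2, y2 = np.dot(x, y), np.dot(x, x), np.dot(y, y)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + c ** 2 * x2 * y2
    return num / den

def hyperbolic_dist(x, y, c=C):
    """Geodesic distance d(x, y) = (2/sqrt(c)) * artanh(sqrt(c) * ||(-x) + y||_Mobius)."""
    diff = mobius_add(-x, y, c)
    return (2.0 / np.sqrt(c)) * np.arctanh(np.sqrt(c) * np.linalg.norm(diff))
```

From the origin this reduces to d(0, x) = (2/√c)·artanh(√c·‖x‖), so distances grow much faster than Euclidean near the ball boundary, which is what makes such embeddings attractive for hierarchy-aware constraints.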