LPCG: A Self-conditional Architecture for Labeled Point Cloud Generation
Authors: Dongshuo Huang, Xiaoshui Huang, Chengdong Zhang, Yilei Shi
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on the ShapeNet dataset demonstrate that LPCG achieves state-of-the-art performance for single-class generation. Our experimental results show that the accuracy of our generated label annotations reaches around 97.44% for a two-class generation task. |
| Researcher Affiliation | Academia | ¹ School of Software, Northwestern Polytechnical University; ² School of Public Health, Shanghai Jiao Tong University School of Medicine; ³ School of Computer and Artificial Intelligence, Huaihua University |
| Pseudocode | No | The paper describes the methodology in prose and does not include any clearly labeled pseudocode or algorithm blocks. The methods are explained in sections such as 'Method Architecture Overview', 'Feature Extractor Module', 'Feature Diffusion Modules', and 'Generation Module'. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology. There are no explicit statements about code release, repository links, or mentions of code in supplementary materials. |
| Open Datasets | Yes | Following previous methods (Yang et al. 2019; Mo et al. 2024; Vahdat et al. 2022; Zhou, Du, and Wu 2021), we used ShapeNet (Chang et al. 2015) as our dataset. |
| Dataset Splits | No | The paper mentions using ShapeNet (Chang et al. 2015) and describes sampling points from it ('sampled 1024 points for each point cloud of ShapeNet' and 'sampled 2048 points for each point cloud of ShapeNet'). However, it does not explicitly state how the ShapeNet dataset was split into training, validation, and test sets (e.g., specific percentages, sample counts, or predefined split references). |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running the experiments. It does not mention any specific machines, cloud resources, or computing environments with detailed specifications. |
| Software Dependencies | No | The paper mentions several software components and models used, such as Point-MAE, CLIP, DiT-3D, and Polyscope. However, it does not provide specific version numbers for any of these software dependencies, making replication difficult without explicit versioning. |
| Experiment Setup | Yes | The paper states: 'During the feature extractor pre-training phase, we referred to the processing procedure of Point-MAE and sampled 1024 points for each point cloud of ShapeNet. In the training phase of the point cloud generator, we referred to the data processing procedure of DiT-3D and sampled 2048 points for each point cloud of ShapeNet.' It also provides insights into training duration: 'We evaluated the final generated results by adjusting the training epochs of feature diffusion. As shown in Table 6, our generation quality improves when the number of training epochs increases.' |
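The setup row quotes two point budgets (1024 points for feature-extractor pre-training, 2048 for generator training). The paper does not publish code, so the exact sampler is unknown; the sketch below is a minimal, hypothetical stand-in using uniform random sampling with NumPy (Point-MAE and DiT-3D pipelines typically use farthest-point or random sampling) to illustrate what the described preprocessing step amounts to.

```python
import numpy as np

def sample_points(points: np.ndarray, n: int, seed: int = 0) -> np.ndarray:
    """Uniformly sample n points from a point cloud of shape (M, 3).

    Samples with replacement only if the cloud has fewer than n points.
    This is an illustrative stand-in, not the paper's actual sampler.
    """
    rng = np.random.default_rng(seed)
    replace = points.shape[0] < n
    idx = rng.choice(points.shape[0], size=n, replace=replace)
    return points[idx]

# A synthetic cloud standing in for one ShapeNet shape.
cloud = np.random.rand(5000, 3)
pretrain_pts = sample_points(cloud, 1024)   # feature-extractor pre-training budget
generator_pts = sample_points(cloud, 2048)  # point cloud generator training budget
print(pretrain_pts.shape, generator_pts.shape)
```

Replicators who need bitwise-comparable results would instead follow the reference preprocessing of the Point-MAE and DiT-3D codebases, since sampling strategy (random vs. farthest-point) measurably affects downstream metrics.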