Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
UniPCGC: Towards Practical Point Cloud Geometry Compression via an Efficient Unified Approach
Authors: Kangli Wang, Wei Gao
AAAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental Results Experiment Setup Dataset. In our experiment, we use the ShapeNet dataset and the selected dense point clouds. ShapeNet (Chang et al. 2015) is a 3D object CAD model dataset, encompassing a subset known as ShapeNet-Core. ... Compared to G-PCC, we achieve a compression ratio (CR) gain of 19.6%. ... Table 1: Performance of lossless methods on the 8i VFB test set under the same training conditions. ... Figure 5: Performance comparison using rate-distortion curves. ... Ablation Studies Table 3 presents the ablation results of the proposed UELC. ... Table 4 is the ablation experiment of VRCM. |
| Researcher Affiliation | Academia | 1Guangdong Provincial Key Laboratory of Ultra High Definition Immersive Media Technology, Shenzhen Graduate School, Peking University, Shenzhen 518055, China 2Peng Cheng Laboratory, Shenzhen, China EMAIL, EMAIL |
| Pseudocode | No | The paper describes the methods UELC and VRCM through textual descriptions, numbered steps, and figures, but it does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing the code for the described methodology, nor does it include a direct link to a code repository. It mentions 'Open Point Cloud: An open-source algorithm library of deep learning based point cloud compression.' (Gao et al. 2022) in related works, but this refers to a previous work, not the current paper's implementation. |
| Open Datasets | Yes | Dataset. In our experiment, we use the ShapeNet dataset and the selected dense point clouds. ShapeNet (Chang et al. 2015) is a 3D object CAD model dataset... For dense point clouds, we choose from the MPEG PCC dataset... We also employ the 8i VFB and MVUB sequences for training and 8i VFB sequences for testing to align with popular lossless compression methods. |
| Dataset Splits | Yes | We use the processed ShapeNet dataset for training of the proposed UniPCGC. For dense point clouds, we choose from the MPEG PCC dataset, including longdress 1300, redandblack 1550, soldier 0690, loot 1200, queen 0200, basketball player 0200, and dancer 0001. We select these point clouds as the test set for lossy and lossless compression. We also employ the 8i VFB and MVUB sequences for training and 8i VFB sequences for testing to align with popular lossless compression methods. |
| Hardware Specification | Yes | We use Minkowski Engine (Choy, Gwak, and Savarese 2019) and PyTorch to build our model, and perform UELC and VRCM training on an RTX 4080 GPU and Intel i5-13600KF CPU. ... UniPCGC, G-PCC v23 and SparsePCGC are tested using an RTX 4080 GPU and Intel i5-13600KF CPU for a fair runtime comparison (marked with *). |
| Software Dependencies | No | We use Minkowski Engine (Choy, Gwak, and Savarese 2019) and PyTorch to build our model... The paper does not specify version numbers for Minkowski Engine or PyTorch. |
| Experiment Setup | Yes | Training Strategies. ... For UELC, it is trained for lossless compression using the loss function in equation (7). We initialize the learning rate to 8 × 10⁻⁴ and gradually decrease it to 5 × 10⁻⁵ during training. We use the Adam optimizer and train for 30 epochs with a batch size of 8. For VRCM, it is trained for lossy compression using the loss functions in equations (8) and (9). We initialize the learning rate to 8 × 10⁻⁴ and gradually decrease it to 1 × 10⁻⁵ during training. We use the Adam optimizer and train for 20 epochs with a batch size of 8. We set λ to 0.3 and 3.0. |
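The quoted training schedules (learning rate decayed from 8 × 10⁻⁴ to 5 × 10⁻⁵ over 30 epochs for UELC, and to 1 × 10⁻⁵ over 20 epochs for VRCM) can be sketched as per-epoch learning rates. The paper does not state the decay shape, so geometric (exponential) decay is an assumption here; the `lr_schedule` helper is ours, not from the paper.

```python
# Hedged sketch of the reported learning-rate schedules. The paper only gives
# start/end learning rates and epoch counts; geometric decay is assumed.

def lr_schedule(lr_start: float, lr_end: float, num_epochs: int) -> list[float]:
    """Per-epoch learning rates decaying geometrically from lr_start to lr_end."""
    gamma = (lr_end / lr_start) ** (1.0 / (num_epochs - 1))
    return [lr_start * gamma ** e for e in range(num_epochs)]

uelc_lrs = lr_schedule(8e-4, 5e-5, 30)  # UELC: lossless, 30 epochs, batch size 8
vrcm_lrs = lr_schedule(8e-4, 1e-5, 20)  # VRCM: lossy, 20 epochs, batch size 8
```

In a PyTorch setup this would correspond to `torch.optim.Adam` combined with an `ExponentialLR` scheduler whose `gamma` is computed as above.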