You Should Learn to Stop Denoising on Point Clouds in Advance

Authors: Chuchen Guo, Weijie Zhou, Zheng Liu, Ying He

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments and evaluations demonstrate that our method outperforms the state-of-the-art both qualitatively and quantitatively. Experiments: Datasets and Settings. Training dataset. Our framework's training is conducted utilizing the PUNet dataset (Yu et al. 2018). The training dataset comprises 40 meshes, from which point clouds are derived at 10K, 30K, and 50K resolutions, totaling 120 training point clouds. Testing datasets. We compare our method with competing approaches on the PUNet dataset (Yu et al. 2018) at 10K and 50K resolutions, yielding 40 point clouds. Implementation. Our method, developed in PyTorch, is trained on an NVIDIA GeForce RTX 3090 GPU, employing the Adam optimizer with a learning rate of 1×10^-4. Quantitative Results. We evaluate our method and the competing approaches on synthetic data (Yu et al. 2018), as shown in Table 1. Ablation Studies. To confirm the effectiveness of our early stopping denoising strategy, we conduct an experiment that removed a key component, the adaptive classifier (AC), from our Adaptive Stopping Denoising Network (ASDN), and compared its performance to the full ASDN setup.
Researcher Affiliation | Academia | 1 School of Computer Science, China University of Geosciences (Wuhan); 2 School of Computer Science and Engineering, Nanyang Technological University. EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode | Yes | Algorithm 1: Point Cloud Entropy Calculation
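The row above notes that the paper provides pseudocode for point cloud entropy (Algorithm 1), but that algorithm is not quoted here. As an illustration only, a common way to measure entropy on a point cloud is the Shannon entropy of normalized local covariance eigenvalues; the function names and this eigenvalue-based formulation below are assumptions for the sketch, not the paper's actual Algorithm 1.

```python
import numpy as np

def local_eigenentropy(points, k=16):
    # Shannon entropy of normalized covariance eigenvalues per point;
    # higher values indicate less structured (noisier) neighborhoods.
    entropies = np.empty(len(points))
    for i, p in enumerate(points):
        # k nearest neighbors by brute force (fine for small clouds).
        nbrs = points[np.argsort(np.linalg.norm(points - p, axis=1))[:k]]
        cov = np.cov(nbrs.T)
        eig = np.clip(np.linalg.eigvalsh(cov), 1e-12, None)
        w = eig / eig.sum()
        entropies[i] = -(w * np.log(w)).sum()
    return entropies

def cloud_entropy(points, k=16):
    # Aggregate per-point entropies into one scalar for the whole cloud.
    return float(local_eigenentropy(points, k).mean())
```

Under this formulation, a flat patch (one near-zero eigenvalue) scores lower entropy than isotropic noise, which is the kind of signal an early-stopping criterion could monitor.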
Open Source Code | Yes | Code: https://github.com/git-guocc/ASDN
Open Datasets | Yes | Training dataset. Our framework's training is conducted utilizing the PUNet dataset (Yu et al. 2018). ... Testing datasets. We compare our method with competing approaches on the PUNet dataset (Yu et al. 2018) at 10K and 50K resolutions, yielding 40 point clouds. We also employ the real-scanned Kinect dataset (Wang, Liu, and Tong 2016) to assess the generalization of our method. The Paris-rue-Madame dataset (Serna et al. 2014), featuring real Paris street scenes scanned with a 3D mobile laser scanner, was evaluated for its real-world noise and serves as a solid benchmark for assessing our method's performance on actual data.
Dataset Splits | No | The training dataset comprises 40 meshes, from which point clouds are derived at 10K, 30K, and 50K resolutions, totaling 120 training point clouds. Testing datasets. We compare our method with competing approaches on the PUNet dataset (Yu et al. 2018) at 10K and 50K resolutions, yielding 40 point clouds.
Hardware Specification | Yes | Our method, developed in PyTorch, is trained on an NVIDIA GeForce RTX 3090 GPU, employing ...
Software Dependencies | No | Our method, developed in PyTorch, is trained on an NVIDIA GeForce RTX 3090 GPU, employing the Adam optimizer with a learning rate of 1×10^-4.
Experiment Setup | Yes | We set n = 1000 in our ASDN. ... λ ranges from 2 to L, with L being the total number of layers in ASDN, set to 4 in our configuration. ... Our method, developed in PyTorch, is trained on an NVIDIA GeForce RTX 3090 GPU, employing the Adam optimizer with a learning rate of 1×10^-4. Prior to training, point clouds are normalized to a unit sphere. Then, we employ the FPS and KNN algorithms to sample 1K-sized patches. ... The training process commenced with a pre-training phase for the adaptive classifier, followed by a joint training phase with the ASDN network, leveraging the pre-trained classifier.
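The setup row above describes normalizing each cloud to a unit sphere, then using farthest point sampling (FPS) and KNN to extract 1K-point patches. A minimal NumPy sketch of that preprocessing pipeline, with brute-force distance computation; the function names here are ours, not taken from the released code:

```python
import numpy as np

def normalize_to_unit_sphere(points):
    # Center the cloud and scale so the farthest point lies on the unit sphere.
    centered = points - points.mean(axis=0)
    return centered / np.linalg.norm(centered, axis=1).max()

def farthest_point_sampling(points, n_samples):
    # Greedily pick points that maximize the minimum distance to those chosen so far.
    selected = [0]
    dists = np.linalg.norm(points - points[0], axis=1)
    for _ in range(n_samples - 1):
        idx = int(dists.argmax())
        selected.append(idx)
        dists = np.minimum(dists, np.linalg.norm(points - points[idx], axis=1))
    return np.array(selected)

def knn_patches(points, centers, k):
    # For each sampled center, gather its k nearest neighbors as one patch.
    patches = []
    for c in points[centers]:
        order = np.argsort(np.linalg.norm(points - c, axis=1))
        patches.append(points[order[:k]])
    return np.stack(patches)
```

For the paper's stated configuration, k would be 1000 (the "1K-sized patches"); a KD-tree would replace the brute-force sort for the 50K-resolution clouds.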