Stochasticity-aware No-Reference Point Cloud Quality Assessment

Authors: Songlin Fan, Wei Gao, Zhineng Chen, Ge Li, Guoqing Liu, Qicheng Wang

IJCAI 2025

Reproducibility checklist: Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments indicate that our approach outperforms previous cutting-edge methods by a large margin and exhibits gratifying cross-dataset robustness. Codes are available at https://git.openi.org.cn/OpenPointCloud/nrpcqa." Section 4, titled "Experiments", details the implementation, datasets, evaluation metrics, comparisons, generalization analyses, and ablation studies, all of which involve empirical validation.
Researcher Affiliation | Collaboration | (1) Guangdong Provincial Key Laboratory of Ultra High Definition Immersive Media Technology, Shenzhen Graduate School, Peking University; (2) School of Computer Science, Fudan University; (3) China Mobile Shanghai ICT Co., Ltd; (4) Youiia Innov Tech Co., Ltd. The affiliations include academic institutions (Peking University, Fudan University) and industry companies (China Mobile Shanghai ICT Co., Ltd; Youiia Innov Tech Co., Ltd), indicating a collaboration.
Pseudocode | No | The paper describes the proposed method in Section 3 and illustrates its architecture with figures (Figure 2, Figure 3, Figure 4) and textual descriptions, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | "Codes are available at https://git.openi.org.cn/OpenPointCloud/nrpcqa."
Open Datasets | Yes | "Our experiments utilize three widely-used PCQA datasets, including SJTU-PCQA [Yang et al., 2020a], WPC [Liu et al., 2022a], and WPC2.0 [Liu et al., 2021a]."
Dataset Splits | Yes | "Specifically, we employ 9-fold, 5-fold, and 4-fold cross-validation for SJTU-PCQA, WPC, and WPC2.0, respectively, resulting in an approximate 8:2 split between training and testing sets."
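The quoted k-fold protocol yields the reported ~8:2 train/test ratio because each fold holds out 1/k of the data. A minimal sketch of this fold construction (a hypothetical illustration over item indices, not the authors' code):

```python
# Sketch of k-fold cross-validation: partition item indices into k folds;
# each fold serves once as the test set while the rest form the training set.
def kfold_splits(n_items, k):
    indices = list(range(n_items))
    folds = [indices[i::k] for i in range(k)]  # round-robin assignment
    for i in range(k):
        test = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        yield train, test

# With 5 folds (as used for WPC), each split trains on 80% and tests on 20%.
for train, test in kfold_splits(100, 5):
    assert len(train) == 80 and len(test) == 20
```

With 9 folds (SJTU-PCQA) the held-out share is ~11%, and with 4 folds (WPC2.0) it is 25%, so "approximate 8:2" averages over the three protocols.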
Hardware Specification | Yes | "We implement our model on two NVIDIA RTX 3090 Ti GPUs with the PyTorch toolbox and initialize the ResNet50 [He et al., 2016] backbone in the QRG with parameters pre-trained on ImageNet, while other neural network parameters are randomly initialized."
Software Dependencies | No | "We implement our model on two NVIDIA RTX 3090 Ti GPUs with the PyTorch toolbox." The paper mentions PyTorch but does not specify a version number for it or any other key software libraries or solvers.
Experiment Setup | Yes | "We use the Adam optimizer with an initial learning rate of 2.5e-5 and betas set to [0.5, 0.999]. Our model is trained for a total of 200 epochs, and the learning rate is reduced by a factor of 0.5 when the training process reaches the halfway mark. We set the training batch size to 8, while convincing ablations demonstrate that the weighting term α = 0.4, to emphasize the disparity reduction between the training and testing stages, can obtain the best performance. The spatial resolution of point cloud projections is 480×480, and experiments reveal that the projection number Nv = 4 can achieve the best balance between prediction efficiency and accuracy. We take the dimension of the latent variable K1 = 3 and the channel size of intermediate features K2 = 32 for a light computation overhead."
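The quoted learning-rate schedule (initial rate 2.5e-5, halved at the halfway mark of 200 epochs) can be sketched as a simple step function; this is a reconstruction from the reported hyperparameters, not the authors' implementation:

```python
# Step schedule reconstructed from the reported setup: the base learning
# rate of 2.5e-5 holds for epochs 0..99, then is halved from epoch 100 on.
def learning_rate(epoch, base_lr=2.5e-5, total_epochs=200, factor=0.5):
    """Return the learning rate in effect at a given (0-indexed) epoch."""
    return base_lr if epoch < total_epochs // 2 else base_lr * factor

assert learning_rate(0) == 2.5e-5          # first half of training
assert learning_rate(100) == 2.5e-5 * 0.5  # halved at the halfway mark
```

In PyTorch this corresponds to a one-milestone step decay (e.g. `torch.optim.lr_scheduler.MultiStepLR` with `milestones=[100]`, `gamma=0.5`) attached to an Adam optimizer configured with `lr=2.5e-5` and `betas=(0.5, 0.999)`.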