HQGS: High-Quality Novel View Synthesis with Gaussian Splatting in Degraded Scenes
Authors: Xin Lin, Shi Luo, Xiaojun Shan, Xiaoyu Zhou, Chao Ren, Lu Qi, Ming-Hsuan Yang, Nuno Vasconcelos
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4 EXPERIMENTS We first describe the utilized datasets and present the implementation details. Next, we provide a comprehensive analysis of the experimental results, qualitatively and quantitatively. Finally, we conduct ablation studies to validate the effectiveness of the proposed modules and the robustness of our HQGS compared to other approaches. |
| Researcher Affiliation | Collaboration | Xin Lin1,2, Shi Luo3, Xiaojun Shan1, Xiaoyu Zhou4, Chao Ren3, Lu Qi2, Ming-Hsuan Yang5, Nuno Vasconcelos1; 1UCSD, 2Insta360 Research, 3Sichuan University, 4Peking University, 5UC Merced |
| Pseudocode | No | The paper describes the proposed method using narrative text, mathematical equations (e.g., Eq. 1-7), and figures, but it does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Source code and trained models are publicly available at: https://github.com/linxin0/HQGS. |
| Open Datasets | Yes | Datasets. We evaluate the proposed HQGS pipeline on two datasets: (1) The LLFF dataset (Mildenhall et al., 2019), which contains real-world images from eight distinct scenes, with each scene comprising 20 to 62 images. Of these, 1/4 are reserved for testing, while the remaining are used for training. (2) A synthetic dataset derived from the Blender scenes used in Deblur-NeRF (Ma et al., 2022), where 1/8 of the data is used for testing and the other 7/8 for training. |
| Dataset Splits | Yes | Datasets. We evaluate the proposed HQGS pipeline on two datasets: (1) The LLFF dataset (Mildenhall et al., 2019), which contains real-world images from eight distinct scenes, with each scene comprising 20 to 62 images. Of these, 1/4 are reserved for testing, while the remaining are used for training. (2) A synthetic dataset derived from the Blender scenes used in Deblur-NeRF (Ma et al., 2022), where 1/8 of the data is used for testing and the other 7/8 for training. |
| Hardware Specification | Yes | All experiments are conducted on a single NVIDIA GeForce RTX 3090 GPU. |
| Software Dependencies | No | Our implementation is based on the 3DGS (Kerbl et al., 2023) framework. This specifies a base framework but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions) to ensure reproducibility. |
| Experiment Setup | Yes | Implementation Details. Our implementation is based on the 3DGS (Kerbl et al., 2023) framework. The learning rate for the learnable parameters of 3D Gaussians follows the official settings, while the learning rate for the ESFG module is set to 1e-6. We evaluate our method using various metrics, including PSNR, SSIM, and LPIPS, following previous work (Zhou et al., 2023c; Kerbl et al., 2023; Feng et al., 2024). All experiments are conducted on a single NVIDIA GeForce RTX 3090 GPU. Following previous methods, we use the trained restoration network to get the target image for the results comparisons. The λ1 and λ2 in equation 7 are set to 1 and 5, respectively. ... all 3DGS-based models are trained for 50,000 iterations. |
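
The setup row above pins down a few reproducibility-relevant constants (λ1 = 1 and λ2 = 5 in the paper's Eq. 7, an ESFG learning rate of 1e-6, and 50,000 training iterations). A minimal sketch of how those constants might be wired together is below; the term names (`loss_a`, `loss_b`), the `TrainConfig` container, and the assumption that Eq. 7 is a two-term weighted sum are all illustrative guesses, not the paper's actual API or objective.

```python
from dataclasses import dataclass

@dataclass
class TrainConfig:
    """Hypothetical container for the constants quoted in the setup row."""
    esfg_lr: float = 1e-6        # learning rate for the ESFG module (per the paper)
    lambda1: float = 1.0         # λ1 in Eq. 7 (per the paper)
    lambda2: float = 5.0         # λ2 in Eq. 7 (per the paper)
    iterations: int = 50_000     # training iterations for all 3DGS-based models

def total_loss(loss_a: float, loss_b: float, cfg: TrainConfig) -> float:
    """Generic Eq.-7-style objective: a weighted sum of two loss terms.

    The paper does not spell out the terms in this table, so which
    losses λ1 and λ2 actually weight is an assumption here.
    """
    return cfg.lambda1 * loss_a + cfg.lambda2 * loss_b
```

With the defaults above, `total_loss(0.2, 0.1, TrainConfig())` evaluates to 0.2 + 0.5 = 0.7, showing that the second term is weighted five times more heavily.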