Lightweight Predictive 3D Gaussian Splats
Authors: Junli Cao, Vidit Goel, Chaoyang Wang, Anil Kag, Ju Hu, Sergei Korolev, Chenfanfu Jiang, Sergey Tulyakov, Jian Ren
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method using seven scenes from the Mip-NeRF 360 dataset (Barron et al., 2022b), two scenes from Tanks&Temples (Knapitsch et al., 2017), and two scenes from Deep Blending (Hedman et al., 2018). We use the widely adopted metrics PSNR, SSIM (Wang et al., 2004), and LPIPS (Zhang et al., 2018) to assess image reconstruction quality. We also report the storage size (in MB) for various methods along with their on-device capabilities. |
| Researcher Affiliation | Collaboration | Junli Cao¹,², Vidit Goel², Chaoyang Wang², Anil Kag², Ju Hu², Sergei Korolev², Chenfanfu Jiang¹, Sergey Tulyakov², Jian Ren²; ¹University of California, Los Angeles; ²Snap, Inc. |
| Pseudocode | Yes | Algorithm 1 (AABB Estimation and Contraction) and Algorithm 2 (View Frustum Culling) |
| Open Source Code | No | The paper does not provide explicit statements about open-source code release, a link to a code repository, or mention of code in supplementary materials. |
| Open Datasets | Yes | We evaluate our method using seven scenes from the Mip-NeRF 360 dataset (Barron et al., 2022b), two scenes from Tanks&Temples (Knapitsch et al., 2017), and two scenes from Deep Blending (Hedman et al., 2018). |
| Dataset Splits | No | The paper mentions using well-known datasets but does not explicitly provide details on how these datasets were split into training, validation, and test sets for their experiments. It only mentions downsampling for warm-up stages. |
| Hardware Specification | Yes | We benchmark the Gaussian-Splatting-based methods on iPhone 14 with our implementation of the mobile application. ... We report two large-scale complex scenes Bicycle and Garden from Barron et al. (2022a) on Nvidia A100... |
| Software Dependencies | No | The paper mentions using Instant-NGP and MLPs but does not specify version numbers for any software libraries, frameworks, or programming languages used in the implementation. |
| Experiment Setup | Yes | For the hash grid, we start with a learning rate of 2e-3 and end with a rate of 2e-5. For opacity, we start with 1e-3 and end with 2e-5. The scale and rotation parameters utilize a constant learning rate of 1e-4. Additionally, we maintain a constant learning rate of 2e-4 for the attention module. ... We set λ = 0.5 for all the experiments and train the model for 30K steps with a 7.5K-step warm-up stage. |
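The quality metrics quoted in the Research Type row (PSNR, SSIM, LPIPS) are standard; PSNR in particular is simple enough to verify directly. A minimal sketch (the function name and the `max_val` parameter are illustrative, not from the paper):

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10.0 * np.log10(max_val ** 2 / mse)

# Example: a constant 0.5 image against a black image has MSE 0.25,
# giving PSNR = 10 * log10(1 / 0.25) ≈ 6.02 dB.
img = np.full((4, 4), 0.5)
print(psnr(img, np.zeros((4, 4))))
```

SSIM and LPIPS are structurally and perceptually weighted and are typically taken from library implementations (e.g., scikit-image and the `lpips` package) rather than reimplemented.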
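The Pseudocode row names Algorithm 2, View Frustum Culling. The paper's exact algorithm is not reproduced here, but the standard building block it relies on can be sketched: extract the six clip-space planes from a view-projection matrix (Gribb–Hartmann style) and cull any Gaussian whose bounding sphere lies fully behind one plane. All function names below are illustrative assumptions:

```python
import numpy as np

def extract_frustum_planes(vp):
    """Six frustum planes (ax + by + cz + d >= 0 means inside) from a
    4x4 view-projection matrix, via row combinations of vp."""
    planes = []
    for row, sign in [(0, 1), (0, -1), (1, 1), (1, -1), (2, 1), (2, -1)]:
        p = vp[3] + sign * vp[row]
        planes.append(p / np.linalg.norm(p[:3]))  # normalize the plane normal
    return np.stack(planes)

def sphere_visible(center, radius, planes):
    """Conservative test: cull only if the sphere is fully behind some plane."""
    for a, b, c, d in planes:
        if a * center[0] + b * center[1] + c * center[2] + d < -radius:
            return False
    return True

# With an identity view-projection, the frustum is the clip cube [-1, 1]^3:
planes = extract_frustum_planes(np.eye(4))
print(sphere_visible(np.zeros(3), 0.1, planes))              # origin: inside
print(sphere_visible(np.array([5.0, 0.0, 0.0]), 0.1, planes))  # far right: culled
```

The test is conservative (it may keep spheres that straddle a frustum corner), which is the usual trade-off for culling: false positives only cost render time, while false negatives would drop visible Gaussians.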
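The Experiment Setup row gives start and end learning rates (e.g., 2e-3 to 2e-5 for the hash grid over 30K steps) but not the decay shape. A log-linear interpolation, the schedule common in Gaussian-splatting codebases, is one plausible reading; this sketch is an assumption, not the paper's stated schedule:

```python
import math

def exp_lr(step, lr_init, lr_final, max_steps):
    """Interpolate log-linearly from lr_init (step 0) to lr_final (max_steps)."""
    t = min(max(step / max_steps, 0.0), 1.0)  # clamp progress to [0, 1]
    return math.exp((1 - t) * math.log(lr_init) + t * math.log(lr_final))

# Hash-grid schedule from the quoted setup: 2e-3 -> 2e-5 over 30K steps.
# At the midpoint, a log-linear schedule gives the geometric mean, 2e-4.
print(exp_lr(0, 2e-3, 2e-5, 30_000))       # 2e-3
print(exp_lr(15_000, 2e-3, 2e-5, 30_000))  # 2e-4
print(exp_lr(30_000, 2e-3, 2e-5, 30_000))  # 2e-5
```

The constant rates in the same row (1e-4 for scale/rotation, 2e-4 for the attention module) would simply bypass this schedule, e.g., as separate optimizer parameter groups.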