On Scaling Up 3D Gaussian Splatting Training

Authors: Hexu Zhao, Haoyang Weng, Daohan Lu, Ang Li, Jinyang Li, Aurojit Panda, Saining Xie

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Evaluations using large-scale, high-resolution scenes show that Grendel enhances rendering quality by scaling up 3DGS parameters across multiple GPUs. On the 4K Rubble dataset, we achieve a test PSNR of 27.28 by distributing 40.4 million Gaussians across 16 GPUs, compared to a PSNR of 26.28 using 11.2 million Gaussians on a single GPU."
Researcher Affiliation | Academia | Hexu Zhao1, Haoyang Weng1, Daohan Lu1, Ang Li2, Jinyang Li1, Aurojit Panda1, Saining Xie1 (1New York University, 2Pacific Northwest National Laboratory)
Pseudocode | Yes | "We show the pseudocode (Algorithm 1) for calculating the Division Points to split an image into load-balanced subsequences of blocks."
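The paper's Algorithm 1 itself is not reproduced here, but the stated goal (choose division points so that contiguous runs of pixel blocks carry roughly equal rendering load) can be sketched with a generic prefix-sum approach. Everything below (the function name `division_points`, the per-block cost input, and the equal-share targeting) is an illustrative assumption, not the paper's exact procedure.

```python
import bisect

def division_points(block_costs, num_workers):
    """Split a sequence of per-block costs into `num_workers` contiguous
    subsequences of roughly equal total cost.

    Generic prefix-sum sketch, NOT the paper's exact Algorithm 1.
    Returns start indices for each worker, terminated by len(block_costs).
    """
    # prefix[i] = total cost of blocks [0, i)
    prefix = [0]
    for c in block_costs:
        prefix.append(prefix[-1] + c)
    total = prefix[-1]
    # Worker k starts at the first block where the cumulative cost
    # reaches a k/num_workers share of the total.
    points = [bisect.bisect_left(prefix, total * k / num_workers)
              for k in range(num_workers)]
    points.append(len(block_costs))
    return points
```

With this sketch, worker k would render blocks `points[k]:points[k+1]`; a skewed cost profile such as `[4, 1, 1, 1, 1]` splits as `[0, 1, 5]`, giving both workers a load of 4.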
Open Source Code | Yes | "Grendel is an open-source project available at: https://github.com/nyu-systems/Grendel-GS"
Open Datasets | Yes | "On the 4K Rubble dataset, we achieve a test PSNR of 27.28 by distributing 40.4 million Gaussians across 16 GPUs..." Dataset: Rubble; Resolution: 4K; #Gaussians: 40,000,000; Training: 16 GPUs / BS=16. "The standard Rubble dataset (Turki et al., 2022) contains 1657 images..." MatrixCity Block_All (Li et al., 2023), Tanks & Temples (Knapitsch et al., 2017), Deep Blending (Hedman et al., 2018), Mip-NeRF 360 (Barron et al., 2022)
Dataset Splits | Yes | "Table 1: Scenes used in our evaluation: We cover scenes of varying sizes and resolutions." Tanks & Temples (Knapitsch et al., 2017): test set is 1/8 of all images; Deep Blending (Hedman et al., 2018): 1/8 of all images; Mip-NeRF 360 (Barron et al., 2022): 1/8 of all images; Rubble (Turki et al., 2022): official test set; MatrixCity Block_All (Li et al., 2023): official test set
Hardware Specification | Yes | "Experimental Setup. We conducted our evaluation on the Perlmutter GPU cluster at NERSC. Each node was equipped with four A100 GPUs with 40 GB of memory each, interconnected via NVLink at 25 GB/s per direction. Nodes were connected by a 200 Gbps Slingshot network."
Software Dependencies | No | The paper does not explicitly provide version numbers for software dependencies such as Python, PyTorch, or CUDA.
Experiment Setup | Yes | "We explore various optimization hyperparameter scaling strategies and find that a simple sqrt(batch_size) scaling rule is highly effective... To maintain data efficiency and reconstruction quality with larger batches, one needs to re-tune optimizer hyperparameters. To this end, we introduce an automatic hyperparameter scaling rule for batched 3DGS training based on a heuristic independent-gradients hypothesis..." Table 4: Scalability on Rubble: Gaussian Quantity, Results, and Hyperparameter Settings; Table 5: MatrixCity Block_All Statistics: Gaussian Quantity, Results, and Hyperparameter Settings; Table 6: Bicycle Statistics: Gaussian Quantity, Results, and Hyperparameter Settings
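The sqrt(batch_size) rule quoted above can be sketched in a few lines. The intuition under the independent-gradients hypothesis: averaging a batch of f independent per-view gradients shrinks gradient noise by about 1/sqrt(f), so scaling the step size by sqrt(f) keeps the per-step update variance roughly constant. The function name and the `base_batch_size=1` reference point below are illustrative assumptions, not taken from the paper.

```python
import math

def scale_lr(base_lr, batch_size, base_batch_size=1):
    """Sqrt scaling rule sketch for batched 3DGS training.

    When the batch grows by a factor f = batch_size / base_batch_size,
    scale the learning rate by sqrt(f). `base_batch_size=1` is an
    assumed reference point, not a value stated in the paper.
    """
    factor = batch_size / base_batch_size
    return base_lr * math.sqrt(factor)
```

For example, moving from single-view training to a batch of 16 would multiply the learning rate by 4; the paper reports applying such re-tuning automatically rather than by hand.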