Multi-StyleGS: Stylized Gaussian Splatting with Multiple Styles
Authors: Yangkai Lin, Jiabao Lei, Kui Jia
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | As demonstrated by our comprehensive experiments, our approach outperforms existing ones in producing plausible stylization results and offering flexible editing. Extensive experiments conducted on various datasets (Knapitsch et al. 2017; Mildenhall et al. 2019) substantiate the efficacy of our method in generating high-quality, locally matched stylized images in real-time. |
| Researcher Affiliation | Academia | ¹South China University of Technology; ²School of Data Science, The Chinese University of Hong Kong, Shenzhen. EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes the methodology in prose, detailing the pipeline, Gaussian Splatting with Semantic Features, Preliminary of Style Loss, and Semantic Multi-style Loss, but does not include any formal pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an unambiguous statement about releasing code, nor does it include any links to a code repository. |
| Open Datasets | Yes | We conducted extensive experiments on a diverse set of real-world scenes, including outdoor environments from the Tanks and Temples (shortened as tnt in our paper) dataset (Knapitsch et al. 2017) and forward-facing scenes from the llff dataset (Mildenhall et al. 2019). |
| Dataset Splits | No | The paper mentions using 'tnt datasets' and 'llff dataset' but does not specify exact training, validation, or testing splits, percentages, or sample counts. |
| Hardware Specification | Yes | Our novel semantic style loss can achieve memory-efficient training, which enables efficient training on a single RTX 3090. |
| Software Dependencies | No | The paper mentions models like VGG19, DINOv2, and SAM, but does not provide specific version numbers for programming languages, libraries, or other software dependencies used for implementation. |
| Experiment Setup | Yes | Our GS model is trained with $L_{recon} + \lambda_{seg} L_{seg} + \lambda_{KNN} L_{KNN} + \lambda_{NE} L_{NE} + \lambda_{mask} L_{mask}$ (Eq. 13), where $L_{recon}$ is the Mean Squared Error (MSE) reconstruction loss as outlined in (Kerbl et al. 2023). We typically assign values of $\lambda_{seg} = 0.02$, $\lambda_{KNN} = 0.005$, $\lambda_{NE} = 0.005$. |
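The reported training objective is a weighted sum of five loss terms. A minimal sketch of that combination, assuming the weights quoted above ($\lambda_{seg} = 0.02$, $\lambda_{KNN} = 0.005$, $\lambda_{NE} = 0.005$); the value of $\lambda_{mask}$ is not given in the excerpt, so it is left as a caller-supplied argument, and the loss terms are placeholder scalars rather than the paper's actual loss computations:

```python
def total_loss(l_recon, l_seg, l_knn, l_ne, l_mask, lam_mask):
    """Weighted sum of the five loss terms from Eq. (13).

    l_recon, l_seg, l_knn, l_ne, l_mask: scalar loss values (placeholders
    here; in practice each would be computed from the rendered output).
    lam_mask: weight for the mask loss — not reported in the quoted
    excerpt, so it must be supplied by the caller.
    """
    # Weights as reported in the paper's experiment-setup excerpt.
    lam_seg, lam_knn, lam_ne = 0.02, 0.005, 0.005
    return (l_recon
            + lam_seg * l_seg
            + lam_knn * l_knn
            + lam_ne * l_ne
            + lam_mask * l_mask)
```

With unit losses and a hypothetical `lam_mask = 0.01`, the total is `1.0 + 0.02 + 0.005 + 0.005 + 0.01 = 1.04`, which makes the relative weighting of the auxiliary terms against the reconstruction loss explicit.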