SecureGS: Boosting the Security and Fidelity of 3D Gaussian Splatting Steganography
Authors: Xuanyu Zhang, Jiarui Meng, Zhipei Xu, Shuzhou Yang, Yanmin Wu, Ronggang Wang, Jian Zhang
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that SecureGS significantly surpasses existing GS steganography methods in rendering fidelity, speed, and security. |
| Researcher Affiliation | Academia | 1School of Electronic and Computer Engineering, Peking University 2Guangdong Provincial Key Laboratory of Ultra High Definition Immersive Media Technology, Shenzhen Graduate School, Peking University |
| Pseudocode | Yes | Algorithm 1 Overall Pipeline of Our Proposed Region-Aware Density Optimization |
| Open Source Code | No | The paper does not explicitly state that source code is provided or offer a link to a code repository. |
| Open Datasets | Yes | For 3D object and 2D image hiding, the original scene includes the bicycle (BI.), flowers (FL.), garden (GA.), stump (ST.), treehill (TR.), room (RO.), counter (CO.), kitchen (KI.), bonsai (BO.) from Mip-NeRF 360 (Barron et al., 2021). The hidden 3D object is obtained from the Blender dataset (Mildenhall et al., 2020). |
| Dataset Splits | No | The paper mentions "training views" and rendering the "training set of the hidden object" but does not specify exact percentages, counts, or predefined splits for training, validation, or test sets. |
| Hardware Specification | Yes | We conduct all our experiments on the NVIDIA RTX 4090Ti server and use the same rasterizer as the original 3DGS. |
| Software Dependencies | No | The paper states "We use the same rasterizer as the original 3DGS" but does not provide specific version numbers for any software dependencies, libraries, or frameworks used in the implementation. |
| Experiment Setup | Yes | λ is set to 10 when hiding 3D objects and set to 0.1 when hiding a single image. α and β in Eq. 8 are respectively set to 0.2 and 0.01. τ_fix and r_down are respectively set to 0.0002 and 4. We consistently set k = 10 across all experiments, and the MLPs used in our approach consist of 2 layers with ReLU activations, each hidden layer having 32 units. |
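Since the paper releases no code, the MLP configuration quoted in the Experiment Setup row (2 layers, ReLU activations, 32 hidden units) can be sketched as follows. This is a hypothetical illustration in NumPy; the input and output dimensions, initialization, and class name are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the 2-layer MLP described in the Experiment Setup row.
# Hidden width of 32 and ReLU come from the paper; in/out dims are assumed.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class TwoLayerMLP:
    def __init__(self, in_dim, out_dim, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        # He-style initialization (an assumption; the paper does not specify).
        self.w1 = rng.standard_normal((in_dim, hidden)) * np.sqrt(2.0 / in_dim)
        self.b1 = np.zeros(hidden)
        self.w2 = rng.standard_normal((hidden, out_dim)) * np.sqrt(2.0 / hidden)
        self.b2 = np.zeros(out_dim)

    def forward(self, x):
        # One hidden ReLU layer followed by a linear output layer.
        h = relu(x @ self.w1 + self.b1)
        return h @ self.w2 + self.b2

# Example: map a batch of 5 assumed 48-dim Gaussian features to 3-dim outputs.
mlp = TwoLayerMLP(in_dim=48, out_dim=3)
y = mlp.forward(np.ones((5, 48)))
print(y.shape)  # (5, 3)
```

The hidden width of 32 keeps the decoder lightweight, which is consistent with the paper's emphasis on rendering speed.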