Topology-Aware 3D Gaussian Splatting: Leveraging Persistent Homology for Optimized Structural Integrity

Authors: Tianqi Shen, Shaohua Liu, Jiaqi Feng, Ziye Ma, Ning An

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive experiments on three novel-view synthesis benchmarks demonstrate that Topology-GS outperforms existing methods on PSNR, SSIM, and LPIPS while maintaining efficient memory usage.
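For context on the first of the reported metrics: PSNR is a log-scaled mean-squared error between a rendered and a ground-truth image. A minimal NumPy sketch (an illustration, not the paper's evaluation code; the `psnr` helper and test images here are hypothetical):

```python
import numpy as np

def psnr(img_a: np.ndarray, img_b: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) between two images with values in [0, max_val]."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# A uniformly perturbed copy has MSE 0.01, giving 10*log10(1/0.01) = 20 dB.
clean = np.zeros((4, 4))
noisy = clean + 0.1
print(psnr(clean, noisy))  # 20.0
```

SSIM and LPIPS are structural and learned perceptual similarity measures, respectively; they require windowed statistics or a pretrained network and are typically taken from libraries such as scikit-image or the `lpips` package rather than hand-rolled.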
Researcher Affiliation | Collaboration | 1) Department of Computer Science, City University of Hong Kong; 2) Image Processing Center, Beihang University; 3) Shen Yuan Honors College, Beihang University; 4) Research Institute of Mine Artificial Intelligence, China Coal Research Institute; 5) State Key Laboratory of Intelligent Coal Mining and Strata Control
Pseudocode | Yes | The pseudocode is presented in Algorithm 1. To further illustrate the ideas behind LPVI, the procedures in Algorithm 1 are broken down and explained step by step.
Open Source Code | Yes | Code: https://github.com/AmadeusSTQ/Topology-GS
Open Datasets | Yes | To evaluate the proposed method, consistent with 3D-GS (Kerbl et al. 2023), 11 scenes were used: seven from the Mip-NeRF360 dataset (Barron et al. 2022), two from the Tanks & Temples dataset (Knapitsch et al. 2017), and two from the Deep Blending dataset (Hedman et al. 2018).
Dataset Splits | No | The paper evaluates on Mip-NeRF360 (Barron et al. 2022), Tanks & Temples (Knapitsch et al. 2017), and Deep Blending (Hedman et al. 2018), but does not specify training/test/validation splits. It states consistency with 3D-GS without detailing the splits used in its own experiments.
Hardware Specification | No | The paper does not specify the hardware used for its experiments, such as GPU or CPU models.
Software Dependencies | No | The paper does not list ancillary software with version numbers, such as library or solver names.
Experiment Setup | No | The paper describes the methodology and loss functions, including PersLoss, and mentions the ADC stage, but does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed training configurations in the main text.