Motion Decoupled 3D Gaussian Splatting for Dynamic Object Representation
Authors: Xiao Hu, Libo Long, Jochen Lang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4 Experimental Evaluation We select five state-of-the-art 3D representation methods as comparators. ... The evaluation metrics follow the previous public benchmarks (Pumarola et al. 2021; Li et al. 2021). Specifically, Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), and VGG-based Learned Perceptual Image Patch Similarity (LPIPS) (Zhang et al. 2018) are used. |
| Researcher Affiliation | Academia | Xiao Hu, Libo Long, Jochen Lang University of Ottawa, Canada, EMAIL |
| Pseudocode | No | No explicit pseudocode or algorithm blocks are present in the paper. The methodology is described in prose and mathematical formulas. |
| Open Source Code | Yes | Code https://github.com/haliphinx/M5D-GS |
| Open Datasets | Yes | Both the dataset with a total of ten scenes and the source files used for its creation are available open-source, allowing the community to further investigate severe motion understanding. Current public datasets (Pumarola et al. 2021; Li et al. 2021; Yan, Li, and Lee 2023) for dynamic scene representation usually contain only slight motion and deformation. |
| Dataset Splits | No | The paper introduces a novel dataset and augments existing ones, but it does not specify explicit training/validation/test splits (e.g., percentages, sample counts, or specific files) for its experiments within the main text. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers needed to replicate the experiment. |
| Experiment Setup | No | The main constraints for the proposed M5D-GS still follow the original 3D-GS without additional loss for motion estimation. The overall constraints include a per-pixel L1 loss and a D-SSIM loss L_D-SSIM (Kerbl et al. 2023). The loss function is L_img = L_1 + λ·L_D-SSIM with λ as the loss coefficient. More details are available in the supplementary material. |
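The photometric loss quoted above, L_img = L_1 + λ·L_D-SSIM, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: `global_ssim` uses a single window over the whole image (real pipelines use local Gaussian windows), and the default λ = 0.2 is the value used in the original 3D-GS (Kerbl et al. 2023), assumed here since the paper does not state its coefficient.

```python
import numpy as np

def global_ssim(x, y, c1=0.01**2, c2=0.03**2):
    """Simplified single-window SSIM over the whole image.
    Production SSIM averages local windowed statistics instead."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def image_loss(pred, gt, lam=0.2):
    """L_img = L_1 + lam * L_D-SSIM, with L_D-SSIM = (1 - SSIM) / 2.
    `lam` = 0.2 follows 3D-GS; the paper leaves its value to the supplement."""
    l1 = np.abs(pred - gt).mean()          # per-pixel L1 term
    d_ssim = (1.0 - global_ssim(pred, gt)) / 2.0  # structural dissimilarity
    return l1 + lam * d_ssim
```

For identical images the loss is zero (L1 vanishes and SSIM equals 1), which is a quick sanity check when wiring this into a training loop.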