Building Interactable Replicas of Complex Articulated Objects via Gaussian Splatting

Authors: Yu Liu, Baoxiong Jia, Ruijie Lu, Junfeng Ni, Song-Chun Zhu, Siyuan Huang

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on both synthetic and real-world datasets, including a new benchmark for complex multi-part objects, demonstrate that ArtGS achieves state-of-the-art performance in joint parameter estimation and part mesh reconstruction. Our approach outperforms existing methods in both synthetic and real-world scenarios, with significant improvements in axis modeling and overall efficiency. Through extensive experiments, we demonstrate the effectiveness of our model in efficiently delivering high-quality reconstruction of complex multi-part articulated objects. We also provide comprehensive analyses of our design choices, highlighting the critical role of these modules and identifying areas for future improvement."
Researcher Affiliation | Academia | 1. Tsinghua University; 2. State Key Laboratory of General Artificial Intelligence, BIGAI; 3. Peking University
Pseudocode | No | The paper describes the method and optimization steps using mathematical equations and textual explanations, but does not present any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | "Our work is made publicly available at: https://articulate-gs.github.io."
Open Datasets | Yes | "We evaluate our method on three datasets: (1) PARIS, a two-part dataset proposed by Liu et al. (2023a)... (2) DTA-Multi, a dataset proposed by Weng et al. (2024)... (3) ArtGS-Multi, our newly curated dataset, featuring 5 complex articulated objects from PartNet-Mobility (Xiang et al., 2020)."
Dataset Splits | No | The paper discusses evaluation metrics and averaging results over multiple trials (e.g., "mean ± std over 10 trials"), but it does not specify explicit training, validation, or test dataset splits in terms of percentages, counts, or predefined files.
Hardware Specification | Yes | "We re-train DTA on the same device (NVIDIA RTX 3090) for training time comparison."
Software Dependencies | No | The paper mentions using Open3D for mesh extraction and refers to general training steps and loss functions that imply common deep learning frameworks, but it does not specify any software dependencies with version numbers (e.g., PyTorch 1.9, Python 3.8, CUDA 11.1).
Experiment Setup | Yes | "We train single-state Gaussians G_0 and G_1 for 10K steps with loss L = (1 − λ_SSIM) · L_I + λ_SSIM · L_D-SSIM + λ_o · L_o, where λ_SSIM = 0.2 and λ_o = 0.01 are used in experiments... we anneal the temperature τ from 1 to 0.1 over 10K steps... We train ArtGS with the joint-type constraint for 20K steps... For hyper-parameters, we set the threshold ε_static used to identify static/movable Gaussians as ε_static = 0.02 · max_i CD_i^{t→t'} for two-part objects and ε_static = 0.05 · max_i CD_i^{t→t'} for multi-part objects. We use ε_revol = 100 for predicting joint types... λ_cd and λ_reg are set to 100 and 0.1, respectively. In addition... we raised the densification threshold ε_densify from 0.0002... to 0.001."
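The quoted setup can be illustrated with a short sketch. This is not the authors' implementation: the weighted loss combination (λ_SSIM = 0.2, λ_o = 0.01) and the τ endpoints (1 → 0.1 over 10K steps) come from the quote above, while the function names and the exponential shape of the annealing schedule are assumptions for illustration.

```python
# Hyperparameters quoted from the paper's reported experiment setup.
LAMBDA_SSIM = 0.2
LAMBDA_O = 0.01

def total_loss(l_image: float, l_dssim: float, l_opacity: float) -> float:
    """Combine per-term losses as L = (1 - lambda_SSIM) * L_I
    + lambda_SSIM * L_D-SSIM + lambda_o * L_o."""
    return (1 - LAMBDA_SSIM) * l_image + LAMBDA_SSIM * l_dssim + LAMBDA_O * l_opacity

def anneal_temperature(step: int, total_steps: int = 10_000,
                       tau_start: float = 1.0, tau_end: float = 0.1) -> float:
    """Anneal tau from 1 to 0.1 over 10K steps; the exponential
    interpolation used here is an assumption (the paper only gives
    the endpoints and duration)."""
    frac = min(step / total_steps, 1.0)
    return tau_start * (tau_end / tau_start) ** frac
```

With unit losses, `total_loss(1.0, 1.0, 1.0)` evaluates to 0.8 + 0.2 + 0.01 = 1.01, which makes the relative weighting of the three terms easy to sanity-check.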