SE(3)-Equivariant Diffusion Models for 3D Object Analysis

Authors: Xie Min, Zhao Jieyu, Shen Kedi, Chen Kangxin

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments conducted on the Breaking Bad dataset, the real-world RePAIR dataset, and a self-constructed 3D mannequin dataset demonstrate the effectiveness of the proposed model, which outperforms state-of-the-art methods on metrics such as root mean square error and part accuracy. Ablation studies further validate the contributions of key modules, emphasizing their roles in improving accuracy and robustness in 3D part reassembly tasks.
Researcher Affiliation | Academia | Xie Min, Zhao Jieyu, Shen Kedi and Chen Kangxin, Ningbo University. EMAIL, zhao EMAIL, skuld EMAIL, chen EMAIL
Pseudocode | No | The paper describes the proposed method using mathematical formulations and textual explanations within sections such as '3 Equivariant Diffusion Models' and its subsections ('3.2 Lie Algebra Mapping', '3.3 Elastic Diffusion Models'). However, it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | We evaluate our approach on the public 3D part dataset Breaking Bad [Sellán et al., 2022], the real-world archaeological dataset RePAIR [Tsesmelis et al., 2024], and a self-built dataset of fragmented 3D mannequins.
Dataset Splits | No | The paper mentions using specific subsets and categories from the Breaking Bad dataset (e.g., the 'everyday subset and 6 artifact categories') and creating 'five subsets with varying fragment proportions' for the 3D mannequin dataset. However, it does not provide the specific training/validation/test splits (e.g., percentages, sample counts, or references to standard splits with citations) needed for reproduction.
Hardware Specification | Yes | All experiments are performed on a Linux workstation equipped with an NVIDIA RTX 4090 GPU.
Software Dependencies | No | The paper mentions using 'Noise Conditional Score Networks (NCSNs) [Song and Ermon, 2019], [Croitoru et al., 2023] as the backbone of the diffusion models' and 'the Adam optimizer', but it does not specify version numbers for these or any other software libraries or frameworks.
Experiment Setup | Yes | Optimization is performed using the Adam optimizer with an initial learning rate of 1e-3 and a cosine learning-rate schedule. The batch size is set to 16, and the total number of iterations is 2000, with a consistent input size N = 30. To further improve performance, we incorporate a Chamfer Distance loss term, as proposed in [Sellán et al., 2022], which enhances the model's capability to minimize geometric discrepancies.
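Since the paper releases no code, the reported setup (Adam at lr 1e-3, cosine schedule, batch size 16, 2000 iterations, N = 30 input points, Chamfer Distance loss) can be sketched as follows. The model, the random data, and all variable names here are illustrative placeholders, not the authors' implementation; only the hyperparameters come from the paper.

```python
import torch

def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer Distance between point sets a: (B, N, 3), b: (B, M, 3)."""
    d = torch.cdist(a, b) ** 2                     # pairwise squared distances (B, N, M)
    return d.min(dim=2).values.mean(dim=1) + d.min(dim=1).values.mean(dim=1)

# Placeholder for the equivariant score network described in the paper.
model = torch.nn.Linear(3, 3)

# Hyperparameters as reported: Adam, lr 1e-3, cosine schedule over 2000 iterations.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=2000)

for step in range(2000):
    pts = torch.randn(16, 30, 3)                   # batch size 16, input size N = 30
    pred = model(pts)
    loss = chamfer_distance(pred, pts).mean()      # Chamfer term [Sellán et al., 2022]
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                               # cosine decay of the learning rate
```

In a faithful reproduction the Chamfer term would be added to the diffusion score-matching objective rather than used alone, but the optimizer, schedule, and batch settings above match the reported configuration.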