Learning Structured Universe Graph with Outlier OOD Detection for Partial Matching
Authors: Zetian Jiang, Jiaxin Lu, Haozhao Fan, Tianzhe Wang, Junchi Yan
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluated our method on the Pascal VOC and Willow Object datasets, focusing on scenarios involving point occlusion and random outliers. The experimental results demonstrate that our approach consistently outperforms state-of-the-art methods across all tested scenarios, highlighting the accuracy and robustness of our method. |
| Researcher Affiliation | Academia | Zetian Jiang^1, Jiaxin Lu^3, Haozhao Fan^1, Tianzhe Wang^1, Junchi Yan^{1,2}. ^1Sch. of Computer Science & Sch. of Artificial Intelligence, Shanghai Jiao Tong University; ^2Shanghai Artificial Intelligence Laboratory; ^3Department of Computer Science, University of Texas at Austin. EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: Universe Graph Matching. Require: input images I, input keypoints P, learnable parameters θ, learning rate η, epoch number E, margins m_in and m_out, temperature T, OOD threshold τ. Ensure: trained parameters θ and universe latent graph U. 1: initialize θ randomly; 2: initialize node and edge embeddings of the universe latent graph U randomly; 3: for e = 1 to E do 4: for each image pair I_i, I_j in the train/test dataset do 5: (extract features) 6: extract keypoint features F with the CNN backbone via Eq. 14; 7: add class embedding to F via Eq. 7; 8: refine F with SplineConv to obtain node features N and edge features E via Eq. 15 and Eq. 16; 9: (build affinity) 10: construct node affinity K_n and edge affinity K_e via Eq. 2; 11: compute the energy E for each node via Eq. 8; 12: filter out random outliers with OOD threshold τ via Eq. 10; 13: if training then 14: compute the permutation loss with K_n^filtered and K_e^filtered for both I_i and I_j; 15: compute the energy loss with E, m_in, and m_out via Eq. 9 for both I_i and I_j; 16: form the final loss L = L_permutation + L_energy and compute the gradients ∇_θ L and ∇_U L; 17: update parameters θ ← θ − η ∇_θ L; 18: update the universe embedding U ← U − η ∇_U L; 19: else 20: use the LPMP solver to obtain universe matchings X_iu, X_ju via Eq. 11 and Eq. 12; 21: build the pairwise matching X = X_iu X_ju^⊤; 22: end if; 23: end for; 24: end for; 25: return θ, U |
| Open Source Code | No | To report the performance of prior works, we adhere to the following principles: 1) If a prior work's experimental settings align with ours, we directly report the performance metrics provided in their paper. 2) If the experimental setting is unique to our work, we attempt to replicate the prior methods under their original experimental settings and then adapt them to our setting to obtain comparable performance metrics. However, we failed to replicate the methods of DLGM and URL because they did not release publicly available code. Therefore, we did not report their performance in some of the experiments (Table 2 and Table 3). |
| Open Datasets | Yes | Datasets We evaluate our method on Pascal VOC (Everingham et al., 2010) and Willow Object Class (Cho et al., 2013), two widely recognized datasets. |
| Dataset Splits | Yes | The Pascal VOC (Everingham et al., 2010) with Berkeley annotations (Bourdev & Malik, 2009) contains images with bounding boxes surrounding objects of 20 classes... We follow prior works to split the train and test dataset, where training data includes 7,020 images and test data includes 1,682 images. The Willow Object Class (Cho et al., 2013)... We choose 20 images from each category as our training dataset and leave others for evaluation. |
| Hardware Specification | Yes | All the models are trained on a Linux workstation with an Intel i9-10920X CPU @ 3.50 GHz, one RTX 3090 GPU, and 128 GB RAM. |
| Software Dependencies | No | The paper mentions software components such as VGG16, ResNet50, ImageNet, SplineConv, and the LPMP solver, but does not provide specific version numbers for any of them. Without version numbers, the software dependencies are not reproducible. |
| Experiment Setup | Yes | By default, we train our UGM with hyperparameters m_in = 6, m_out = 3, T = 1.0, τ = 4.5, η = 1e-3, and E = 15. |
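The outlier-filtering step in the pseudocode (Eqs. 8 and 10 of the paper) can be sketched as follows. The paper's exact affinity and energy formulas are not quoted in this report, so the inner-product affinity, the logsumexp-style energy score, and the decision direction (scores above τ = 4.5 are kept, since the reported inlier margin m_in = 6 lies above τ and the outlier margin m_out = 3 below it) are assumptions made for illustration:

```python
import numpy as np

def energy_score(node_feats, universe_feats, temperature=1.0):
    """Hypothetical per-keypoint energy score against the universe graph.

    score = T * logsumexp(<f, u_k> / T) over universe nodes u_k; a high
    score means the keypoint fits at least one universe node well.
    """
    logits = node_feats @ universe_feats.T / temperature  # shape (n, |U|)
    m = logits.max(axis=1, keepdims=True)                 # stabilize logsumexp
    return temperature * (m[:, 0] + np.log(np.exp(logits - m).sum(axis=1)))

def filter_outliers(node_feats, universe_feats, tau=4.5, temperature=1.0):
    """Boolean mask over keypoints: True = keep (in-distribution).

    With tau = 4.5 between the reported margins m_in = 6 and m_out = 3,
    trained inliers should score above tau and random outliers below it.
    """
    return energy_score(node_feats, universe_feats, temperature) > tau
```

In the algorithm, the resulting mask would be used to drop the corresponding rows and columns of the node and edge affinities K_n and K_e before matching.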
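The final composition step of the pseudocode (X = X_iu X_ju^⊤) is a plain matrix product of the two image-to-universe assignments returned by the LPMP solver. A minimal sketch, assuming the assignments are 0/1 partial permutation matrices:

```python
import numpy as np

def pairwise_from_universe(X_iu, X_ju):
    """Compose two image-to-universe assignments into an image-to-image
    matching: keypoint a of image i matches keypoint b of image j iff
    both are assigned to the same universe node.
    """
    return X_iu @ X_ju.T
```

Because each keypoint is assigned to at most one universe node, the product is again a 0/1 partial permutation matrix, and keypoints matched only to filtered-out or unshared universe nodes simply receive empty rows.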