Linear Partial Gromov-Wasserstein Embedding

Authors: Yikun Bai, Abihith Kothapalli, Hengrong Du, Rocio Diaz Martin, Soheil Kolouri

ICLR 2025

Reproducibility checklist. Each entry lists the variable assessed, the result, and the supporting excerpt from the paper (LLM response).
Research Type: Experimental. "Numerically, we test our proposed LPGW embedding and LPGW distance in two experiments: shape retrieval and learning with transform-based embeddings. In both experiments, we observe that the LPGW-based approach can preserve the partial matching property of PGW while significantly improving computational efficiency." (Section 4, Experiments)
Researcher Affiliation: Academia. (1) Department of Computer Science, Vanderbilt University; (2) Department of Mathematics, University of California, Irvine; (3) Department of Mathematics, Tufts University.
Pseudocode: No. The paper describes numerical implementation details in Section 3.1 but does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code: Yes. "The code is available at https://github.com/mint-vu/Linearized_Partial_Gromov_Wasserstein."
Open Datasets: Yes. "In this experiment, we apply the LPGW distance and the PGW distance to the 2D dataset presented in (Beier et al., 2022), consisting of 100 elliptical disks. [...] We use the 2D dataset presented in (Bai et al., 2024), consisting of 8 classes, each containing 20 shapes. The 3D dataset is given by (Pan et al., 2021), which provides 100-200 complete shapes in each of 16 different classes [...] We adopt the MNIST 2D point cloud dataset for this experiment."
Dataset Splits: Yes. "Next, using the approach given by (Beier et al., 2022), we combine each distance matrix with a support vector machine (SVM), applying stratified 10-fold cross-validation. In each iteration of cross-validation, we train an SVM [...] Specifically, for each digit, we sample N1 = 500 point clouds per class from the training set and N2 = 100 point clouds per class from the testing set."
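The quoted protocol (a distance matrix fed to an SVM under stratified 10-fold cross-validation) can be sketched as follows. This is a minimal illustration, not the authors' code: the toy distance matrix `D`, the label array, and the kernel form exp(-sigma * D) are assumptions based on the quoted setup.

```python
# Sketch: stratified 10-fold CV with an SVM on a precomputed
# distance-derived kernel, mimicking the protocol quoted above.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy stand-in for a pairwise LPGW distance matrix over 40 shapes, 2 classes.
n = 40
labels = np.repeat([0, 1], n // 2)
pts = rng.normal(size=(n, 2)) + labels[:, None] * 3.0
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
D = D / D.max()                      # normalize the distance matrix

sigma = 1.0
K = np.exp(-sigma * D)               # kernel built from distances

accs = []
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(K, labels):
    svm = SVC(kernel="precomputed")
    # Train kernel: train-vs-train block; test kernel: test-vs-train block.
    svm.fit(K[np.ix_(train_idx, train_idx)], labels[train_idx])
    pred = svm.predict(K[np.ix_(test_idx, train_idx)])
    accs.append(np.mean(pred == labels[test_idx]))

mean_acc = float(np.mean(accs))
```

In a real run, `D` would be the pairwise LPGW (or PGW) distance matrix, and `sigma` would be grid-searched per method as described in the experiment setup.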
Hardware Specification: Yes. "All experiments presented in this paper are conducted on a computational machine with an AMD EPYC 7713 64-core processor, 8 × 32 GB DDR4 DIMMs at 3200 MHz, and an NVIDIA RTX A6000 GPU."
Software Dependencies: No. The paper mentions "Python OT (Flamary et al., 2021)" for the Frank-Wolfe (FW) algorithm and the scikit-learn package for logistic regression, but it does not specify version numbers for these or any other software dependencies.
Experiment Setup: Yes. "Experiment setup. We represent each 2D shape as an mm-space X_i = (R^2, ||·||_2, µ_i), where µ_i = Σ_{j=1}^n (1/n) δ_{x_j}. We normalize each shape so that the largest pairwise distance in each mm-space is 1. Based on [Lemma E.2, Bai et al. (2023)], the largest possible choice of λ is given by 2λ = 1. We hence test λ ∈ {0.05, 0.08, 0.1, 0.3, 0.5}, and for each reference space, we compute the pairwise LPGW distances and record the wall-clock time, MRE, and PCC. [...] For the SVM experiments, we use exp(−σD) as the kernel for the SVM model. Here, we normalize the matrix D and choose the best σ ∈ {0.001, 0.01, 0.1, 1, 5, 8, 10, 100} for each method in order to facilitate a fair comparison of the resulting performances. [...] For each test shape, we first randomly rotate or flip the shape, and we then corrupt the shape by adding uniformly distributed noise. The mass of each added point is 1/n, where n is the number of points in the original shape, and the total mass of the added points is η ∈ {0, 0.1, 0.3, 0.5}."
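The preprocessing steps quoted above (rescaling each shape so its largest pairwise distance is 1, then corrupting it with uniform noise whose points each carry mass 1/n for a total added mass of η) can be sketched as below. Function names and the noise bounding box are illustrative assumptions, not taken from the paper's code.

```python
# Sketch: shape normalization and noise corruption matching the quoted setup.
import numpy as np

def normalize_shape(X):
    """Scale points so the maximum pairwise Euclidean distance is 1."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return X / D.max()

def corrupt_shape(X, eta, rng):
    """Append uniform noise points; each added point has mass 1/n,
    so round(eta * n) added points carry total mass ~= eta."""
    n = X.shape[0]
    k = int(round(eta * n))
    lo, hi = X.min(axis=0), X.max(axis=0)
    noise = rng.uniform(lo, hi, size=(k, X.shape[1]))
    pts = np.vstack([X, noise])
    masses = np.full(pts.shape[0], 1.0 / n)   # total mass = 1 + eta
    return pts, masses

rng = np.random.default_rng(0)
X = normalize_shape(rng.normal(size=(20, 2)))
pts, masses = corrupt_shape(X, eta=0.3, rng=rng)
```

Because the corrupted shape carries total mass 1 + η while clean shapes carry mass 1, a partial matching (PGW/LPGW) can leave the added η mass untransported, which is the property the experiments test.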