MMD-Regularized Unbalanced Optimal Transport

Authors: Piyushi Manupriya, SakethaNath Jagarlapudi, Pratik Jawanpuria

TMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments show that MMD-UOT consistently outperforms popular baselines, including KL-regularized UOT and MMD, in diverse machine learning applications." ... "5 Experiments: In Section 4, we examined the theoretical properties of the proposed MMD-UOT formulation. In this section, we show that MMD-UOT is a good practical alternative to the popular entropy-regularized ϵKL-UOT."
Researcher Affiliation | Collaboration | Piyushi Manupriya (EMAIL), Department of Computer Science and Engineering, IIT Hyderabad, India; J. Saketha Nath (EMAIL), Department of Computer Science and Engineering, IIT Hyderabad, India; Pratik Jawanpuria (EMAIL), Microsoft, India.
Pseudocode | Yes | "Algorithm 1: Accelerated Projected Gradient Descent for solving Problem (10). Require: Lipschitz constant L, initial α₀ ≥ 0, α₀ ∈ ℝ^(m×m)."
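Algorithm 1 is only excerpted above, not reproduced in full. As a minimal sketch of the named technique, the following implements a FISTA-style accelerated projected gradient descent with projection onto the nonnegative orthant; the paper's actual objective from Problem (10) is not available here, so a toy nonnegative least-squares objective stands in for it (the function names and the test problem are illustrative, not from the paper):

```python
import numpy as np

def apgd_nonneg(grad, L, alpha0, n_iters=500):
    """Accelerated projected gradient descent (FISTA-style) over the
    nonnegative orthant, in the spirit of the quoted Algorithm 1.

    grad   -- callable returning the gradient of the smooth objective
    L      -- Lipschitz constant of the gradient (step size is 1/L)
    alpha0 -- nonnegative initial iterate
    """
    alpha = alpha0.copy()
    y = alpha0.copy()   # extrapolated point
    t = 1.0             # momentum parameter
    for _ in range(n_iters):
        # Gradient step at the extrapolated point, then project onto {α ≥ 0}.
        alpha_next = np.maximum(y - grad(y) / L, 0.0)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = alpha_next + ((t - 1.0) / t_next) * (alpha_next - alpha)
        alpha, t = alpha_next, t_next
    return alpha

# Toy stand-in objective: minimize 0.5 * ||A x - b||^2 subject to x >= 0.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
L = np.linalg.norm(A.T @ A, 2)  # Lipschitz constant of the gradient
x = apgd_nonneg(lambda v: A.T @ (A @ v - b), L, np.zeros(5))
```

The projection here is a simple clamp because the feasible set is the nonnegative orthant; a simplex-constrained variant would replace `np.maximum` with a simplex projection.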
Open Source Code | Yes | "Our codes are publicly available at https://github.com/Piyushi-0/MMD-reg-OT."
Open Datasets | Yes | "Dataset and experimental setup. Following (Liu et al., 2020), we consider two sets of samples, one from the true MNIST (LeCun & Cortes, 2010) and another from fake MNIST generated by the DCGAN (Bian et al., 2019). ... We perform the domain adaptation experiment with the Digits datasets, comprising the MNIST (LeCun & Cortes, 2010), MNIST-M (Ganin et al., 2016), SVHN (Netzer et al., 2011), and USPS (Hull, 1994) datasets. ... Office-Home dataset: We evaluate the proposed method on the Office-Home dataset (Venkateswara et al., 2017)... VisDA-2017 dataset: We next consider the domain adaptation task between the training and validation sets of the VisDA-2017 (Recht et al., 2018) dataset. ... EuroSAT (Helber et al., 2018) dataset consisting of satellite images, DTD (Cimpoi et al., 2014) dataset having images of textures, and Oxford-Pets (Parkhi et al., 2012) dataset having images of pets."
Dataset Splits | Yes | "We take an increasing number of samples (N) and compute the average test power over 100 pairs of sets for each value of N. ... The training is done on 1,000 images from each dataset, and the test is on 1,031 images. ... The task of learning prompts ... for the K-shot recognition task in which only K images per class are available during training."
Hardware Specification | Yes | "This experiment was done on an NVIDIA RTX 2080 GPU."
Software Dependencies | No | "As the POT library (Flamary et al., 2021) doesn't allow including a simplex constraint for KL-UOT, we do not show this." (The paper mentions the POT library but does not specify its version number or any other software dependencies with version numbers.)
Experiment Setup | Yes | "The hyperparameters for MMD-UOT are λ = 100 and σ² = 1 in the RBF kernel k(x, y) = exp(−‖x − y‖² / (2σ²)). The hyperparameters for ϵKL-UOT are λ = 1 and ϵ = 1. ... The hyperparameters were tuned for N = 100 for each trial. ... σ was chosen from {median, 40, 60, 80, 100} ... λ is chosen from {0.1, 1, 10}. For ϵKL-UOT, ϵ was chosen from {1, 10⁻¹, 10⁻², 10⁻³, 10⁻⁴}."
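The quoted RBF kernel and the MMD baseline it feeds can be checked numerically. The sketch below is plain NumPy, not the paper's code: it evaluates k(x, y) = exp(−‖x − y‖²/(2σ²)) with the quoted σ² = 1 and computes the biased squared-MMD estimate between two samples (function names and the toy data are assumptions for illustration):

```python
import numpy as np

def rbf_kernel(X, Y, sigma2=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2)), as quoted above.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma2))

def mmd2(X, Y, sigma2=1.0):
    # Biased squared-MMD estimate between samples X and Y.
    return (rbf_kernel(X, X, sigma2).mean()
            - 2.0 * rbf_kernel(X, Y, sigma2).mean()
            + rbf_kernel(Y, Y, sigma2).mean())

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))
Y = rng.standard_normal((100, 2)) + 2.0   # shifted distribution
print(mmd2(X, X, sigma2=1.0))  # 0 for identical samples
print(mmd2(X, Y, sigma2=1.0))  # clearly positive for shifted samples
```

The "median" entry in the σ search grid refers to the common median heuristic, i.e. setting the bandwidth from the median pairwise distance between samples.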