Learning Elastic Costs to Shape Monge Displacements

Authors: Michal Klein, Aram-Alexandre Pooladian, Pierre Ablin, Eugene Ndiaye, Jonathan Niles-Weed, Marco Cuturi

NeurIPS 2024

Reproducibility Variable Result LLM Response
Research Type: Experimental. "We illustrate the soundness of our procedure on synthetic data, generated using our first contribution, in which we show near-perfect recovery of A's subspace using only samples. We demonstrate the applicability of this method by showing predictive improvements on single-cell data tasks."
Researcher Affiliation: Collaboration. Michal Klein (Apple), Aram-Alexandre Pooladian (NYU), Pierre Ablin (Apple), Eugène Ndiaye (Apple), Jonathan Niles-Weed (NYU), Marco Cuturi (Apple).
Pseudocode: Yes. Algorithm 1 MBO-ESTIMATOR(X, Y; γ, τ, ε); Algorithm 2 GROUND-TRUTH OT MAP T_{h,γ}; Algorithm 3 RECOVER-THETA(X, Y; γ, θ0).
Open Source Code: No. "We will release the entire codebase for experiments in the coming weeks, as Python notebooks/tutorials."
Open Datasets: Yes. "...using single-cell RNA sequencing data from [Srivatsan et al., 2020]."
Dataset Splits: Yes. "We then use 80% train / 20% test folds to benchmark two MBO estimators."
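The 80/20 benchmarking folds quoted above can be reproduced with a simple random index split; the sample count and seed below are illustrative assumptions, not the paper's actual values.

```python
import numpy as np

# Illustrative 80% train / 20% test split (n and seed are assumptions).
rng = np.random.default_rng(0)
n = 1000
perm = rng.permutation(n)
n_train = int(0.8 * n)
train_idx, test_idx = perm[:n_train], perm[n_train:]
```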
Hardware Specification: No. "Although no claim is made in terms of compute performance, the fairly small scale of the experiments allows these runs to be executed on a single GPU."
Software Dependencies: No. "In practice, we use the JAXOPT [Blondel et al., 2021] library to run proximal gradient descent. ... Our code implements a parameterized `RegTICost` class, added to OTT-JAX [Cuturi et al., 2022]. ... We plot the Sinkhorn divergence (cf. Feydy et al. [2019]) for the ℓ2 2 cost for reference (see the documentation in OTT-JAX [Cuturi et al., 2022])."
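The proximal gradient descent that JAXOPT runs in the quoted excerpt can be sketched in plain NumPy as ISTA on a lasso-type objective; the problem instance, regularization weight, and step size below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Toy lasso problem: minimize 0.5 * ||Xw - y||^2 + lam * ||w||_1.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
w_true = np.zeros(10)
w_true[:3] = [1.0, -2.0, 0.5]
y = X @ w_true
lam = 0.1
step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1/L, L = Lipschitz const. of the gradient

w = np.zeros(10)
for _ in range(500):
    grad = X.T @ (X @ w - y)                         # gradient of the smooth part
    w = soft_threshold(w - step * grad, step * lam)  # proximal step
```

JAXOPT's `ProximalGradient` solver wraps this same iteration, taking the smooth loss and a proximal operator (e.g. `jaxopt.prox.prox_lasso`) as arguments.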
Experiment Setup: Yes. "We report performance after 1000 iterations of Riemannian gradient descent, with a step-size η of 0.1/√(i+1) at iteration i."
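The reported schedule can be sketched as gradient descent over matrices with orthonormal columns, retracting onto the manifold via QR after each step; the toy objective, dimensions, and retraction choice are assumptions for illustration, while the 1000 iterations and the decaying step size η = 0.1/√(i+1) come from the quoted setup.

```python
import numpy as np

def retract(A):
    """QR retraction onto matrices with orthonormal columns."""
    Q, R = np.linalg.qr(A)
    return Q * np.sign(np.diag(R))  # fix column-sign ambiguity

rng = np.random.default_rng(0)
d, p = 5, 2
target = retract(rng.normal(size=(d, p)))  # minimizer of the toy objective
A = retract(rng.normal(size=(d, p)))

def loss(M):
    return 0.5 * np.linalg.norm(M - target) ** 2

loss0 = loss(A)
for i in range(1000):
    eta = 0.1 / np.sqrt(i + 1)   # step-size schedule from the experiment setup
    grad = A - target            # Euclidean gradient of the toy loss
    A = retract(A - eta * grad)  # gradient step, then retraction to the manifold
```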