Expected Sliced Transport Plans

Authors: Xinran Liu, Rocío Díaz Martín, Yikun Bai, Ashkan Shahbazi, Matthew Thorpe, Akram Aldroubi, Soheil Kolouri

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we demonstrate the connection between our approach and the recently proposed min-SWGG, along with illustrative numerical examples that support our theoretical findings." From Section 3 (Experiments). Section 3.4 (Interpolation): "We use the Point Cloud MNIST 2D dataset (Garcia, 2023), a reimagined version of the classic MNIST dataset (LeCun, 1998), where each image is represented as a set of weighted 2D point clouds instead of pixel values. In Figure 5, we illustrate the interpolation between two point clouds that represent digits 7 and 6." Section 3.5 (Weak Convergence): "For the experiment, µ and ν are chosen to be discrete measures with N particles of uniform mass, sampled from two Gaussian distributions (see Figure 6, top)." Section 3.6 (Transport-Based Embedding): "Following the linear optimal transportation (LOT) framework, also referred to as the Wasserstein or transport-based embedding framework (Wang et al., 2013; Kolouri et al., 2021; Nenna & Pass, 2023; Bai et al., 2023; Martín et al., 2024), we investigate the application of our proposed transport plan in point cloud classification. Let µ0 = Σ_{i=1}^{N} α_i δ_{x_i} denote a reference probability measure and let µ_k = Σ_{j=1}^{N_k} β_j^k δ_{y_j^k} denote a target probability measure. ... In this section, we use a reference measure with N particles of uniform mass to embed the digits from the Point Cloud MNIST 2D dataset using various transport plans. We then perform a logistic regression on the embedded digits and present the results in Figure 7."
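The transport-based embedding the response quotes (couple each target cloud to a fixed reference measure, then flatten the barycentric projection into a feature vector for a downstream classifier) can be sketched in a few lines. This is a minimal numpy illustration of the generic LOT recipe, not the authors' implementation: the Sinkhorn solver, the function names, and all sizes and parameters here are assumptions for the sketch.

```python
import numpy as np

def sinkhorn_plan(a, b, C, reg=0.5, n_iter=300):
    """Entropic OT plan between histograms a and b for cost matrix C (Sinkhorn)."""
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)  # match column marginals
        u = a / (K @ v)    # match row marginals
    return u[:, None] * K * v[None, :]

def lot_embed(ref_pts, ref_w, tgt_pts, tgt_w, reg=0.5):
    """LOT-style embedding: barycentric projection of a plan against a
    fixed reference measure, giving a fixed-length representation."""
    # Squared-Euclidean cost between reference and target particles.
    C = np.sum((ref_pts[:, None, :] - tgt_pts[None, :, :]) ** 2, axis=-1)
    P = sinkhorn_plan(ref_w, tgt_w, C, reg)
    # Each reference particle maps to the mass-weighted mean of the
    # target points it is coupled to.
    return (P @ tgt_pts) / ref_w[:, None]

rng = np.random.default_rng(0)
ref_pts, ref_w = rng.normal(size=(16, 2)), np.full(16, 1 / 16)
tgt_pts, tgt_w = rng.normal(loc=1.0, size=(20, 2)), np.full(20, 1 / 20)
# Flattened barycentric image: a fixed-length vector usable by, e.g.,
# a logistic regression over many embedded point clouds.
feat = lot_embed(ref_pts, ref_w, tgt_pts, tgt_w).ravel()
```

Because every cloud is embedded against the same reference, clouds of different sizes all map to vectors of the same length, which is what makes the logistic-regression step in the quoted passage possible.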
Researcher Affiliation | Academia | Xinran Liu¹, Rocío Díaz Martín², Yikun Bai¹, Ashkan Shahbazi¹, Matthew Thorpe³, Akram Aldroubi⁴, Soheil Kolouri¹. ¹Department of Computer Science, Vanderbilt University, Nashville, TN 37235; ²Department of Mathematics, Tufts University, Medford, MA 02155; ³Department of Statistics, University of Warwick, Coventry, CV4 7AL, UK; ⁴Department of Mathematics, Vanderbilt University, Nashville, TN 37235
Pseudocode | No | The paper describes methods and theoretical findings but does not contain a clearly labeled pseudocode or algorithm block with structured steps.
Open Source Code | No | The paper does not contain an explicit statement about releasing source code or provide a link to a code repository for the described methodology.
Open Datasets | Yes | "We use the Point Cloud MNIST 2D dataset (Garcia, 2023), a reimagined version of the classic MNIST dataset (LeCun, 1998), where each image is represented as a set of weighted 2D point clouds instead of pixel values. ... To further demonstrate the efficiency of our proposed method for the classification task, we consider the widely used benchmark dataset in 3D computer vision and geometric deep learning, ModelNet40 (Wu et al., 2015)."
Dataset Splits | Yes | "This dataset consists of objects represented as 3D point clouds, with 2048 points per object. It contains 40 different object categories. The training set comprises 9,840 samples, and the test set includes 2,468 samples."
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU models, CPU types).
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies used in the experiments.
Experiment Setup | Yes | "For the table, the regularization parameter for entropic OT is set to λ = 10, and for EST, the temperature is set to τ = 0 with L = 128 slices. ... In these experiments, for all methods, the reference measure µ0 is chosen as the uniform measure on the cube [-1, 1]^3, and we use a 1NN classifier with Euclidean distance in the embedding space." Table columns: 1NN Classification | Sinkhorn (λ = 10) | Sinkhorn (λ = 1) | ESP (L = 128) | ESP (L = 1024) | OT (LP).
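The λ values in the quoted setup control how strongly the entropic OT plan is smoothed: as λ grows, the Sinkhorn plan drifts toward the fully mixed coupling a ⊗ b. A hedged numpy sketch of that effect (not the authors' code; the point-cloud sizes and the λ pair here are assumptions chosen only to illustrate the trend):

```python
import numpy as np

def sinkhorn_plan(a, b, C, reg, n_iter=500):
    """Entropic OT plan; `reg` plays the role of λ in the setup above."""
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
# Uniform point clouds on [-1, 1]^3, mirroring the cube-supported reference.
x = rng.uniform(-1, 1, size=(10, 3))
y = rng.uniform(-1, 1, size=(10, 3))
a = b = np.full(10, 0.1)
C = np.sum((x[:, None] - y[None, :]) ** 2, axis=-1)

sharp = sinkhorn_plan(a, b, C, reg=1.0)     # λ = 1: closer to the LP plan
blurred = sinkhorn_plan(a, b, C, reg=10.0)  # λ = 10: heavily smoothed
indep = np.outer(a, b)                      # fully mixed coupling a ⊗ b
```

Comparing `sharp` and `blurred` against `indep` shows why the table sweeps λ: the embedding (and hence the 1NN accuracy) depends on how diffuse the plan is.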