Fourier Sliced-Wasserstein Embedding for Multisets and Measures

Authors: Tal Amir, Nadav Dym

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "NUMERICAL EXPERIMENTS: In this section, we demonstrate how the theoretical strengths of our method manifest in practice. Specifically, we show that our method produces embeddings with superior distance preservation and improves performance in practical learning tasks." Subsections: Comparison with PSWE... Learning to approximate the Wasserstein distance... Robustness to parameter reduction...
Researcher Affiliation | Academia | Tal Amir, Faculty of Mathematics, Technion Israel Institute of Technology, Haifa, Israel (EMAIL); Nadav Dym, Faculty of Mathematics and Faculty of Computer Science, Technion Israel Institute of Technology, Haifa, Israel (EMAIL)
Pseudocode | No | The paper describes the method and its computation through formulas and textual descriptions, but does not include a dedicated 'Pseudocode' or 'Algorithm' section or block with structured steps.
Open Source Code | Yes | "Our code is available at https://github.com/tal-amir/FSW-embedding."
Open Datasets | Yes | The following evaluation datasets, kindly provided to us by the authors, were used: three synthetic datasets noisy-sphere-3, noisy-sphere-6 and uniform, with random point clouds in R^3, R^6 and R^2; two real datasets ModelNet-small and ModelNet-large, with 3D point clouds sampled from ModelNet40 objects (Wu et al., 2015); and the gene-expression dataset RNAseq (Yao et al., 2021), with multisets in R^2000.
Dataset Splits | No | The paper mentions using several datasets (e.g., ModelNet40, RNAseq) and performing classification and distance-approximation tasks, implying the use of data splits for evaluation. However, it does not explicitly state the specific percentages, sample counts, or methodology for creating training, validation, and test splits within the provided text.
Hardware Specification | Yes | "All experiments were conducted on a single Nvidia A40 GPU."
Software Dependencies | No | The paper mentions using the 'ot.emd2() function of the Python Optimal Transport package (Flamary et al., 2021)' and a 'standard PyTorch implementation of PointNet (Xia, 2019)'. While software packages are named, specific version numbers for these or other dependencies are not provided.
Experiment Setup | Yes | "In this experiment we used embedding dimensions m1 = m2 = 1000. The MLP consisted of three layers with a hidden dimension of 1000. With this choice of hyperparameters, our model has roughly 3 million learnable parameters and 5 million parameters in total... In addition, we used leaky-ReLU activations and no biases in Φ... We used fixed parameters for the first embedding E1 and learnable parameters for the second embedding E2."
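The MLP configuration quoted above (three layers, hidden dimension 1000, leaky-ReLU activations, no biases) can be sketched as follows. This is not the authors' code; the assumption that the input and output widths also equal 1000 (matching the embedding dimensions m1 = m2 = 1000) is ours:

```python
import torch
import torch.nn as nn

DIM = 1000  # assumption: input/output widths match the embedding dimension m1 = m2 = 1000

# Three linear layers with hidden width 1000, leaky-ReLU activations,
# and no biases, as described in the experiment setup (a sketch only).
mlp = nn.Sequential(
    nn.Linear(DIM, DIM, bias=False),
    nn.LeakyReLU(),
    nn.Linear(DIM, DIM, bias=False),
    nn.LeakyReLU(),
    nn.Linear(DIM, DIM, bias=False),
)

# 3 x 1000 x 1000 = 3,000,000 weights, consistent with the paper's
# "roughly 3 million learnable parameters".
n_params = sum(p.numel() for p in mlp.parameters())
out = mlp(torch.randn(4, DIM))  # batch of 4 embeddings in, shape (4, 1000) out
```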