Towards Marginal Fairness Sliced Wasserstein Barycenter

Authors: Khai Nguyen, Hai Nguyen, Nhat Ho

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we conduct experiments on 3D point-cloud averaging, color harmonization, and training of a sliced Wasserstein autoencoder with class-fairness representation to show the favorable performance of the proposed surrogate MFSWB problems."
Researcher Affiliation | Collaboration | Khai Nguyen, Department of Statistics and Data Sciences, University of Texas at Austin, Austin, TX 78713, USA (EMAIL); Hai Nguyen, Qualcomm AI Research (EMAIL); Nhat Ho, Department of Statistics and Data Sciences, University of Texas at Austin, Austin, TX 78713, USA (EMAIL)
Pseudocode | Yes | "We refer the reader to Algorithm 1 in Appendix B for more detail. Specifically, we now discuss the discrete SWB, i.e., marginals and the barycenter are discrete measures." Algorithms 2, 3, and 4 in Appendix B give the gradient estimation and optimization procedures for the remaining objectives.
Open Source Code | Yes | "Code for the paper is published at https://github.com/khainb/MFSWB."
Open Datasets | Yes | "We select two point-cloud shapes which consist of 2048 points in the ShapeNet Core-55 dataset (Chang et al., 2015). We train the autoencoder on the MNIST dataset (LeCun et al., 1998) (d = 28 × 28) with κ1 = 8.0, κ2 = 0.5, 250 epochs, using a uniform distribution on a 2D ball (h = 2) as µ0 with different learning rates {0.0001, 0.0005, 0.0008, 0.001}, doing a grid search for each method and reporting its best score for each metric. We evaluate the scalability of our method using two well-established datasets: CIFAR10 (Krizhevsky et al., 2009) (d = 32 × 32 × 3) and STL10 (Coates et al., 2011) (d = 64 × 64 × 3)."
Dataset Splits | Yes | "We train the autoencoder on the MNIST dataset (LeCun et al., 1998) (d = 28 × 28)... Following the training phase, we evaluate the trained autoencoders on the test set."
Hardware Specification | Yes | "For the Gaussian simulation, point-cloud averaging, and color harmonization, we use an HP Omen 25L desktop for conducting experiments. Additionally, for the sliced Wasserstein autoencoder with class-fair representation experiment, we employ an NVIDIA Tesla V100 GPU."
Software Dependencies | No | The paper mentions an "RMSprop optimizer with learning rate 0.01, alpha=0.99, eps=1e-8" and "stochastic gradient descent" but does not specify software names with version numbers for the libraries or frameworks used (e.g., PyTorch, TensorFlow, Python version).
Experiment Setup | Yes | "We use stochastic gradient descent with 50000 iterations, learning rate 0.01, and 100 projections. We use stochastic gradient descent with 10000 iterations, learning rate 0.01, and 10 projections. We then minimize the barycenter losses, i.e., USWB, MFSWB (λ ∈ {0.1, 1, 10}), s-MFSWB, us-MFSWB, and es-MFSWB, using stochastic gradient descent with learning rate 0.0001 and 20000 iterations. We train the autoencoder on the MNIST dataset (LeCun et al., 1998) (d = 28 × 28) with κ1 = 8.0, κ2 = 0.5, 250 epochs, using a uniform distribution on a 2D ball (h = 2) as µ0 with different learning rates {0.0001, 0.0005, 0.0008, 0.001}, doing a grid search for each method and reporting its best score for each metric. We use the RMSprop optimizer with learning rate 0.01, alpha=0.99, eps=1e-8. For these experiments, we set κ1 = 8.0, κ2 = 0.5, and train for 500 epochs with a learning rate of 0.0005. The CIFAR10 experiment uses a uniform distribution on a 48-dimensional ball (h = 48), while the STL10 experiment uses a 128-dimensional ball (h = 128)."
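
The quoted setups repeatedly fix a "number of projections" (100 or 10), which refers to the Monte Carlo estimate underlying all sliced Wasserstein objectives. The paper's Algorithms 1–4 are not reproduced in this report; below is a minimal NumPy sketch of the standard estimator, assuming two empirical measures with equal numbers of support points. The function name and signature are illustrative, not the authors' code.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, p=2, rng=None):
    """Monte Carlo estimate of the sliced Wasserstein-p distance between
    two empirical measures X, Y of shape (n, d) with uniform weights."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Sample projection directions uniformly on the unit sphere S^{d-1}.
    theta = rng.standard_normal((n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both point sets onto each direction: shape (n, n_projections).
    X_proj, Y_proj = X @ theta.T, Y @ theta.T
    # In 1D, the Wasserstein distance between uniform empirical measures
    # is the distance between sorted projections.
    X_sorted = np.sort(X_proj, axis=0)
    Y_sorted = np.sort(Y_proj, axis=0)
    return np.mean(np.abs(X_sorted - Y_sorted) ** p) ** (1.0 / p)
```

In the experiments quoted above, this estimate (and its gradient with respect to the barycenter's support points) would be recomputed at every SGD iteration with fresh random projections.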
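
Since the report flags that no framework or version is named, the quoted "RMSprop optimizer with learning rate 0.01, alpha=0.99, eps=1e-8" can only be pinned down by its update rule. The following is a framework-free sketch of one RMSprop step with exactly those hyperparameters; the function name and the toy quadratic objective are illustrative, not the paper's MFSWB loss.

```python
import numpy as np

def rmsprop_step(param, grad, sq_avg, lr=0.01, alpha=0.99, eps=1e-8):
    """One RMSprop update with the hyperparameters quoted in the report.

    sq_avg is the exponential running average of squared gradients.
    """
    sq_avg = alpha * sq_avg + (1.0 - alpha) * grad ** 2
    param = param - lr * grad / (np.sqrt(sq_avg) + eps)
    return param, sq_avg

# Illustrative use: minimize f(x) = x^2, a stand-in objective.
x, sq_avg = 2.0, 0.0
for _ in range(200):
    x, sq_avg = rmsprop_step(x, 2.0 * x, sq_avg)
```

Reporting the rule itself sidesteps the ambiguity: PyTorch, TensorFlow, and hand-rolled implementations all agree on this form up to minor details (e.g., where eps is added), which is precisely why a version-pinned dependency list would aid reproduction.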