Unsupervised Anomaly Detection through Mass Repulsing Optimal Transport

Authors: Eduardo Fernandes Montesuma, Adel El Habazi, Fred Maurice Ngolè Mboula

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through a series of experiments in existing benchmarks, and fault detection problems, we show that our algorithm improves over existing methods. Our code is publicly available at https://github.com/eddardd/MROT
Researcher Affiliation | Academia | Eduardo Fernandes Montesuma EMAIL Université Paris-Saclay, CEA, List, F-91120 Palaiseau, France; Adel el Habazi* EMAIL École Centrale de Nantes, Nantes, France; Fred Ngolè Mboula EMAIL Université Paris-Saclay, CEA, List, F-91120 Palaiseau, France
Pseudocode | Yes | Algorithm 1: Mass Repulsive Optimal Transport.
Open Source Code | Yes | Our code is publicly available at https://github.com/eddardd/MROT
Open Datasets | Yes | We divide our experiments in 3 parts. Section 4.1 shows our results on ADBench (Han et al., 2022). Section 4.2 shows our experiments in fault detection on the Tennessee Eastman Process (Montesuma et al., 2024b; Reinartz et al., 2021). Furthermore, from Figure 9 (b), we see that our method struggles in high-dimensional AD, such as those in the NLP datasets of Han et al. (2022).
Dataset Splits | Yes | In this experiment, we downsample the number of anomalous samples per fault category to {5, 10, ..., 30}. This results in a percentage of {4.45%, 8.53%, 12.28%, 15.73%, 18.92%, 21.87%} of anomalous samples. In Figure 11, we report our aggregated results over all percentages of anomalies.
Hardware Specification | No | The paper does not explicitly describe the hardware used for its experiments. It mentions computational complexity but no specific CPU/GPU models or configurations.
Software Dependencies | No | The paper mentions tools such as POT (Python Optimal Transport) and XGBoost, and algorithms such as the simplex method and Sinkhorn, but it does not specify version numbers for any software dependencies.
Experiment Setup | Yes | Here, we analyze the robustness of our method with respect to the entropic regularization penalty ϵ and the number of nearest neighbors k in Nk. In our experiments, we evaluated our method on the values ϵ ∈ {0, 10⁻², 10⁻¹, 10⁰}, where ϵ = 0 implies the use of exact OT, that is, linear programming. For MROT, we use k ∈ {5, 10, 20, ..., 50}.
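The ϵ sweep in the Experiment Setup row contrasts exact OT (ϵ = 0, linear programming) with entropic regularization solved by Sinkhorn iterations. A minimal NumPy sketch of plain Sinkhorn on toy data (this is not the authors' MROT code; the point sets, sizes, and cost normalization are illustrative assumptions):

```python
import numpy as np

def sinkhorn(a, b, C, eps, n_iters=500):
    """Entropic OT: alternately rescale rows/columns of the Gibbs kernel."""
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iters):
        v = b / (K.T @ u)             # scale columns toward marginal b
        u = a / (K @ v)               # scale rows toward marginal a
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))           # toy source samples
Y = rng.normal(size=(8, 2))           # toy target samples
C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # squared Euclidean cost
C /= C.max()                          # normalize cost for numerical stability

a = np.full(6, 1 / 6)                 # uniform source weights
b = np.full(8, 1 / 8)                 # uniform target weights

for eps in (1e-1, 1e0):               # two of the swept penalties
    T = sinkhorn(a, b, C, eps)
    print(f"eps={eps}: total transported mass = {T.sum():.6f}")
```

Note that ϵ = 0 is not covered by this iteration: exact OT needs an LP solver (e.g., `ot.emd` in POT, which the paper uses), and very small ϵ typically requires log-domain stabilization to avoid underflow in the Gibbs kernel.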
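Since the Software Dependencies row notes that no version numbers are reported, replicators may want to record the versions in their own environment. A small sketch using the standard library's `importlib.metadata`; the PyPI distribution name `pot` for Python Optimal Transport is an assumption, and the package list is illustrative:

```python
from importlib.metadata import version, PackageNotFoundError

# Packages named in the paper's tooling; "pot" is the assumed PyPI
# distribution name for Python Optimal Transport.
for pkg in ("pot", "xgboost", "numpy"):
    try:
        print(f"{pkg}=={version(pkg)}")   # pinned-style line, e.g. for a requirements file
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```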