MANTRA: The Manifold Triangulations Assemblage
Authors: Rubén Ballester, Ernst Roell, Daniel Bin Schmid, Mathieu Alain, Sergio Escalera, Carles Casacuberta, Bastian Rieck
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To address this gap, we introduce MANTRA, the first large-scale, diverse, and intrinsically higher-order dataset for benchmarking higher-order models, comprising over 43,000 and 250,000 triangulations of surfaces and three-dimensional manifolds, respectively. With MANTRA, we assess several graph- and simplicial complex-based models on three topological classification tasks. We demonstrate that while simplicial complex-based neural networks generally outperform their graph-based counterparts in capturing simple topological invariants, they also struggle, suggesting a rethink of TDL. Thus, MANTRA serves as a benchmark for assessing and advancing topological methods, paving the way towards more effective higher-order models. |
| Researcher Affiliation | Academia | 1Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Spain 2AIDOS Lab, University of Fribourg, Switzerland 3Institute of AI for Health, Helmholtz Munich, Germany 4Technical University of Munich, Germany 5Centre for Artificial Intelligence, University College London, UK 6Computer Vision Center, Spain |
| Pseudocode | No | The paper describes model details in Appendix C under "MODEL DETAILS" but does not contain any structured pseudocode or algorithm blocks. It describes the message-passing paradigm generally without presenting specific algorithms implemented by the authors in a pseudocode format. |
| Open Source Code | Yes | We make the dataset and benchmark code available via two repositories: (i) https://github.com/aidos-lab/MANTRA (ii) https://github.com/aidos-lab/mantra-benchmarks These repositories contain (i) the raw and processed datasets, and (ii) the code to reproduce all our results. |
| Open Datasets | Yes | We make the dataset and benchmark code available via two repositories: (i) https://github.com/aidos-lab/MANTRA (ii) https://github.com/aidos-lab/mantra-benchmarks [...] Both formats, raw and processed, are versioned using the Semantic Versioning 2.0.0 convention (Preston-Werner) and are also available via Zenodo,1 thus ensuring reproducibility and clear tracking of the dataset evolution. 1https://doi.org/10.5281/zenodo.14103581 |
| Dataset Splits | Yes | Due to the high imbalance in the datasets for most labels, we performed stratified train/validation/test splits for each task individually, with 60/20/20 percentage of the data for each split, respectively. Splits were generated using the same random seed for each run, ensuring that the same splits are used across all configurations. |
| Hardware Specification | No | The paper mentions 'Due to computational limitations' and discusses 'A significant computational bottleneck' but does not provide specific details on the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The PyTorch Geometric (Fey & Lenssen, 2019, PyG) version is available as a Python package that can be installed using the command pip install mantra-dataset. Docker images and workflows, together with package dependencies, are included to ensure a unique environment across different machine configurations. (No specific version numbers are given for PyTorch Geometric or other dependencies.) |
| Experiment Setup | Yes | To ensure fairness, all configurations use the same learning rate of 0.01 and the same number of epochs (6)... All models were trained using the Adam optimizer. Hyperparameter details can be found in Appendix D. |
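The stratified 60/20/20 split with a fixed seed described in the Dataset Splits row can be sketched with scikit-learn. This is a minimal illustration, not the authors' code (which lives in the mantra-benchmarks repository); the label vector and the seed value 42 are placeholders.

```python
from sklearn.model_selection import train_test_split

# Toy stand-in for one MANTRA task: sample indices with an imbalanced
# label vector, as the paper reports for most labels.
indices = list(range(1000))
labels = [0] * 900 + [1] * 100

SEED = 42  # placeholder; the paper fixes one seed so all runs share splits

# First split off 60% for training; stratify preserves the label ratio.
train_idx, rest_idx, train_y, rest_y = train_test_split(
    indices, labels, train_size=0.6, stratify=labels, random_state=SEED
)
# Split the remaining 40% evenly into validation and test (20%/20% overall).
val_idx, test_idx, val_y, test_y = train_test_split(
    rest_idx, rest_y, train_size=0.5, stratify=rest_y, random_state=SEED
)

print(len(train_idx), len(val_idx), len(test_idx))  # 600 200 200
```

Stratifying both splits on the task label keeps the class imbalance identical across train, validation, and test, which is what makes per-task splits necessary when each task has its own label distribution.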
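The training setup in the Experiment Setup row (Adam, learning rate 0.01, 6 epochs) can be sketched in PyTorch. The model and data here are placeholders; the actual architectures are the graph- and simplicial-complex-based networks described in Appendix C of the paper.

```python
import torch

# Placeholder model standing in for the benchmarked architectures.
model = torch.nn.Linear(8, 2)
# Learning rate and optimizer taken from the paper's setup.
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for epoch in range(6):  # the paper trains every configuration for 6 epochs
    x = torch.randn(32, 8)          # placeholder batch of features
    y = torch.randint(0, 2, (32,))  # placeholder classification labels
    loss = torch.nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Fixing the optimizer, learning rate, and epoch budget across all configurations is what the paper means by "fairness": differences in task performance are then attributable to the models rather than to tuning.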