Hodge-Aware Convolutional Learning on Simplicial Complexes
Authors: Maosheng Yang, Geert Leus, Elvin Isufi
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we corroborate the three principles by comparing with methods that either violate or do not respect them. Overall, this paper bridges learning on SCs with the Hodge theorem, highlighting its importance for rational and effective learning from simplicial data, and provides theoretical insights into convolutional learning on SCs. ... In Section 6, we validate our theoretical findings and highlight the effect of the three principles, the need for Hodge-aware learning, as well as the stability, based on different simplicial tasks including recovering foreign currency exchange (forex) rates, predicting triadic and tetradic collaborations, and ocean current trajectories. |
| Researcher Affiliation | Academia | Maosheng Yang EMAIL Department of Intelligent Systems Delft University of Technology. Geert Leus EMAIL Department of Microelectronics Delft University of Technology. Elvin Isufi EMAIL Department of Intelligent Systems Delft University of Technology. |
| Pseudocode | No | The paper describes methods and procedures using mathematical formulations and descriptive text, but it does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present structured steps formatted like code or an algorithm. |
| Open Source Code | Yes | We refer to Learning_on_SCs for the reproducibility of our experiments. We also note that the proposed architecture is implemented in the TopoModelX framework (Hajij et al., 2024). |
| Open Datasets | Yes | Considering a coauthorship dataset (Ammar et al., 2018), we built an SC following Ebli et al. (2020)... We also consider the Global Drifter Program dataset (http://www.aoml.noaa.gov/envids/gld/) localized around Madagascar. It consists of ocean drifters whose coordinates are logged every 12 hours. |
| Dataset Splits | Yes | For the 2-simplex prediction, we use the collaboration impact (the number of citations) to split the total set of triangles into the positive set TP = {t | [x2]t > 7} containing 1482 closed triangles and the negative set TN = {t | [x2]t ≤ 7} containing 1803 open triangles, so that we have balanced positive and negative samples. We further split 80% of the positive triangle set for training, 10% for validation and 10% for testing; likewise for the negative triangle set. ... This results in 200 trajectories, of which we use 180 for training. |
| Hardware Specification | Yes | All experiments for simplex predictions were run on a single NVIDIA A40 GPU with 48 GB of memory using CUDA 11.5. ... Here we report the number of parameters and the running time of SCCNN for 2-simplex prediction on one NVIDIA Quadro K2200 with 4GB memory |
| Software Dependencies | Yes | All experiments for simplex predictions were run on a single NVIDIA A40 GPU with 48 GB of memory using CUDA 11.5. ... We created a synthetic SC with 100 nodes, 241 edges and 135 triangles with the GUDHI toolbox Rouvreau (2015) |
| Experiment Setup | Yes | Hyperparameters: 1) the number of layers L ∈ {1, 2, 3, 4, 5}; 2) the number of intermediate and output features, set equal, F ∈ {16, 32}; 3) the convolution orders for SCCNNs are set to be the same, i.e., T_d = T^d = T_u = T^u = T ∈ {1, 2, 3, 4, 5}; 4) the nonlinearity in the feature-learning phase: Leaky ReLU with a negative slope of 0.01; 5) MPSN is set as in Bodnar et al. (2022); 6) the MLP in the prediction phase: two layers with a sigmoid nonlinearity; and 7) the binary cross-entropy loss and the Adam optimizer with a learning rate of 0.001 are used; the number of epochs is 1000, with early stopping. |
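The balanced split described in the Dataset Splits row (citation threshold of 7, then 80/10/10 per class) and the hyperparameter grid from the Experiment Setup row can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name `split_triangles`, the triangle-id dictionary format, and the `HPARAM_GRID` structure are all assumptions made for the example.

```python
# Illustrative sketch (not the authors' released code) of the balanced
# 2-simplex split: triangles with more than 7 citations form the positive
# (closed) set, the rest the negative (open) set; each set is split
# 80% / 10% / 10% into train / validation / test.
import random

def split_triangles(citations, threshold=7, seed=0):
    """citations: dict mapping a triangle id to its citation count (assumed format)."""
    positive = [t for t, c in citations.items() if c > threshold]
    negative = [t for t, c in citations.items() if c <= threshold]
    rng = random.Random(seed)
    splits = {}
    for name, pool in (("positive", positive), ("negative", negative)):
        pool = pool[:]          # copy before shuffling
        rng.shuffle(pool)
        n_train = int(0.8 * len(pool))
        n_val = int(0.1 * len(pool))
        splits[name] = {
            "train": pool[:n_train],
            "val": pool[n_train:n_train + n_val],
            "test": pool[n_train + n_val:],
        }
    return splits

# Hypothetical grid mirroring the Experiment Setup row; names are assumptions.
HPARAM_GRID = {
    "num_layers": [1, 2, 3, 4, 5],
    "num_features": [16, 32],
    "conv_order": [1, 2, 3, 4, 5],  # shared order T_d = T^d = T_u = T^u = T
    "negative_slope": 0.01,         # Leaky ReLU in the feature-learning phase
    "lr": 0.001,                    # Adam optimizer, binary cross-entropy loss
    "epochs": 1000,                 # with early stopping
}
```

The per-class 80/10/10 split preserves the positive/negative balance inside each of the train, validation, and test sets, which is what the quoted setup requires.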