Benchmarking Edge Regression on Temporal Networks
Authors: Muberra Ozmen, Florence Regol, Thomas Markovich
DMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this work, we present four novel datasets that have been tailored to TER, as well as a variety of benchmark results for strong heuristic baselines and industry standard temporal and static graph neural networks. The remainder of this work is organized as follows. In Section 2 we present a review of the literature and related work on the topic. We present a clear elucidation of the problem statement in Section 3. In Section 4 we turn our attention to the four new datasets that we have constructed. We present the baseline results in Section 5, and finally we present our conclusions in Section 6. |
| Researcher Affiliation | Industry | Muberra Ozmen EMAIL Cash App Montreal, QC, Canada Florence Regol EMAIL Cash App Montreal, QC, Canada Thomas Markovich EMAIL Cash App Cambridge, MA, USA |
| Pseudocode | No | The paper describes prediction methods like Moving Average and Edge Similarity using mathematical formulas and conceptual steps, but it does not include a clearly labeled 'Pseudocode' or 'Algorithm' block, nor is it formatted as a structured, code-like procedure. |
| Open Source Code | No | Processed versions of proposed datasets are accessible through this repository 1. Footnote 1: huggingface.co/cash-app-inc. Explanation: The paper mentions a repository for the processed datasets and states that models are implemented using PyTorch, PyTorch Geometric, and PyTorch Geometric Temporal libraries. However, it does not provide an explicit statement about releasing the source code for the methodology developed in *this* paper, nor does it provide a direct link to their implementation code. |
| Open Datasets | Yes | Processed versions of proposed datasets are accessible through this repository 1. Footnote 1: huggingface.co/cash-app-inc. The Bureau of Transportation Statistics, under the United States Department of Transportation, monitors and reports on the on-time performance of domestic flights operated by major airlines. The datasets for 2019 (Trivedi, 2021) and 2015 (of Transportation, 2017) are publicly available on Kaggle to enable analyses of flight delays and airport performance. To investigate the impact of weather conditions on flight delays, we have supplemented the flight datasets with weather data from Open-Meteo, an open-source weather API (Zippenfenig, 2023). |
| Dataset Splits | Yes | In all our experiments, data is divided into training (70%), validation (10%) and testing (20%) sets chronologically. |
| Hardware Specification | Yes | All computations were run on an Nvidia DGX A100 machine with 128 AMD Rome 7742 cores and 8 Nvidia A100 GPUs. |
| Software Dependencies | No | All the models are implemented using PyTorch (Paszke et al., 2019), PyTorch Geometric (Fey and Lenssen, 2019) and PyTorch Geometric Temporal (Rozemberczki et al., 2021) libraries. All tuning was performed on the validation set, and we report the results on the test set that are associated with those hyperparameter settings. The tuned values for hyperparameters are provided in Appendix B. We performed 100 steps of hyperparameter optimization for all models using the software package Optuna (Akiba et al., 2019). Explanation: The paper lists several software packages and libraries used (PyTorch, PyTorch Geometric, PyTorch Geometric Temporal, Optuna) and cites their respective papers, but it does not specify concrete version numbers for these software components. |
| Experiment Setup | Yes | In our experimental setup, the dimensionality of the layers in fconv(·) is consistently set to ensure a final concatenation dimensionality of 600 before readout. The number of layers for all deep learning methods is set to 2. We conduct a grid search for the dropout probability, exploring values in [0, 0.1, 0.3, 0.5]. The readout function σ is chosen dataset-dependently, with the Sigmoid function employed for Epic Games and the Tanh function for the remaining datasets. The loss function is selected from among MAE, MSE, and Huber loss. We utilize the Adam optimizer, with the learning rate tuned from a uniform distribution between 0.0001 and 0.003 and weight decay selected from [0.0, 0.05, 0.1]. The learning rate scheduler is set to reduce the learning rate by a factor of 0.1 every 10, 20, or 100 steps. The batch size is set to 512 and the maximum number of epochs is set to 300, with early stopping defined as no improvement in validation loss for five consecutive steps. We performed 100 steps of hyperparameter optimization for all models using the software package Optuna (Akiba et al., 2019). All tuning was performed on the validation set, and we report the results on the test set that are associated with those hyperparameter settings. The tuned values for hyperparameters are provided in Appendix B. |
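The Dataset Splits row states that data is split 70%/10%/20% chronologically. A minimal sketch of such a split, assuming edge events are simple `(src, dst, timestamp, weight)` tuples (a hypothetical schema, not the paper's actual data format):

```python
# Minimal sketch of a chronological 70/10/20 split, as described in the
# Dataset Splits row. Events are sorted by timestamp first, so the test
# period strictly follows validation, which follows training.
# The (src, dst, timestamp, weight) tuple layout is an assumption.

def chronological_split(events, train_frac=0.7, val_frac=0.1):
    """Split temporal edge events into train/val/test sets by time order."""
    events = sorted(events, key=lambda e: e[2])  # sort by timestamp
    n = len(events)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = events[:n_train]
    val = events[n_train:n_train + n_val]
    test = events[n_train + n_val:]
    return train, val, test

# Example: 10 synthetic edge events with increasing timestamps
events = [(0, 1, t, 1.0) for t in range(10)]
train, val, test = chronological_split(events)
print(len(train), len(val), len(test))  # → 7 1 2
```

Splitting by time rather than at random avoids leakage: the model never sees events that occur after the evaluation window during training.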
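The Experiment Setup row describes the search space (dropout grid, loss choice, uniform learning-rate range, weight-decay grid, scheduler step) tuned with Optuna over 100 trials. As a hedged, stdlib-only illustration of that search space, here is a random-search stand-in; the `evaluate` callable is a hypothetical placeholder for training a model and returning its validation loss, and Optuna's actual sampler would be used in practice rather than this sketch:

```python
import random

# Sketch of the hyperparameter search space from the Experiment Setup row.
# The paper uses Optuna (100 trials); this stdlib random search only
# illustrates the stated ranges. `evaluate(cfg)` is a hypothetical stand-in
# for training a model with config `cfg` and returning validation loss.

SEARCH_SPACE = {
    "dropout": [0.0, 0.1, 0.3, 0.5],
    "loss": ["mae", "mse", "huber"],
    "weight_decay": [0.0, 0.05, 0.1],
    "scheduler_step": [10, 20, 100],  # lr decays by factor 0.1 at this interval
}

def sample_config(rng):
    """Draw one configuration from the stated search space."""
    cfg = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
    cfg["lr"] = rng.uniform(0.0001, 0.003)  # learning rate ~ U(1e-4, 3e-3)
    return cfg

def random_search(evaluate, n_trials=100, seed=0):
    """Keep the config with the lowest validation loss over n_trials draws."""
    rng = random.Random(seed)
    best_cfg, best_loss = None, float("inf")
    for _ in range(n_trials):
        cfg = sample_config(rng)
        loss = evaluate(cfg)
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss

# Toy objective: prefers low dropout and a learning rate near 1e-3
best, loss = random_search(lambda c: abs(c["lr"] - 0.001) + c["dropout"])
print(best)
```

As in the paper's protocol, selection would be driven by validation loss only, with the final numbers reported on the held-out test set.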