Between Linear and Sinusoidal: Rethinking the Time Encoder in Dynamic Graph Learning
Authors: Hsing-Huan Chung, Shravan S Chaudhari, Xing Han, Yoav Wald, Suchi Saria, Joydeep Ghosh
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments on six dynamic graph datasets, we demonstrate that the linear time encoder improves the performance of TGAT and DyGFormer in most cases. |
| Researcher Affiliation | Academia | Hsing-Huan Chung (Department of Electrical and Computer Engineering, University of Texas at Austin); Shravan Chaudhari (Department of Computer Science, Johns Hopkins University); Xing Han (Department of Computer Science, Johns Hopkins University); Yoav Wald (Center for Data Science, New York University); Suchi Saria (Department of Computer Science, Johns Hopkins University); Joydeep Ghosh (Department of Electrical and Computer Engineering, University of Texas at Austin) |
| Pseudocode | No | The paper provides detailed mathematical formulations for the TGAT and DyGFormer models in Appendix A, but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The experimental code is available at: https://github.com/hsinghuan/dg-linear-time.git. |
| Open Datasets | Yes | For our main experiments, we use six standard dynamic graph learning benchmark datasets: UCI (Panzarasa et al., 2009), Wikipedia (Kumar et al., 2019), Enron (Shetty & Adibi, 2004), Reddit (Kumar et al., 2019), LastFM (Kumar et al., 2019) and US Legis (Fowler, 2006; Huang et al., 2020). |
| Dataset Splits | Yes | Following the setup of the unified dynamic graph learning library, DyGLib (Yu et al., 2023), we split the time span of an entire dataset into 70%/15%/15% for train/validation/test. |
| Hardware Specification | No | The paper mentions "peak GPU memory usage (in GB) across datasets and model variants" in Table 8, but does not specify the model or type of GPU, CPU, or any other specific hardware component used for the experiments. |
| Software Dependencies | No | The paper mentions the "Adam optimizer (Kingma, 2014)" and "DyGLib (Yu et al., 2023)" but does not specify version numbers for these or for any other software dependencies such as programming languages or libraries. |
| Experiment Setup | Yes | We follow DyGLib to use the average precision (AP) as the evaluation metric, set the batch size to 200, and use the Adam optimizer (Kingma, 2014) with a learning rate of 0.0001. The sinusoidal time encoding dimension d_T is set to 100. We also set the linear time encoding dimension to 100 for TGAT but set it to 1 for DyGFormer variants...The search space for TGAT is the dropout rate among {0.1, 0.3, 0.5}. The search space for DyGFormer variants is the combination of the channel dimension d_ch among {30, 50} and the dropout rate among {0.1, 0.3, 0.5}. |
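For concreteness, the two encoder families the paper compares can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the fixed geometric frequencies and the random affine weights below are placeholders for parameters that are learned in TGAT and DyGFormer, and the function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def sinusoidal_time_encoding(t, d_T=100):
    """TGAT-style sinusoidal encoder: cos of the time delta scaled by
    per-dimension frequencies. Fixed geometric frequencies stand in for
    the learned ones (assumption for illustration)."""
    freqs = 1.0 / (10.0 ** np.linspace(0, 9, d_T))
    return np.cos(np.outer(np.atleast_1d(t), freqs))  # shape (n, d_T)

def linear_time_encoding(t, d_T=100, W=None, b=None):
    """Linear encoder: a learned affine map of the scalar time delta.
    Random weights here are placeholders for learned parameters."""
    W = rng.normal(size=(d_T,)) if W is None else W
    b = rng.normal(size=(d_T,)) if b is None else b
    return np.outer(np.atleast_1d(t), W) + b  # shape (n, d_T)

enc = sinusoidal_time_encoding([0.0, 1.0, 10.0])
print(enc.shape)  # (3, 100)
```

Note that the reported setup uses d_T = 100 for the sinusoidal encoder in both models, while the linear encoding dimension is 100 for TGAT but only 1 for the DyGFormer variants.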