Efficient Dynamic Graphs Learning with Refined Batch Parallel Training
Authors: Zhengzhao Feng, Rui Wang, Longjiao Zhang, Tongya Zheng, Ziqi Huang, Mingli Song
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate RBT's superior performance compared to existing MTGNN frameworks like TGL, ETC, and PRES in terms of training efficiency and accuracy across various dynamic graph datasets. |
| Researcher Affiliation | Academia | Zhengzhao Feng1, Rui Wang1,4, Longjiao Zhang1, Tongya Zheng2,3, Ziqi Huang1, Mingli Song1,3,4 — 1Zhejiang University; 2High-Performance Intelligent Computing Research Center for Ultra-Large Scale Graph Data, School of Computer and Computing Science, Hangzhou City University; 3State Key Laboratory of Blockchain and Data Security, Zhejiang University; 4Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security |
| Pseudocode | No | The paper describes the methodology using mathematical equations and descriptive text, but it does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | Yes | Our code is made publicly available at https://github.com/fengwudi/RBT. |
| Open Datasets | Yes | We employ seven dynamic graph datasets: Wikipedia, Reddit, MOOC, Last FM (from JODIE [Kumar et al., 2019]), Flights (a flight traffic network [Poursafaei et al., 2022]), and user interaction data from Wiki Talk [Leskovec, 2023b] and Stack Overflow [Leskovec, 2023a]. |
| Dataset Splits | Yes | The data partitioning followed a 70%-15%-15% split for training, validation, and testing, respectively, in line with previous studies [da Xu et al., 2020; Rossi et al., 2020; Gao et al., 2024; Li et al., 2023]. |
| Hardware Specification | Yes | Experiments are executed on an Ubuntu 22.04.3 LTS machine, utilizing an Intel Xeon Gold 6342 CPU @2.80GHz and an Nvidia A40 48GB GPU, equipped with 1TB of memory and 40TB of disk space. |
| Software Dependencies | No | The paper mentions 'Ubuntu 22.04.3 LTS' as the operating system, but does not provide specific version numbers for other key software components, such as programming languages or libraries. |
| Experiment Setup | Yes | The batch sizes are set to 600 and 1000 for testing, while other settings remain at their default values. Each dataset is trained for five epochs with five repetitions, and the mean of the final results is taken. |
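The 70%/15%/15% partitioning noted above is, for dynamic graphs, conventionally done chronologically rather than at random, so that the model never trains on events that occur after the validation or test windows. A minimal sketch of such a split, assuming the graph is stored as a list of `(src, dst, timestamp)` edge events (the function name and data layout are illustrative, not taken from the RBT codebase):

```python
def split_temporal_edges(edges, train_pct=70, val_pct=15):
    """Chronologically split time-stamped edges into train/val/test sets.

    edges: iterable of (src, dst, timestamp) tuples.
    Percentages are integers to keep the index arithmetic exact.
    """
    # Sort by event time so earlier interactions always precede later ones.
    edges = sorted(edges, key=lambda e: e[2])
    n = len(edges)
    n_train = n * train_pct // 100          # first 70% of events
    n_val = n * val_pct // 100              # next 15% of events
    train = edges[:n_train]
    val = edges[n_train:n_train + n_val]
    test = edges[n_train + n_val:]          # remaining ~15%
    return train, val, test

# Usage: 20 synthetic edge events on 5 nodes -> 14 train, 3 val, 3 test.
edges = [(i % 5, (i + 1) % 5, float(i)) for i in range(20)]
train, val, test = split_temporal_edges(edges)
```

Integer percentages avoid floating-point cutoffs landing one index off; every training timestamp is guaranteed to be no later than any validation or test timestamp.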