MaskDGNN: Self-Supervised Dynamic Graph Neural Networks with Activeness-aware Temporal Masking
Authors: Yiming He, Xiang Li, Zhongying Zhao, Haobing Liu, Peilan He, Yanwei Yu
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on five real-world dynamic graph datasets demonstrate that MaskDGNN outperforms state-of-the-art methods, achieving an average improvement of 7.07% in accuracy and 13.87% in MRR for link prediction tasks. |
| Researcher Affiliation | Academia | 1 Faculty of Information Science and Engineering, Ocean University of China; 2 Faculty of Computer Science and Engineering, Shandong University of Science and Technology |
| Pseudocode | No | The paper describes methods using mathematical equations and textual descriptions but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | https://github.com/heyimingheyiming/MaskDGNN |
| Open Datasets | Yes | In our experiments, we evaluate the performance of all methods using five publicly available real-world datasets: Bitcoin-Alpha, Bitcoin-OTC [Kumar et al., 2016], UCI-Message [Panzarasa et al., 2009], MOOC [Kumar et al., 2019], and Wiki-Talk [De Rijke, 2017]. |
| Dataset Splits | No | The paper does not state specific training/validation/test split percentages or sample counts for the datasets used. A temporal split is implied for dynamic graphs and a validation set is used for early stopping, but neither is quantified. |
| Hardware Specification | Yes | All experiments are performed on four RTX 3090 GPUs with 24GB of memory. |
| Software Dependencies | No | The paper mentions various architectural components and prior works (e.g., GCN, GRU, LSTM, Transformers) but does not provide specific version numbers for any software libraries, frameworks, or programming languages used for implementation. |
| Experiment Setup | Yes | In our model, the node embedding dimension d is set to 64, the mask ratio p ranges from 10% to 40% depending on the dataset, the node dynamics score ratio α is set to 0.7, and the window size w is determined based on the snapshot length, typically ranging from 4 to 8 for each dataset. The parameters λ, η, and γ are set to 1.0, 2.0, and -0.5, respectively. The learning rate for θ, denoted by τ, is 0.008, while the learning rate for Θ, denoted by β, is set to 0.01. The dropout rate is fixed at 0.1, and the Graph Convolutional Encoder consists of 2 layers. To prevent overfitting, we apply early stopping during validation with a patience of 10. |
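The hyperparameters reported in the experiment-setup row can be collected into a single configuration sketch. This is a hypothetical illustration, not the authors' released code: all variable names are assumptions, and only the values quoted from the paper are grounded.

```python
# Hypothetical configuration for MaskDGNN, transcribed from the paper's
# reported experiment setup. Names are illustrative, not from the repo.
MASKDGNN_CONFIG = {
    "embed_dim": 64,               # node embedding dimension d
    "mask_ratio_range": (0.10, 0.40),  # p, chosen per dataset
    "dynamics_score_ratio": 0.7,   # node dynamics score ratio alpha
    "window_size_range": (4, 8),   # w, based on snapshot length
    "lambda_": 1.0,                # lambda
    "eta": 2.0,                    # eta
    "gamma": -0.5,                 # gamma
    "lr_theta": 0.008,             # tau, learning rate for theta
    "lr_Theta": 0.01,              # beta, learning rate for Theta
    "dropout": 0.1,
    "encoder_layers": 2,           # Graph Convolutional Encoder depth
    "early_stopping_patience": 10, # epochs without validation improvement
}


def clamp_mask_ratio(p: float) -> float:
    """Clamp a dataset-specific mask ratio into the reported 10%-40% range."""
    lo, hi = MASKDGNN_CONFIG["mask_ratio_range"]
    return min(max(p, lo), hi)
```

A reimplementation attempt could start from this dictionary and sweep the ranged values (mask ratio, window size) per dataset, as the paper indicates they are dataset-dependent.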