Self-Explainable Graph Transformer for Link Sign Prediction
Authors: Lu Li, Jiale Liu, Xingyu Ji, Maojun Wang, Zeyu Zhang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conducted experiments on several real-world datasets to validate the effectiveness of SE-SGformer, which outperforms the state-of-the-art methods by improving prediction accuracy by 2.2% and explainability accuracy by 73.1% in the best-case scenario. We conduct extensive experiments on several real-world datasets and show that SE-SGformer achieves performance comparable to state-of-the-art models and good explanatory accuracy. |
| Researcher Affiliation | Academia | National Key Laboratory of Crop Genetic Improvement, Hubei Hongshan Laboratory, Huazhong Agricultural University EMAIL, EMAIL, EMAIL, EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: Training Algorithm of SE-SGformer |
| Open Source Code | Yes | Code https://github.com/liule66/SE-SGformer |
| Open Datasets | Yes | Our experimental datasets include Bitcoin-OTC, Bitcoin Alpha, Wiki Elec, Wiki Rfa, Epinions, Kuai Rand, Kuai Rec and Amazon-music, with baseline methods being GCN (Kipf and Welling 2016), GAT (Veličković et al. 2017), SGCN (Derr, Ma, and Tang 2018), SNEA (Li et al. 2020), SGCL (Shu et al. 2021), and SIGFormer (Chen et al. 2024). |
| Dataset Splits | No | The paper mentions that "All datasets were experimented with five times" and that "The specific process of generation is detailed in Appendix," but the main text does not provide specific details on how the datasets are split into training, validation, and test sets. The Appendix is not available for review. |
| Hardware Specification | Yes | All experiments were conducted on a 64-bit machine equipped with two NVIDIA GPUs (NVIDIA L20, 1440 MHz, 48 GB memory). |
| Software Dependencies | No | The paper mentions using the "Adam optimizer" but does not provide specific version numbers for any software libraries, frameworks, or programming languages used in the implementation. |
| Experiment Setup | Yes | Specifically, we set the hidden embedding dimension d to 128, the learning rate to 1×10⁻³, the weight decay to 5×10⁻⁴, and the number of Transformer layers to L = 1. For the discriminator, we choose K = 40 and the number of randomly sampled neighbors m = 200. We searched for the optimal L in the range [1, 4] with a step size of 1, d in the range [16, 32, 64, 128], and max degree in the range [6, 8, 10, 12, 14]. |
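The search ranges reported in the Experiment Setup row imply an 80-configuration grid. The sketch below enumerates that grid in plain Python; `build_search_grid` is a hypothetical helper (the paper does not publish its search script), and the fixed settings are the values quoted above.

```python
from itertools import product

def build_search_grid():
    """Enumerate the hyperparameter grid implied by the paper's
    reported search ranges (a reconstruction, not the authors' code)."""
    layers = range(1, 5)              # Transformer layers L in [1, 4], step 1
    dims = [16, 32, 64, 128]          # hidden embedding dimension d
    max_degrees = [6, 8, 10, 12, 14]  # max degree
    return [
        {
            "L": L, "d": d, "max_degree": md,
            # Fixed settings reported for the final model:
            "lr": 1e-3, "weight_decay": 5e-4, "K": 40, "m": 200,
        }
        for L, d, md in product(layers, dims, max_degrees)
    ]

grid = build_search_grid()
print(len(grid))  # 4 x 4 x 5 = 80 configurations
```

Each of the 80 configurations would be trained and evaluated (the paper reports five runs per dataset), with the fixed optimizer settings held constant throughout the search.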