Signed Graph Neural Networks: A Frequency Perspective
Authors: Rahul Singh, Yongxin Chen
TMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We test our methods for node classification and link sign prediction tasks on signed graphs and achieve state-of-the-art performances." Section 6 (Experiments): "We evaluate our proposed methods for node classification and link sign prediction tasks on signed networks." |
| Researcher Affiliation | Academia | Rahul Singh, Machine Learning Center, Georgia Institute of Technology, Atlanta; Yongxin Chen, School of Aerospace Engineering, Georgia Institute of Technology, Atlanta |
| Pseudocode | No | The paper describes methods using mathematical formulations and textual explanations of steps, but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | We used Deep Graph Library (DGL) (Wang et al., 2019) for implementation of our methods. We also utilized PyTorch Geometric Signed Directed (He et al., 2022c) for implementing existing signed GNN baselines for node classification tasks. The code is available at https://github.com/rahulsinghchandraul/Spectral_Signed_GNN. |
| Open Datasets | Yes | We perform node classification task on three datasets: Wiki-Editor, Wiki-Election, and Wiki-RfA. Wiki-Editor is extracted from the UMDWikipedia dataset (Kumar et al., 2015). ... We use dataset extraction code provided by Mercado et al. (2019). For link sign prediction, we use three additional datasets: Bitcoin-Alpha, Bitcoin-OTC, and Slashdot. Bitcoin-Alpha and Bitcoin-OTC (Kumar et al., 2016; 2018)... Slashdot dataset (Kunegis et al., 2009). |
| Dataset Splits | Yes | For all three datasets, we use three different ratios for training (known labels): 1%, 2%, 5% of the total nodes. Of the remaining nodes, we use 90% for testing and the rest for validation. ... For link sign prediction, we use 80% of the links for training and the remaining 20% for testing. |
| Hardware Specification | Yes | All the experiments were run on an Intel Core i9-9900 machine equipped with an NVIDIA GeForce RTX 2080 Ti GPU. |
| Software Dependencies | No | We used Deep Graph Library (DGL) (Wang et al., 2019) for implementation of our methods. We also utilized PyTorch Geometric Signed Directed (He et al., 2022c) for implementing existing signed GNN baselines for node classification tasks. The paper names these software tools but does not provide specific version numbers for them. |
| Experiment Setup | Yes | For fair comparison, we use two-layer networks with a hidden dimension of 64 for all the GNN-based methods. Binary cross-entropy loss based on the known labels is used as the loss function. We use ReLU as the non-linearity between the layers. Adam is used as the optimizer along with ℓ2-regularization to avoid overfitting. We tune the learning rate and weight decay (ℓ2-regularization) hyperparameters over validation data using a grid search. For the Signed-MagNet implementation, we fix q = 0.125 for all the experiments. ... The experiments were run for 300 epochs... For all of our methods we use feature dropout with a rate of 0.5. For Spectral-SGCN-II, we use attention and feature dropout with a dropout rate of 0.5. We tune the learning rate with different values (on log scale) in the range [1e-3, 1e-1] and the regularization rate in the range [1e-6, 1e-3]. |
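The node-split protocol quoted under "Dataset Splits" (a small fraction of nodes for training, 90% of the remainder for testing, the rest for validation) can be sketched as below. This is a minimal reconstruction, not the authors' code; the function name, seed handling, and rounding behavior are assumptions.

```python
import numpy as np

def node_splits(num_nodes, train_ratio, seed=0):
    """Sketch of the reported split: `train_ratio` of all nodes for
    training, 90% of the remaining nodes for testing, and the rest
    for validation. (Hypothetical helper, not from the paper's repo.)"""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)

    n_train = int(train_ratio * num_nodes)      # e.g. 1%, 2%, or 5%
    train = perm[:n_train]

    rest = perm[n_train:]
    n_test = int(0.9 * len(rest))               # 90% of the remainder
    test = rest[:n_test]
    val = rest[n_test:]                         # leftover for validation
    return train, val, test
```

For the link sign prediction experiments the paper instead uses a flat 80/20 train/test split over edges, which follows the same permute-and-slice pattern.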
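The experiment setup row describes a two-layer network with hidden dimension 64, ReLU between layers, and log-scale grids for the learning rate and weight decay. A minimal NumPy sketch of that forward pass and grid is shown below; the paper's actual models are spectral signed GNNs built with DGL, so the plain normalized-propagation layer and the exact grid points here are illustrative assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def two_layer_gcn_forward(A, X, W1, W2):
    """Generic two-layer graph-convolution forward pass matching the
    reported shape of the setup (not the paper's spectral filters):
    A is a normalized (signed) adjacency/propagation matrix,
    X are node features, W1 maps to the 64-dim hidden layer,
    W2 maps to output logits fed to a binary cross-entropy loss."""
    H = relu(A @ X @ W1)    # layer 1 + ReLU non-linearity
    return A @ H @ W2       # layer 2 (logits)

# Log-scale hyperparameter grids implied by the quoted ranges
# (the number of grid points is an assumption):
learning_rates = np.logspace(-3, -1, 3)   # 1e-3 .. 1e-1
weight_decays = np.logspace(-6, -3, 4)    # 1e-6 .. 1e-3
```

In the reported setup these grids would be searched jointly over the validation split, with Adam's `weight_decay` providing the ℓ2-regularization.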