Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
LD2: Scalable Heterophilous Graph Neural Network with Decoupled Embeddings
Authors: Ningyi Liao, Siqiang Luo, Xiang Li, Jieming Shi
NeurIPS 2023 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments to showcase that our model is capable of lightweight minibatch training on large-scale heterophilous graphs, with up to 15× speed improvement and efficient memory utilization, while maintaining comparable or better performance than the baselines. |
| Researcher Affiliation | Academia | Ningyi Liao (Nanyang Technological University), Siqiang Luo (Nanyang Technological University), Xiang Li (East China Normal University), Jieming Shi (Hong Kong Polytechnic University) |
| Pseudocode | Yes | Algorithm 1 A2Prop: Approximate Adjacency Propagation |
| Open Source Code | Yes | Our code is available at: https://github.com/gdmnl/LD2. |
| Open Datasets | Yes | We mainly perform experiments on million-scale and above heterophilous datasets [26, 55] for the transductive node classification task, with the largest available graph wiki (m = 243M) included. |
| Dataset Splits | Yes | We leverage settings as per [26] such as the random train/test splits and the induced subgraph testing for GSAINT-sampling models. |
| Hardware Specification | Yes | Evaluations are conducted on a machine with 192GB RAM, two 28-core Intel Xeon CPUs (2.2GHz), and an NVIDIA RTX A5000 GPU (24GB memory). |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library names with versions like Python 3.8 or PyTorch 1.9). |
| Experiment Setup | No | "...while parameter settings, further experiments, and subsequent discussions can be found in the Appendix." The main text defers experiment setup details to the Appendix rather than specifying them directly. |