Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Adaptive Diffusion in Graph Neural Networks

Authors: Jialin Zhao, Yuxiao Dong, Ming Ding, Evgeny Kharlamov, Jie Tang

NeurIPS 2021 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
--- | --- | ---
Research Type | Experimental | "By directly plugging ADC into existing GNNs, we observe consistent and significant outperformance over both GDC and their vanilla versions across various datasets, demonstrating the improved model capacity brought by automatically learning unique neighborhood size per layer and per channel in GNNs."
Researcher Affiliation | Collaboration | Jialin Zhao (Tsinghua University), Yuxiao Dong (Microsoft Research), Ming Ding (Tsinghua University), Evgeny Kharlamov (Bosch Center for Artificial Intelligence), Jie Tang (Tsinghua University)
Pseudocode | No | The paper includes mathematical equations and descriptions of the approach but no explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | "Code is available at https://github.com/abcbdf/ADC"
Open Datasets | Yes | "We use widely-adopted datasets including CORA, CiteSeer [28], PubMed [25], Coauthor CS, Amazon Computers and Amazon Photo [29]."
Dataset Splits | Yes | "The data is split to a development and test set. ... The development set is split to a training set containing 20 nodes for each class and a validation set with remaining nodes."
Hardware Specification | No | The paper states "Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See appendix." but the specific hardware details are not provided in the main text.
Software Dependencies | No | The paper does not provide specific version numbers for ancillary software dependencies.
Experiment Setup | Yes | "We set the learning rate of t equal to the learning rate of other parameters, which is 0.01. ... The expansion step (K in Eq. 15) is set to 10. We use early stopping with patience of 100 epochs."
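The dataset split described above (a development set divided into 20 training nodes per class, with the remaining nodes used for validation) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name, seed handling, and NumPy-based interface are assumptions:

```python
import numpy as np

def split_development_set(labels, num_train_per_class=20, seed=0):
    """Split development-set node labels into train/validation indices:
    20 nodes per class go to training, all remaining nodes to validation.
    Hypothetical sketch of the split described in the paper's quote."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    train_idx = []
    for c in np.unique(labels):
        class_idx = np.flatnonzero(labels == c)
        rng.shuffle(class_idx)
        train_idx.extend(class_idx[:num_train_per_class].tolist())
    train_idx = np.array(sorted(train_idx))
    # Validation set: every development node not selected for training.
    val_idx = np.setdiff1d(np.arange(len(labels)), train_idx)
    return train_idx, val_idx
```

Per-class sampling keeps the tiny training set class-balanced, which is why this split style is standard for semi-supervised node classification on CORA-like benchmarks.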
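The early-stopping rule quoted in the experiment setup (patience of 100 epochs) amounts to a standard patience counter on the validation metric. The class name and interface below are hypothetical, not taken from the authors' repository:

```python
class EarlyStopping:
    """Stop training when the validation loss has not improved for
    `patience` consecutive epochs (the paper reports patience = 100).
    Illustrative sketch only."""

    def __init__(self, patience=100):
        self.patience = patience
        self.best = float("inf")
        self.counter = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True to stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.counter = 0  # improvement resets the patience window
        else:
            self.counter += 1
        return self.counter >= self.patience
```

In a training loop this would be checked once per epoch, e.g. `if stopper.step(val_loss): break`.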