Graph Attention Retrospective

Authors: Kimon Fountoulakis, Amit Levi, Shenghao Yang, Aseem Baranwal, Aukosh Jagannath

JMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our theoretical results on synthetic and real-world data. ... We provide an extensive set of experiments both on synthetic data and on three popular real-world datasets that validates our theoretical results. ... In this section, we demonstrate empirically our results on synthetic and real data."
Researcher Affiliation | Collaboration | Kimon Fountoulakis, David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, Ontario, Canada; Amit Levi, Huawei Noah's Ark Lab, Montreal, Quebec, Canada.
Pseudocode | No | The paper describes methods and algorithms using mathematical notation and textual explanations, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the methodology described, nor does it provide a link to a code repository. It mentions using PyTorch Geometric and OGB datasets, but not its own implementation code.
Open Datasets | Yes | "We use popular real-world graph datasets Cora, PubMed, and CiteSeer collected by PyTorch Geometric (Fey and Lenssen, 2019) and ogbn-arxiv from Open Graph Benchmark (Hu et al., 2020)."
Dataset Splits | Yes | "For real datasets, we use the default splits which come from PyTorch Geometric (Fey and Lenssen, 2019) and OGB (Hu et al., 2020)."
Hardware Specification | No | The paper discusses experiments on synthetic and real data but does not provide any specific hardware details such as CPU/GPU models, memory, or specific computing platforms used.
Software Dependencies | No | The paper mentions using PyTorch Geometric and Open Graph Benchmark for datasets and its methods (MLP-GAT, GAT, GCN), but does not specify version numbers for any software libraries, frameworks, or programming languages used for implementation.
Experiment Setup | Yes | "We set n = 1000, d = n/log²(n), p = 0.5 and σ = 0.1. Results are averaged over 10 trials. ... For the original GAT architecture we fix w = µ/‖µ‖ and define the first head as a_1 = (1/2)(1, 1) and b_1 = (1/2)wᵀµ; the second head is defined as a_2 = −a_1 and b_2 = −b_1. ... For MLP-GAT we use the ansatz Ψ̃ = (1_{p≥q} − 1_{p<q})Ψ, where Ψ is given in (3) and (4) with R = 1. ... In the easy regime we fix the mean µ to be a vector where each coordinate is equal to 10σ/√d. ... In the hard regime we fix the mean µ to a vector where each coordinate is equal to σ/√d. ... We fix p = 0.5 and vary q from log²(n)/n to 1 − log²(n)/n."
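The synthetic setup quoted above is a contextual stochastic block model (CSBM): two equally likely classes, intra-class edge probability p, inter-class edge probability q, and Gaussian node features centered at ±µ. A minimal NumPy sketch of such a sampler follows; the function name, the `easy` flag, and the natural-log reading of log²(n) are assumptions for illustration, not the authors' code.

```python
import numpy as np

def sample_csbm(n=1000, p=0.5, q=0.1, sigma=0.1, easy=True, seed=0):
    """Sketch of a CSBM sampler matching the paper's quoted setup.

    Nodes split into two classes; an intra-class edge appears with
    probability p and an inter-class edge with probability q.  Node
    features are Gaussian with mean +mu or -mu depending on the class.
    """
    rng = np.random.default_rng(seed)
    d = max(1, round(n / np.log(n) ** 2))  # d = n / log^2(n), natural log assumed
    # Per the quoted setup: each coordinate of mu is 10*sigma/sqrt(d) in
    # the easy regime and sigma/sqrt(d) in the hard regime, so that the
    # class separation ||2*mu|| is 20*sigma vs. 2*sigma.
    coord = 10 * sigma / np.sqrt(d) if easy else sigma / np.sqrt(d)
    mu = np.full(d, coord)
    y = rng.integers(0, 2, size=n)  # class labels in {0, 1}
    X = rng.normal(0.0, sigma, size=(n, d)) + np.where(y[:, None] == 1, mu, -mu)
    # Symmetric adjacency matrix with no self-loops.
    same = y[:, None] == y[None, :]
    A = (rng.random((n, n)) < np.where(same, p, q)).astype(np.int8)
    A = np.triu(A, 1)
    A = A + A.T
    return A, X, y
```

Sweeping q over the quoted range would then repeat this sampler for each q and average the resulting classification accuracy over 10 trials.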