GraphNeuralNetworks.jl: Deep Learning on Graphs with Julia

Authors: Carlo Lucibello, Aurora Rossi

JMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The following code demonstrates how to build a simple GNN for a regression task on a synthetic dataset. This illustrative example covers the definition of the model, the setup of the optimizer, the creation of data loaders, and the training of the model over multiple epochs. It also demonstrates batching and GPU acceleration with CUDA for improved performance.
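Based on the snippets quoted throughout this report, the paper's training example can be sketched roughly as follows. The dataset size, epoch count, graph size, and the use of a mean-squared-error loss are illustrative assumptions, not details confirmed by the paper:

```julia
using GraphNeuralNetworks, Flux, MLUtils, Statistics

# Synthetic dataset: each graph carries 16-dim node features x
# and a scalar graph-level target y (as in the quoted example).
n, m = 10, 30   # nodes and edges per graph (illustrative)
make_graph() = rand_graph(n, m, ndata = (; x = randn(Float32, 16, n)),
                                gdata = (; y = randn(Float32)))
train_data = [make_graph() for _ in 1:128]

# Architecture quoted in the paper.
model = GNNChain(GCNConv(16 => 64), BatchNorm(64), x -> relu.(x),
                 GCNConv(64 => 64, relu), GlobalPool(mean), Dense(64, 1))
# For GPU training with CUDA one would move the model, e.g. `model |> gpu`.

opt_state = Flux.setup(Adam(1f-4), model)
loader = DataLoader(train_data, batchsize = 32, shuffle = true, collate = true)

for epoch in 1:10
    for g in loader                      # g is a batched GNNGraph
        grads = Flux.gradient(model) do m
            ŷ = vec(m(g, g.x))           # one prediction per graph in the batch
            Flux.mse(ŷ, g.y)             # MSE loss (assumption)
        end
        Flux.update!(opt_state, model, grads[1])
    end
end
```

With `collate = true` the `DataLoader` joins each mini-batch of graphs into a single batched `GNNGraph`, so the pooled output of `GlobalPool(mean)` has one column per graph.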
Researcher Affiliation | Academia | Carlo Lucibello (EMAIL), Bocconi University, Department of Computing Sciences, Bocconi Institute for Data Science and Analytics, Milan, Italy; Aurora Rossi (EMAIL), Université Côte d'Azur, INRIA, CNRS, I3S, Sophia Antipolis, France.
Pseudocode | No | The paper provides Julia code snippets demonstrating implementation details and a full training example, such as 'm = apply_edges(message, g, xi, xj, eji)' and a multi-line code block for a training loop. However, it does not contain explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
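The quoted `apply_edges` call gathers node features onto edges and applies a user-defined message function. A minimal sketch, using made-up feature sizes and the documented keyword-argument form rather than the positional form quoted above, might look like:

```julia
using GraphNeuralNetworks

g  = rand_graph(4, 6)        # 4 nodes, 6 edges
xi = rand(Float32, 8, 4)     # destination-node features
xj = rand(Float32, 8, 4)     # source-node features
e  = rand(Float32, 8, 6)     # edge features

# For every edge, the message function sees the gathered xi, xj, and e.
message(xi, xj, e) = xj .+ e
m = apply_edges(message, g; xi, xj, e)   # 8 × 6 matrix, one column per edge
```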
Open Source Code | Yes | GraphNeuralNetworks.jl is an open-source framework for deep learning on graphs, written in the Julia programming language. [...] The package is available on GitHub: https://github.com/JuliaGraphs/GraphNeuralNetworks.jl.
Open Datasets | No | The training example provided uses a synthetic dataset generated with 'rand_graph': 'make_graph() = rand_graph(n, m, ndata = (; x = randn(Float32, 16, n)), gdata=(; y = randn(Float32)))'. While the paper mentions that the package 'integrates seamlessly with real-world graph datasets available through MLDatasets.jl', the experiments demonstrated in the paper's example do not use a specific publicly accessible dataset with concrete access information.
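As a sketch of the MLDatasets.jl integration the paper mentions, a real dataset could be loaded and converted with the `mldataset2gnngraph` helper; the choice of the Cora citation dataset here is an illustrative assumption:

```julia
using GraphNeuralNetworks, MLDatasets

dataset = Cora()                   # downloads the dataset on first use
g = mldataset2gnngraph(dataset)    # convert to a GNNGraph
# g now carries the dataset's node features and labels as node data.
```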
Dataset Splits | No | The provided training example initializes a 'train_data' array and uses 'DataLoader(train_data, batchsize=32, shuffle=true, collate=true)' without specifying explicit training, validation, or testing splits or their proportions.
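If splits were desired, MLUtils.jl (already used by the example for `DataLoader`) provides `splitobs`; a hypothetical 80/10/10 split could be written as:

```julia
using MLUtils

data = collect(1:100)   # stand-in for a vector of graphs
train, val, test = splitobs(data, at = (0.8, 0.1))  # the remaining 10% is test
length(train), length(val), length(test)            # (80, 10, 10)
```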
Hardware Specification | No | The paper mentions support for 'multiple GPU backends' and 'GPU acceleration (CUDA and AMDGPU at the time of writing)', but it does not specify exact GPU models (e.g., NVIDIA A100, RTX 3080) or other detailed hardware specifications used for the experiments.
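The GPU support mentioned works through Flux's device movement. A hedged sketch with CUDA (the graph and model sizes are made up; `gpu` falls back to the CPU with a warning if no functional CUDA device is present):

```julia
using GraphNeuralNetworks, Flux, CUDA, Statistics

g = rand_graph(10, 30, ndata = (; x = randn(Float32, 16, 10)))
model = GNNChain(GCNConv(16 => 64, relu), GlobalPool(mean), Dense(64, 1))

# Move both the graph (with its feature arrays) and the model to the GPU.
g_gpu = g |> gpu
model_gpu = model |> gpu
y = model_gpu(g_gpu, g_gpu.x)   # forward pass on the GPU
```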
Software Dependencies | No | The paper mentions several software dependencies and frameworks such as 'GraphNeuralNetworks.jl', 'Flux', 'Statistics', 'MLUtils', 'CUDA', and 'Adam', but does not provide specific version numbers for these components, which are necessary for reproducible descriptions.
Experiment Setup | Yes | The training example explicitly defines the model architecture: 'model = GNNChain(GCNConv(16 => 64), BatchNorm(64), x -> relu.(x), GCNConv(64 => 64, relu), GlobalPool(mean), Dense(64, 1))'. It also specifies the optimizer and learning rate as 'Adam(1f-4)' and the batch size as 'batchsize=32' for the DataLoader.