Machines and Mathematical Mutations: Using GNNs to Characterize Quiver Mutation Classes

Authors: Jesse He, Helen Jenne, Herman Chau, Davis Brown, Mark Raugas, Sara C. Billey, Henry Kvinge

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this work, we use graph neural networks to investigate quiver mutation... We train a graph neural network (GNN) on a dataset consisting of 70,000 quivers labeled with one of six different types... We find that not only does the resulting model achieve high accuracy, it also extracts features from type D quivers that align with the characterization from (Vatne, 2010).
Researcher Affiliation | Academia | 1Halıcıoğlu Data Science Institute, University of California San Diego, San Diego, CA, USA; 2Pacific Northwest National Laboratory, Richland, WA, USA; 3Department of Mathematics, University of Washington, Seattle, WA, USA.
Pseudocode | No | The paper describes the methods used for training the GNN and the explainability techniques, but it does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement or a link to its own source code for the methodology described. It mentions using PyTorch Geometric, which is a third-party library.
Open Datasets | No | The paper states, 'We generate data with Sage (The Sage Developers, 2023; Musiker & Stump, 2011), which we describe in greater detail in Appendix C.' This indicates they generated their own dataset using a software tool, but does not provide concrete access information for the specific dataset used in their experiments.
Dataset Splits | Yes | The training data consists of quivers of each type on 6, 7, 8, 9, and 10 nodes. The test set consists of quivers of types A, D, E, Ã, and D̃ on 11 nodes. (Type Ẽ is not defined on 11 nodes.) ... Table 2. Number of quivers of each type and size in train and test sets.
Hardware Specification | Yes | We train with the Adam optimizer for 50 epochs with a batch size of 32 using cross-entropy loss with L1 regularization (γ = 5 × 10⁻⁶) using an Nvidia RTX A2000 Laptop GPU.
Software Dependencies | Yes | Quivers were generated using Sage (The Sage Developers, 2023; Musiker & Stump, 2011). For training and inference, each quiver was converted to PyTorch Geometric (Fey & Lenssen, 2019).
Experiment Setup | Yes | We train a 4-layer DirGINE GNN with a hidden layer width of 32 to classify quivers into types A, D, E, Ã, D̃, and Ẽ... We train with the Adam optimizer for 50 epochs with a batch size of 32 using cross-entropy loss with L1 regularization (γ = 5 × 10⁻⁶)... In our analysis, we use hyperparameter values α = 2.5 and β = 0.1.
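For context on the mathematical objects being classified: the paper's quivers are compared up to mutation, an operation standardly defined on a quiver's skew-symmetric exchange matrix (the Fomin–Zelevinsky rule). The following is a minimal dependency-free sketch of that standard rule, not code from the paper:

```python
def mutate(B, k):
    """Fomin-Zelevinsky mutation of a skew-symmetric exchange matrix B
    (list of lists of ints) at vertex k. Returns a new matrix:
      b'_ij = -b_ij                                if i == k or j == k
      b'_ij = b_ij + (|b_ik| b_kj + b_ik |b_kj|)/2 otherwise
    """
    n = len(B)
    Bp = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                Bp[i][j] = -B[i][j]
            else:
                Bp[i][j] = B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
    return Bp

# Linear type-A_3 quiver: 0 -> 1 -> 2, encoded as a skew-symmetric matrix.
B_A3 = [[0, 1, 0],
        [-1, 0, 1],
        [0, -1, 0]]

once = mutate(B_A3, 1)
twice = mutate(once, 1)  # mutating twice at the same vertex recovers B
```

Mutation at a fixed vertex is an involution, so `twice == B_A3`; two quivers belong to the same mutation class exactly when some sequence of such mutations connects them.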
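The setup row describes the training objective as cross-entropy with L1 regularization at γ = 5 × 10⁻⁶. A library-free sketch of that objective (the weight values below are hypothetical placeholders, and this is the standard formula rather than the authors' code):

```python
import math

def l1_regularized_ce(logits, label, weights, gamma=5e-6):
    """Cross-entropy plus L1 penalty:
    loss = -log softmax(logits)[label] + gamma * sum(|w|).
    `logits` has one entry per class (six classes in the paper:
    A, D, E and the affine types), `weights` are model parameters.
    """
    m = max(logits)  # numerically stable log-sum-exp
    lse = m + math.log(sum(math.exp(z - m) for z in logits))
    ce = lse - logits[label]
    l1 = gamma * sum(abs(w) for w in weights)
    return ce + l1

# Hypothetical logits over the six quiver types; class 0 is predicted.
logits = [2.0, 0.1, -1.0, 0.3, 0.0, -0.5]
loss = l1_regularized_ce(logits, 0, weights=[0.5, -1.2, 0.7])
```

In practice the same objective is obtained in PyTorch by adding `gamma * sum(p.abs().sum() for p in model.parameters())` to the cross-entropy term before calling `backward()`.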