HyperMagNet: A Magnetic Laplacian based Hypergraph Neural Network

Authors: Tatyana Benko, Martin Buck, Ilya Amburg, Stephen J. Young, Sinan Guven Aksoy

TMLR 2025

Reproducibility Variable Result LLM Response
Research Type Experimental We study HyperMagNet for the task of node classification, and demonstrate its effectiveness over graph-reduction based hypergraph neural networks. Section 5 presents experimental results on the task of node classification over a variety of hypergraph-structured data sets, where performance is compared against a variety of machine learning models based on traditional graph-based representations. Table 1: Left: Hypergraph sizes for 20 Newsgroups data in chosen subsets G1 through G4. Right: Average node classification accuracy on 20 Newsgroups.
Researcher Affiliation Academia Tatyana Benko (EMAIL, University of Oregon); Martin Buck (EMAIL, Tufts University); Ilya Amburg (EMAIL, Pacific Northwest National Laboratory); Stephen J. Young (EMAIL, Pacific Northwest National Laboratory); Sinan G. Aksoy (EMAIL, Pacific Northwest National Laboratory)
Pseudocode No No explicit pseudocode or algorithm blocks are provided. The methodology is described in paragraph form.
Open Source Code No Please direct all inquiries about code availability to Sinan Aksoy.
Open Datasets Yes The 20 Newsgroups data set consists of approximately 18,000 message-board documents categorized according to topic. The Cora Citation data set (McCallum et al., 2000) consists of citations between machine learning papers classified into seven topic categories, with the BoW representation for each paper. We also run HyperMagNet on the Cora Author (McCallum et al., 2000) data set. Princeton ModelNet40 (Wu et al., 2015) and the National Taiwan University (NTU) (Chen et al., 2003) data sets are two popular data sets within the computer vision community for testing object classification.
Dataset Splits Yes The average classification accuracy over ten random 80%/20% train-test splits for each of the subsets G1 through G4 is recorded in Table 1.
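The split protocol above can be sketched as follows. This is a minimal illustration, not the authors' code: the node count and helper name are assumptions, and in practice the resulting index sets would feed model training and evaluation.

```python
import numpy as np

# Sketch of the evaluation protocol: ten random 80%/20% train-test splits,
# with classification accuracy averaged over the runs. Illustrative only.
rng = np.random.default_rng(0)
n_nodes = 100  # placeholder; actual sizes depend on the data set

def random_split(n, train_frac=0.8):
    """Return disjoint train/test index arrays from a random permutation."""
    idx = rng.permutation(n)
    cut = int(train_frac * n)
    return idx[:cut], idx[cut:]

# Ten independent random splits, as described in the paper's protocol.
splits = [random_split(n_nodes) for _ in range(10)]
train_idx, test_idx = splits[0]
print(len(train_idx), len(test_idx))  # 80 20
```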
Hardware Specification No No specific hardware details (like GPU/CPU models or memory) are mentioned for the experimental setup. The paper only discusses general computational costs and runtime.
Software Dependencies No For training, the Adam optimizer was used to minimize the cross-entropy loss at a learning rate of 0.001 and weight decay of 0.0005. These settings were used for training across data sets.
Experiment Setup Yes Both HGNN and HyperMagNet are two-layer neural networks that follow standard hyperparameter settings based on those in Kipf & Welling (2016). The dimension of the hidden layers is set to 128 with ReLU activation functions. For training, the Adam optimizer was used to minimize the cross-entropy loss at a learning rate of 0.001 and weight decay of 0.0005. These settings were used for training across data sets.
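The reported architecture (two layers, hidden dimension 128, ReLU) can be sketched in NumPy as below. This is a hedged stand-in, not the paper's implementation: the propagation operator here is an identity placeholder for the magnetic-Laplacian-based operator, the weights are random, and the Adam optimizer (learning rate 0.001, weight decay 0.0005) is noted in comments rather than implemented.

```python
import numpy as np

# Hyperparameters stated in the paper: hidden dim 128, ReLU activations;
# training uses Adam with lr=0.001 and weight decay=0.0005 (not shown here).
rng = np.random.default_rng(0)
n_nodes, n_feats, n_hidden, n_classes = 30, 16, 128, 7

X = rng.standard_normal((n_nodes, n_feats))   # node feature matrix
L = np.eye(n_nodes)   # placeholder for the Laplacian-based propagation operator
W1 = rng.standard_normal((n_feats, n_hidden)) * 0.1
W2 = rng.standard_normal((n_hidden, n_classes)) * 0.1

def relu(z):
    return np.maximum(z, 0.0)

def forward(X):
    H = relu(L @ X @ W1)   # layer 1: propagate, project to 128 dims, ReLU
    return L @ H @ W2      # layer 2: propagate, project to class logits

logits = forward(X)
print(logits.shape)  # (30, 7)
```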