Deep Hypergraph Neural Networks with Tight Framelets

Authors: Ming Li, Yujie Fang, Yi Wang, Han Feng, Yongchun Gu, Lu Bai, Pietro Liò

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper presents a comprehensive evaluation of FrameHGNN on node classification using eight benchmark datasets, comparing it against several classic HGNN models. The experimental results demonstrate that FrameHGNN outperforms several state-of-the-art baselines, improving predictive accuracy while effectively mitigating oversmoothing.
Researcher Affiliation | Academia | (1) Zhejiang Key Laboratory of Intelligent Education Technology and Application, Zhejiang Normal University; (2) Zhejiang Institute of Optoelectronics; (3) School of Computer Science and Technology, Zhejiang Normal University; (4) Department of Mathematics, City University of Hong Kong; (5) School of Artificial Intelligence, and Engineering Research Center of Intelligent Technology and Educational Application, Ministry of Education, Beijing Normal University; (6) Department of Computer Science and Technology, Cambridge University
Pseudocode | No | The paper describes the methodology using mathematical formulations (Eq. 9 and 10) and a schematic overview (Figure 2), but does not include an explicit 'Pseudocode' or 'Algorithm' block.
Open Source Code | No | The paper provides a link to an Appendix PDF (https://mingli-ai.github.io/FrameHGNN.pdf) but does not contain an explicit statement about releasing source code for the methodology, nor a direct link to a code repository.
Open Datasets | Yes | We conducted our experiments using eight publicly available datasets: Cora, CiteSeer, Pubmed, and Cora-CA (co-citation and co-authorship networks) obtained from (Yadati et al. 2019), as well as Senate (Fowler 2006), House (Chodrow, Veldt, and Benson 2021), NTU2012 (Chen et al. 2003), and ModelNet40 (Wu et al. 2015), which cover a variety of applications. Detailed descriptions of these datasets are provided in Appendix B.
Dataset Splits | Yes | The datasets were randomly divided into training, validation, and test sets, with proportions of 50%, 25%, and 25%, respectively.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers, needed to replicate the experiments.
Experiment Setup | Yes | To prevent overfitting, an early stopping strategy was employed, halting training if no improvement in validation performance was observed for 200 consecutive epochs, with a maximum of 1000 training epochs. The FrameHGNN model includes three key hyperparameters: α, γ, and λ, where α controls the balance between current and initial layer features, γ balances features derived from framelet filtering and graph convolution, and λ adjusts the weight of each layer's feature updates relative to the initial features through the scaling parameter θ. Experiments were performed on Cora and House, with the results illustrated in Figure 4. The values of α, γ, and λ were varied within the range {0.1, 0.2, 0.3, ..., 0.9}.
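The 50%/25%/25% random split described under Dataset Splits can be sketched as below. This is an illustrative reconstruction, not the authors' code; the function name, the NumPy-based shuffling, and the fixed seed are assumptions, since the paper does not publish its splitting procedure.

```python
import numpy as np

def random_split(num_nodes, train_frac=0.50, val_frac=0.25, seed=0):
    """Randomly partition node indices into train/val/test sets.

    Sketch of the 50%/25%/25% split reported in the paper; the exact
    seeding and shuffling used by the authors are unknown (assumed here).
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)          # shuffled node indices
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    train_idx = perm[:n_train]
    val_idx = perm[n_train:n_train + n_val]
    test_idx = perm[n_train + n_val:]          # remaining ~25%
    return train_idx, val_idx, test_idx
```

Because the split is drawn per run, reported accuracies are typically averaged over several random splits; the paper does not state how many repetitions were used.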
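The early-stopping rule (patience of 200 epochs, at most 1000 epochs) and the hyperparameter grid over {0.1, ..., 0.9} described under Experiment Setup can be sketched as follows. The `EarlyStopping` helper and the grid construction are illustrative assumptions; the authors' training loop is not available.

```python
from itertools import product

class EarlyStopping:
    """Stop when the validation metric has not improved for `patience`
    consecutive epochs (the paper uses patience=200 within a cap of
    1000 epochs; this helper is a sketch, not the authors' code)."""

    def __init__(self, patience=200):
        self.patience = patience
        self.best = -float("inf")
        self.counter = 0

    def step(self, val_metric):
        """Record one epoch's validation metric; return True to stop."""
        if val_metric > self.best:
            self.best = val_metric
            self.counter = 0
        else:
            self.counter += 1
        return self.counter >= self.patience

# Hyperparameter grid for alpha, gamma, lambda over {0.1, ..., 0.9},
# as described in the paper (9 x 9 x 9 = 729 configurations).
values = [round(0.1 * i, 1) for i in range(1, 10)]
grid = list(product(values, values, values))
```

A training loop would call `stopper.step(val_acc)` once per epoch and break out either when it returns True or when 1000 epochs are reached, keeping the parameters from the best-validation epoch.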