ENAHPool: The Edge-Node Attention-based Hierarchical Pooling for Graph Neural Networks
Authors: Zhehan Zhao, Lu Bai, Lixin Cui, Ming Li, Ziyu Lyu, Lixiang Xu, Yue Wang, Edwin Hancock
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experimental results demonstrate the effectiveness of the proposed method." and "We empirically compare the proposed method with other deep learning approaches for graph classification across eight benchmark datasets: D&D (Dobson & Doig, 2003), PROTEINS (Borgwardt et al., 2005), NCI1 (Wale et al., 2008), FRANKENSTEIN (Orsini et al., 2015), IMDB-B, IMDB-M, COLLAB, and REDDIT-B (Yanardag & Vishwanathan, 2015). Detailed statistics for these datasets are provided in Table 1." |
| Researcher Affiliation | Academia | 1School of Artificial Intelligence, Beijing Normal University, Beijing, China. 2School of Information, Central University of Finance and Economics, Beijing, China. 3Zhejiang Institute of Optoelectronics, Jinhua, China. 4Zhejiang Key Laboratory of Intelligent Education Technology and Application, Zhejiang Normal University, Jinhua, China. 5School of Cyber Science and Technology, Sun Yat-Sen University, Shenzhen, China. 6School of Artificial Intelligence, Hefei Institute of Technology, Hefei, China. 7Department of Computer Science, University of York, York, United Kingdom. Correspondence to: Lu Bai <EMAIL>. |
| Pseudocode | No | The paper describes the proposed methods and architecture mathematically and with figures (e.g., Figure 1, Figure 3, Figure 4, Figure 5), but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about the release of open-source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We empirically compare the proposed method with other deep learning approaches for graph classification across eight benchmark datasets: D&D (Dobson & Doig, 2003), PROTEINS (Borgwardt et al., 2005), NCI1 (Wale et al., 2008), FRANKENSTEIN (Orsini et al., 2015), IMDB-B, IMDB-M, COLLAB, and REDDIT-B (Yanardag & Vishwanathan, 2015). |
| Dataset Splits | Yes | In our experiments, we employ 10-fold cross-validation for evaluation and report the average accuracy along with the standard deviation over 10 runs. |
| Hardware Specification | No | The paper does not provide specific hardware details (such as GPU/CPU models, processor types, or memory amounts) used for running the experiments. |
| Software Dependencies | No | The paper mentions that 'the backbone of MPNNs is GCN' but does not provide specific version numbers for any programming languages, libraries, or frameworks used in the implementation. |
| Experiment Setup | Yes | For the proposed model, we perform hyperparameter tuning using a grid search strategy, as detailed in Table 3. Table 3 lists the 'Hyperparameter Range' for 'Pooling ratio' (0.125, 0.25, 0.5), 'Pooling layer' (1, 2, 3), and 'MPNN layer' (3, 4, 5, 6, 7). |
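The grid search reported in the paper's Table 3 can be sketched as follows. This is a minimal illustration of enumerating the stated hyperparameter ranges, not the authors' implementation; the `grid_configs` helper is hypothetical, and the actual model-training and 10-fold cross-validation steps are omitted.

```python
from itertools import product

# Hyperparameter ranges as reported in Table 3 of the paper.
grid = {
    "pooling_ratio": [0.125, 0.25, 0.5],
    "pooling_layers": [1, 2, 3],
    "mpnn_layers": [3, 4, 5, 6, 7],
}

def grid_configs(grid):
    """Yield every hyperparameter combination in the grid (hypothetical helper)."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(grid_configs(grid))
# 3 pooling ratios x 3 pooling layers x 5 MPNN depths = 45 candidate configurations,
# each of which would be evaluated via 10-fold cross-validation.
print(len(configs))
```

In practice each configuration would be scored by the average 10-fold cross-validation accuracy, with the best-scoring configuration reported.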