DHAKR: Learning Deep Hierarchical Attention-Based Kernelized Representations for Graph Classification

Authors: Feifei Qian, Lu Bai, Lixin Cui, Ming Li, Ziyu Lyu, Hangyuan Du, Edwin Hancock

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments demonstrate the effectiveness of the proposed DHAKR model. ... We evaluate the classification performance of the DHAKR model on standard graph datasets... Table 2: Classification accuracy (in % ± standard error) comparisons with graph kernels."
Researcher Affiliation | Academia | "1 School of Artificial Intelligence, and Engineering Research Center of Intelligent Technology and Educational Application, Ministry of Education, Beijing Normal University, Beijing, China; 2 School of Information, Central University of Finance and Economics, Beijing, China; 3 Zhejiang Key Laboratory of Intelligent Education Technology and Application, Zhejiang Normal University, Jinhua, China; 4 Zhejiang Institute of Optoelectronics, Jinhua, China; 5 School of Cyber Science and Technology, Sun Yat-Sen University, Shenzhen, China; 6 School of Computer and Information Technology, Shanxi University, Taiyuan, China; 7 Department of Computer Science, University of York, York, United Kingdom."

Pseudocode | No | The paper describes the methodology using mathematical equations and textual explanations, but it does not include any explicitly labeled pseudocode or algorithm blocks.

Open Source Code | No | "The detailed descriptions of baselines and the implementation details are provided in the Arxiv version." (This does not explicitly state that the code for their method is released, nor does it provide a direct link.)
Open Datasets | Yes | "We evaluate the classification performance of the DHAKR model on standard graph datasets (Siddiqi et al. 1999; Morris et al. 2020) extracted from bioinformatics (Bio), social networks (SN), and computer vision (CV). The statistical information of the datasets is shown in Table 1."
Dataset Splits | Yes | "To make a fair comparison, we perform a 10-fold cross-validation and repeat the experiments 10 times."

Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments.

Software Dependencies | No | "The detailed descriptions of baselines and the implementation details are provided in the Arxiv version." (The main text does not specify any software names with version numbers.)
Experiment Setup | Yes | "For our proposed methods, the assignment ratio is 0.5. ... We vary the values of γ from 0.0001 to 1.0 and test the graph classification performance on four datasets as shown in Fig. 4. ... To make a fair comparison, we perform a 10-fold cross-validation and repeat the experiments 10 times."
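The evaluation protocol quoted above (10-fold cross-validation repeated 10 times) can be sketched as follows. This is a minimal stdlib-only illustration of how such splits are typically generated; the function name, seeding, and fold-assignment details are our assumptions — the paper does not specify its splitting implementation.

```python
import random


def repeated_kfold_indices(n_samples, k=10, repeats=10, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold CV repeated `repeats` times.

    Each repeat reshuffles the sample indices, then partitions them into
    k contiguous folds; the last fold absorbs any remainder so every
    sample appears in exactly one test fold per repeat.
    """
    rng = random.Random(seed)
    indices = list(range(n_samples))
    for _ in range(repeats):
        rng.shuffle(indices)
        fold_size = n_samples // k
        for f in range(k):
            start = f * fold_size
            end = start + fold_size if f < k - 1 else n_samples
            test = indices[start:end]
            train = indices[:start] + indices[end:]
            yield train, test
```

With k=10 and repeats=10 this yields 100 train/test splits; the reported accuracy and standard error would then be computed over the per-split test accuracies.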