HHAN: Comprehensive Infectious Disease Source Tracing via Heterogeneous Hypergraph Neural Network

Authors: Qiang He, Yunting Bao, Hui Fang, Yuting Lin, Hao Sun

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on three real-world datasets demonstrate that HHAN significantly outperforms other state-of-the-art methods in tackling the complex challenge of tracing infectious diseases in heterogeneous populations.
Researcher Affiliation | Academia | (1) Northeastern University, Shenyang, China; (2) Research Institute for Interdisciplinary Sciences and Key Laboratory of Interdisciplinary Research of Computation and Economics, Shanghai University of Finance and Economics, China.
Pseudocode | No | The paper describes the HHAN model, its modules (Agent-Based Modeling Module and Heterogeneous Graph Neural Network Module), and experimental procedures using equations and descriptive text, but it does not contain a clearly labeled pseudocode or algorithm block.
Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the methodology described is publicly available.
Open Datasets | Yes | ACM Hypertext Conference Dataset: collected during the 2009 ACM Hypertext Conference, where the SocioPatterns project deployed the Live Social Semantics application ... (Isella et al. 2011). School Dataset: contact and friendship relationships among students at a high school in Marseille, France, measured using various techniques in December 2013 (Mastrandrea, Fournet, and Barrat 2015). Hospital Dataset: the contact network between patients and healthcare workers (HCWs) within a hospital ward in Lyon, France, from 1:00 PM on December 6, 2010, to 2:00 PM on December 10, 2010, involving 46 HCWs and 29 patients (Vanhems et al. 2013).
Dataset Splits | Yes | Each generated dataset is randomly divided into training, validation, and test sets in an 8:1:1 ratio.
Hardware Specification | No | The paper describes experimental settings such as learning rates, optimizers, dropout rates, batch sizes, and epochs, but does not provide any specific details regarding the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions using a 'GAT layer' and an 'AdamW optimizer' but does not specify any software libraries (e.g., PyTorch, TensorFlow) or their version numbers that would be necessary to replicate the experiments.
Experiment Setup | Yes | The model is trained with a dynamically adjusted learning rate using a learning rate scheduler. A dropout rate of 0.4 is applied after each GAT layer to prevent overfitting, and the AdamW optimizer is used with a learning rate of 0.005 and a weight decay of 1×10⁻⁴. The learning rate scheduler reduces the learning rate by half if validation performance plateaus, with a minimum learning rate of 1×10⁻⁶. The batch size is set to 20, and training is conducted over 300 epochs to ensure sufficient learning and convergence of the model.
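Since the paper's splitting code is not published, the 8:1:1 random division reported in the Dataset Splits row could be reproduced with a minimal sketch like the following; the function name and fixed seed are assumptions for illustration, not from the paper.

```python
import random

def split_8_1_1(samples, seed=0):
    """Randomly split samples into train/val/test with an 8:1:1 ratio.
    Sketch only: the paper states the ratio but not the seed or method."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)
    n_train = int(0.8 * len(samples))
    n_val = int(0.1 * len(samples))
    train = [samples[i] for i in idx[:n_train]]
    val = [samples[i] for i in idx[n_train:n_train + n_val]]
    test = [samples[i] for i in idx[n_train + n_val:]]
    return train, val, test

train, val, test = split_8_1_1(list(range(100)))
# sizes: 80 / 10 / 10
```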
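The learning-rate schedule in the Experiment Setup row (halve on validation plateau, floor at 1×10⁻⁶) can be sketched in plain Python as below. In PyTorch this would correspond to `torch.optim.lr_scheduler.ReduceLROnPlateau(factor=0.5, min_lr=1e-6)` paired with `AdamW(lr=0.005, weight_decay=1e-4)`, but the paper does not name its framework, and the `patience` value here is an assumption.

```python
class HalveOnPlateau:
    """Minimal sketch of the schedule described in the paper: halve the
    learning rate when validation loss stops improving, never going
    below min_lr. The patience of 5 epochs is an assumed value."""

    def __init__(self, lr=0.005, factor=0.5, min_lr=1e-6, patience=5):
        self.lr = lr
        self.factor = factor
        self.min_lr = min_lr
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        if val_loss < self.best:
            # Improvement: record it and reset the plateau counter.
            self.best = val_loss
            self.bad_epochs = 0
        else:
            # No improvement this epoch; halve the LR after `patience`
            # consecutive flat epochs, clamped at min_lr.
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.bad_epochs = 0
        return self.lr
```

With `patience=2`, three consecutive epochs at the same validation loss trigger one halving: 0.005 → 0.0025.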