Diffusion Guided Propagation Augmentation for Popularity Prediction

Authors: Chaozhuo Li, Tianqi Yang, Litian Zhang, Xi Zhang

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on benchmark datasets from Twitter, Weibo, and APS demonstrate that DGPA outperforms state-of-the-art methods in early-stage popularity prediction. ... In this section, we perform experiments on three datasets to assess the efficacy of our approach. ... 4.1 Experimental Setup ... 4.2 Overall Performance ... 4.3 Sensitivity to Observation Time ... 4.4 Ablation Study ... 4.5 Hyperparameter Sensitivity Analysis
Researcher Affiliation | Academia | All authors are affiliated with the 'Key Laboratory of Trustworthy Distributed Computing and Service (MoE), Beijing University of Posts and Telecommunications, China'. The email domains 'bupt.edu.cn' and 'buaa.edu.cn' are associated with academic institutions.
Pseudocode | Yes | Algorithm 1 Two-stage Training
Open Source Code | No | The paper does not provide an explicit statement or a direct link to any open-source code repository for the methodology described.
Open Datasets | Yes | We use three datasets, frequently employed in information propagation studies, derived from social media platforms and academic citation networks: the Twitter [Weng et al., 2013] dataset captures information cascades... the Weibo [Cao et al., 2017] dataset includes information cascades... the APS dataset comprises information cascades...
Dataset Splits | Yes | Following the approach of CTCP [Lu et al., 2023], we randomly select 70%, 15%, and 15% of the cascades for training, validation, and testing, respectively.
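The 70%/15%/15% random split quoted above can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the random seed, and the use of a plain shuffle are all assumptions, since the paper reports only the split proportions.

```python
import random

def split_cascades(cascades, seed=42):
    """Randomly split cascades 70% / 15% / 15% into train, validation,
    and test sets, as the paper describes (following CTCP [Lu et al., 2023]).
    The seed is an assumption; the paper does not report one."""
    rng = random.Random(seed)
    shuffled = list(cascades)
    rng.shuffle(shuffled)  # random selection of cascades
    n = len(shuffled)
    n_train = int(0.70 * n)
    n_val = int(0.15 * n)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]  # remainder (~15%)
    return train, val, test

train, val, test = split_cascades(range(1000))
# 700 / 150 / 150 cascades
```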
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers.
Experiment Setup | No | The paper discusses evaluation metrics, baselines, and general experimental setup details for datasets, but it does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) used for training the model.