Revealing Concept Shift in Spatio-Temporal Graphs via State Learning

Authors: Kuo Yang, Yunhe Guo, Qihe Huang, Zhengyang Zhou, Yang Wang

IJCAI 2025

Reproducibility Variable | Result | LLM Response

Research Type | Experimental | "Finally, we select seven datasets from different domains to validate the effectiveness of our model. By comparing the performance of different models on samples with concept shift, we verify that our Samen gains generalization capacity that existing methods fail to capture." The paper includes a dedicated "7 Experiment" section with subsections on "Experiment Setup", "Performance Analysis on Real-world Datasets", "Generalization Analysis", "Ablation Study", and "Efficiency Analysis", presenting quantitative results in tables (Tables 1 and 2) and figures (Figures 4 and 5).

Researcher Affiliation | Academia | "1 University of Science and Technology of China (USTC), Hefei, China; 2 Suzhou Institute for Advanced Research, USTC, Suzhou, China; EMAIL, EMAIL." All listed affiliations are academic institutions (University of Science and Technology of China and the Suzhou Institute for Advanced Research), and the email domains are academic (.edu.cn).

Pseudocode | No | The paper describes the methodology in narrative text and mathematical formulations but does not include any explicitly labeled pseudocode or algorithm blocks, nor any structured step-by-step procedures formatted like code.

Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository.

Open Datasets | Yes | "We employ seven cross-domain real-world dynamic graph datasets to evaluate our Samen. COLLAB [Tang et al., 2012]... Yelp [Sankar et al., 2020]... ACT [Kumar et al., 2019]... PEMS08 and PEMS04 [Song et al., 2020]... SD2019 and GBA-2019 [Liu et al., 2023]." The paper uses and cites several well-known public datasets, providing proper bibliographic references for each.

Dataset Splits | No | The paper describes the nature of the tasks (e.g., "predict the next 12 steps based on historical 12 steps observations") and the criteria for identifying samples with concept shift, but it does not specify explicit train/validation/test splits (e.g., percentages or sample counts) for the main datasets used in the experiments. It states "We filter samples exhibiting concept shift" but does not provide reproducible partitioning details for the full datasets.

Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run its experiments.

Software Dependencies | No | The paper does not mention any specific software dependencies or library versions (e.g., Python, PyTorch, or TensorFlow versions) used in the implementation or experimentation.

Experiment Setup | No | The paper describes the experimental evaluation, including datasets and baselines, but does not provide specific details regarding hyperparameters (e.g., learning rate, batch size, number of epochs) or other system-level training settings for the models.