Flow-based Time-aware Causal Structure Learning for Sequential Recommendation
Authors: Hangtong Xu, Yuanbo Xu, Huayuan Liu, En Wang
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate FCSRec on manifold real-world datasets, and experimental results show that FCSRec outperforms several state-of-the-art methods in recommendation performance. Our code is available at Code-link. |
| Researcher Affiliation | Academia | MIC Lab, College of Computer Science and Technology, Jilin University EMAIL, EMAIL |
| Pseudocode | No | The paper describes its methodology using mathematical formulations and natural language explanations (e.g., equations 1-16), but it does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at Code-link. The proof of Theorem 1 can be found in the code link. Due to the page limit, the appendix is available in our public code repository at https://github.com/MICLab-Rec/FCSRec. |
| Open Datasets | Yes | To comprehensively and fairly evaluate the model's effectiveness, we conducted experiments using nine publicly available datasets encompassing a variety of recommendation scenarios (such as movies and POIs) and different densities. We select seven datasets of varying sizes ranging from 100k to 10M: Beauty, ML-100K, NYC, TKY, ML-1M, Gowalla and ML-10M to evaluate the robustness of the model to the dataset size. |
| Dataset Splits | Yes | For the data partition, we select each user's last previously un-interacted item as the target during the recommendation procedure and all the prior items for training. |
| Hardware Specification | No | The paper details software implementations and training configurations but does not provide specific hardware details (e.g., GPU models, CPU types) used for running the experiments. |
| Software Dependencies | No | We implement FCSRec and baselines in PyTorch. Our implementation of the baselines is based on the original papers or the open-source codebase RecBole [Zhao et al., 2021]. |
| Experiment Setup | Yes | All models are trained with the Adam optimizer with early stopping at patience = 10. We set the learning rate to 1e-3 and the l2-regularization weight to 1e-6. For FCSRec, we tune the hyper-parameter concepts k in the range of [1, 8] for different datasets. |
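The dataset-split row describes a standard leave-one-out protocol: each user's last item is held out as the test target, and all prior items form the training sequence. A minimal sketch of that split, assuming a simple dict of chronological user sequences (function and variable names are illustrative, not from the paper's code):

```python
def leave_one_out_split(user_sequences):
    """Split each user's chronological item list into train items and a test target."""
    train, test = {}, {}
    for user, items in user_sequences.items():
        if len(items) < 2:
            continue  # need at least one training item plus one held-out target
        train[user] = items[:-1]  # all prior items are used for training
        test[user] = items[-1]    # the last interacted item is the target
    return train, test

interactions = {"u1": [3, 7, 9, 2], "u2": [5, 1]}
train, test = leave_one_out_split(interactions)
print(train)  # {'u1': [3, 7, 9], 'u2': [5]}
print(test)   # {'u1': 2, 'u2': 1}
```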
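The experiment-setup row names concrete training choices: Adam, early stopping with patience = 10, learning rate 1e-3, l2 weight 1e-6, and the concepts hyper-parameter k tuned over [1, 8]. A pure-Python sketch of the early-stopping rule and the reported hyper-parameter grid follows; the `EarlyStopping` class is an assumed illustration, not the authors' implementation:

```python
class EarlyStopping:
    """Stop training after `patience` epochs without validation improvement."""

    def __init__(self, patience=10):
        self.patience = patience
        self.best = float("-inf")
        self.counter = 0

    def step(self, metric):
        """Record one epoch's validation metric; return True when training should stop."""
        if metric > self.best:
            self.best = metric
            self.counter = 0
        else:
            self.counter += 1
        return self.counter >= self.patience

# Hyper-parameters reported in the paper; `k_range` is the tuned concepts range [1, 8].
config = {"lr": 1e-3, "l2_weight": 1e-6, "k_range": range(1, 9)}

stopper = EarlyStopping(patience=10)
metrics = [0.10, 0.12] + [0.11] * 10  # toy per-epoch validation scores
stopped_at = next(i for i, m in enumerate(metrics) if stopper.step(m))
print(stopped_at)  # 11: ten epochs elapse without beating the best score of 0.12
```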