Flow Matching Based Sequential Recommender Model

Authors: Feng Liu, Lixin Zou, Xiangyu Zhao, Min Tang, Liming Dong, Dan Luo, Xiangyang Luo, Chenliang Li

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive evaluations on four benchmark datasets reveal that FMRec achieves an average improvement of 6.53% over state-of-the-art methods. The replication code is available at https://github.com/FengLiu-1/FMRec. Section 5 (Experiment): "This section presents comprehensive experiments to demonstrate the effectiveness of FMRec."
Researcher Affiliation | Academia | 1 Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University; 2 City University of Hong Kong; 3 Monash University; 4 National Defense University; 5 Lehigh University; 6 State Key Lab of Mathematical Engineering and Advanced Computing
Pseudocode | No | The training and inference procedures of FMRec are presented in Appendix A. The provided paper text does not include Appendix A, so no pseudocode is available in the given content.
Open Source Code | Yes | The replication code is available at https://github.com/FengLiu-1/FMRec.
Open Datasets | Yes | Dataset: We evaluate FMRec's effectiveness using four widely recognized publicly available datasets: (1) Amazon Beauty [Ni et al., 2019]... (2) Steam... (3) MovieLens-100k [Harper and Konstan, 2015]... and (4) Yelp.
Dataset Splits | Yes | Following the procedures in [Sun et al., 2019; Li et al., 2023], we split each user interaction sequence into three parts: the first m-2 items formed the training set, while i_{m-1} and i_m served as targets for the validation and test sets, respectively.
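The quoted leave-one-out protocol can be sketched as follows (a minimal illustration, assuming each user's history is an ordered list of item IDs; the helper name is hypothetical and not from the paper's code):

```python
def leave_one_out_split(sequence):
    """Split one user's interaction sequence [i_1, ..., i_m] into a
    training prefix and single validation/test targets, per the quote."""
    assert len(sequence) >= 3, "need at least 3 interactions"
    train = sequence[:-2]        # i_1 ... i_{m-2}
    valid_target = sequence[-2]  # i_{m-1}
    test_target = sequence[-1]   # i_m
    return train, valid_target, test_target

# Example: a user with five interactions.
train, val, test = leave_one_out_split([10, 23, 7, 42, 5])
```

This matches the common leave-one-out evaluation in sequential recommendation, where only the last two items per user are held out.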
Hardware Specification | Yes | All experiments are conducted on a server with two Intel Xeon 6271C processors, 256 GB of memory, and four NVIDIA RTX 3090 Ti GPUs.
Software Dependencies | No | The provided paper text describes hyperparameter settings and model architecture details but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | Hyperparameters include a batch size of 512, a learning rate of 0.001, and a maximum user interaction sequence length of 50. The loss weighting parameters α and β are set to 0.2 and 0.4, respectively. The scaling parameter s in the timestep schedule is set to 1.0. Besides, we use 30 Euler integration steps for generation.
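The "30 Euler integration steps" setting refers to the standard fixed-step Euler solver used to generate samples from a flow-matching model by integrating the learned velocity field from t=0 to t=1. A minimal sketch, assuming a generic velocity function as a stand-in for FMRec's trained network (the function names and toy field below are illustrative, not from the paper):

```python
import numpy as np

def euler_generate(velocity_fn, x0, num_steps=30):
    """Integrate dx/dt = v(x, t) from t=0 to t=1 with fixed-step Euler,
    matching the 30-step generation setting quoted above."""
    x = x0
    dt = 1.0 / num_steps
    for step in range(num_steps):
        t = step * dt
        x = x + dt * velocity_fn(x, t)  # one Euler update
    return x

# Toy velocity field that pulls samples toward a fixed target embedding.
target = np.array([1.0, -1.0])
x1 = euler_generate(lambda x, t: target - x, np.zeros(2), num_steps=30)
```

More steps trade compute for a closer approximation of the underlying ODE trajectory; 30 is the operating point the paper reports.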