Future Sight and Tough Fights: Revolutionizing Sequential Recommendation with FENRec

Authors: Yu-Hsuan Huang, Ling Lo, Hongxia Xie, Hong-Han Shuai, Wen-Huang Cheng

AAAI 2025

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experiment results demonstrate our state-of-the-art performance across four benchmark datasets, with an average improvement of 6.16% across all metrics." |
| Researcher Affiliation | Academia | ¹National Yang Ming Chiao Tung University, ²Jilin University, ³National Taiwan University |
| Pseudocode | No | The paper describes the methodology using mathematical formulations and descriptive text, but does not include a distinct pseudocode or algorithm block. |
| Open Source Code | No | The paper makes no concrete statement about source-code availability and provides no link to a code repository. |
| Open Datasets | Yes | "We use the Amazon dataset with three categories: Sports, Beauty, Toys, and the Yelp dataset." |
| Dataset Splits | No | The paper describes how training subsequences are constructed from user interaction sequences, but it gives no train/validation/test split percentages or sample counts for the datasets used in the experiments. |
| Hardware Specification | No | The paper does not specify the hardware (e.g., GPU models, CPU types, memory) used to run the experiments. |
| Software Dependencies | No | The paper lists hyperparameters and experimental settings, but no software dependencies with version numbers (programming languages, libraries, or frameworks). |
| Experiment Setup | Yes | "Parameter tuning is meticulously carried out; τ2 is varied within the set {8, 10}, while τ1 was fixed at 1, µ at 0.1, and m at 0.2. The parameters γ and λ are each tuned over {0.1, 0.2, 0.3, 0.4, 0.5}. We incorporate enduring hard negatives into the training process after a 20-epoch warm-up period. All experiments were conducted three times, and results were averaged for comparison." |
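Since the paper releases no code, the reported search space can only be reconstructed from the quoted setup. The sketch below enumerates that space under stated assumptions: the parameter names (`tau1`, `tau2`, `mu`, `m`, `gamma`, `lambda`) are illustrative labels for the paper's symbols, and the grid values are exactly those quoted above.

```python
from itertools import product

# Values fixed in the paper's reported setup.
FIXED = {"tau1": 1.0, "mu": 0.1, "m": 0.2}
# Tuned grids as quoted: τ2 ∈ {8, 10}; γ, λ ∈ {0.1, ..., 0.5}.
TAU2_GRID = [8, 10]
GAMMA_GRID = [0.1, 0.2, 0.3, 0.4, 0.5]
LAMBDA_GRID = [0.1, 0.2, 0.3, 0.4, 0.5]
WARMUP_EPOCHS = 20  # enduring hard negatives join training after warm-up
N_RUNS = 3          # each configuration is run three times and averaged

def configs():
    """Yield every hyperparameter combination in the quoted tuning grid."""
    for tau2, gamma, lam in product(TAU2_GRID, GAMMA_GRID, LAMBDA_GRID):
        yield {**FIXED, "tau2": tau2, "gamma": gamma, "lambda": lam,
               "warmup_epochs": WARMUP_EPOCHS, "n_runs": N_RUNS}

grid = list(configs())
print(len(grid))  # 2 × 5 × 5 = 50 combinations
```

At 50 combinations times three runs each, the quoted protocol implies 150 training runs per dataset, which is worth knowing when budgeting a reproduction attempt.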