Cost-Effective On-Device Sequential Recommendation with Spiking Neural Networks

Authors: Di Yu, Changze Lv, Xin Du, Linshan Jiang, Qing Yin, Wentao Tong, Xiaoqing Zheng, Shuiguang Deng

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on real-world datasets demonstrate the superiority of SSR. Compared to other SR baselines, SSR achieves comparable recommendation performance while reducing energy consumption by an average of 59.43%.
Researcher Affiliation | Collaboration | 1 Zhejiang University, 2 Fudan University, 3 National University of Singapore, 4 JD.com
Pseudocode | Yes | Algorithm 1: Model Inference of SSR.
    Input: input sequence x, top value k
    Output: top-k recommendation list P_k
    1: Query the spike-wise representation X with x.
    2: Encode X to H.
    3: for each filter block l ∈ {1, ..., L} do
    4:     Learnable 1D-DFT: H̃^l = F(H^l) ⊙ W^l
    5:     1D-IDFT: F^l = F^{-1}(H̃^l)
    6:     Convert F^l to spikes with SN(·)
    7: end for
    8: Densify: H^L ← Linear(F^L)
    9: Compute preference scores P with H^L and X.
    10: Sort P in descending order.
    11: Cut the top-k items from P to form P_k.
    12: return P_k
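The filter-block loop of the algorithm can be sketched in NumPy. This is a hypothetical reading, not the authors' implementation: the frequency-domain filter is taken to be an elementwise product, the spiking neuron SN(·) is approximated by a hard threshold, and the "densify" step by a mean over time steps followed by scoring against an item-embedding table.

```python
import numpy as np

rng = np.random.default_rng(0)

def heaviside_spike(x, threshold=1.0):
    # Stand-in for SN(.): emit a spike (1.0) wherever the input crosses the threshold.
    return (x >= threshold).astype(np.float32)

def ssr_inference(x_seq, item_emb, filters, k=10):
    """Hypothetical sketch of Algorithm 1 (names and shapes are assumptions).

    x_seq:    (T, n, d) spike-wise input representation X over T time steps
    item_emb: (V, d) item embedding table used for preference scoring
    filters:  list of (n, d) complex frequency-domain weights, one per block
    """
    h = x_seq
    for W in filters:                        # filter blocks l = 1..L
        H = np.fft.fft(h, axis=1)            # 1D-DFT along the sequence axis
        F = np.fft.ifft(H * W, axis=1).real  # elementwise filter, then 1D-IDFT
        h = heaviside_spike(F)               # convert back to spikes
    dense = h.mean(axis=0)                   # "densify": average over time steps
    scores = dense[-1] @ item_emb.T          # preference scores from last position
    return np.argsort(-scores)[:k]           # top-k recommendation list P_k

# toy usage: T=4 time steps, sequence length 8, dim 16, vocabulary of 100 items
T, n, d, V = 4, 8, 16, 100
x = (rng.random((T, n, d)) > 0.5).astype(np.float32)
emb = rng.standard_normal((V, d)).astype(np.float32)
W = [rng.standard_normal((n, d)) + 1j * rng.standard_normal((n, d)) for _ in range(2)]
topk = ssr_inference(x, emb, W, k=10)
```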
Open Source Code | Yes | https://github.com/AmazingDD/serenRec
Open Datasets | Yes | Five public datasets collected from various platforms are selected to evaluate the efficacy of SSR. Their corresponding statistics are summarized in Table 1, sorted by density. Following [Zhou et al., 2022], we group the records by users or sessions, sort them by time in ascending order, and adopt a 5-core strategy for all datasets to ensure each user and item has at least five interaction records.
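A 5-core strategy is typically applied iteratively, since removing a sparse user can push an item below the threshold and vice versa. A minimal sketch (the record layout is an assumption, not the paper's code):

```python
from collections import Counter

def five_core_filter(records, k=5):
    """Iteratively drop users and items with fewer than k interactions.

    records: list of (user, item, timestamp) tuples.
    Repeats until every remaining user and item has at least k records.
    """
    while True:
        user_cnt = Counter(u for u, _, _ in records)
        item_cnt = Counter(i for _, i, _ in records)
        kept = [r for r in records
                if user_cnt[r[0]] >= k and item_cnt[r[1]] >= k]
        if len(kept) == len(records):  # fixed point reached
            return kept
        records = kept

# usage: a dense 5x5 user-item block survives; a one-off user is dropped
interactions = [(u, i, 0) for u in range(5) for i in range(5)]
interactions.append((99, 0, 1))
core = five_core_filter(interactions)
```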
Dataset Splits | Yes | We use the time-aware and user-level split-by-ratio strategy [Sun et al., 2022] to split the whole dataset, where the last 20% of the total item sequences is the test set.
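One common reading of a time-aware, user-level split-by-ratio is a per-user temporal cut: each user's interactions are sorted by time and the last 20% are held out. A sketch under that assumption (the rounding rule is illustrative):

```python
def split_by_ratio(user_seqs, test_ratio=0.2):
    """Per-user temporal split: the last `test_ratio` of each user's
    time-ordered item sequence goes to the test set.

    user_seqs: dict {user: [items sorted by time, ascending]}
    """
    train, test = {}, {}
    for user, seq in user_seqs.items():
        # keep at least one training interaction per user
        cut = max(1, int(round(len(seq) * (1 - test_ratio))))
        train[user] = seq[:cut]
        test[user] = seq[cut:]
    return train, test

# usage: a user with 10 interactions yields an 8 / 2 train-test split
tr, te = split_by_ratio({"u1": list(range(10))})
```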
Hardware Specification | Yes | We assume running SSR on a 45nm neuromorphic hardware [Horowitz, 2014] and other baselines on GPUs, since SNNs can demonstrate low computing energy costs when deployed on neuromorphic chips, and GPUs are the most suitable platform for executing ANNs.
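Energy comparisons of this kind are usually computed from the per-operation figures in [Horowitz, 2014] for 45 nm CMOS, as commonly cited in SNN papers: roughly 4.6 pJ per 32-bit floating-point multiply-accumulate versus 0.9 pJ per accumulate, with the SNN side scaled by spike sparsity and time steps. The formula below is a conventional estimate, not the paper's exact accounting:

```python
# Per-operation energies at 45 nm as commonly cited from [Horowitz, 2014];
# these constants and the formula are assumptions, not taken from the SSR paper.
E_MAC = 4.6e-12  # J per 32-bit floating-point multiply-accumulate (ANN)
E_AC = 0.9e-12   # J per 32-bit floating-point accumulate (SNN)

def ann_energy(macs):
    """Dense ANN layer: every multiply-accumulate is executed."""
    return macs * E_MAC

def snn_energy(synops, firing_rate, timesteps):
    """SNN layer: accumulates happen only where spikes occur,
    repeated over the simulation time steps."""
    return synops * firing_rate * timesteps * E_AC

# example: 1e6 operations, 10% firing rate, T = 4 time steps
ratio = snn_energy(1e6, 0.1, 4) / ann_energy(1e6)
```

With sufficiently sparse spiking activity, the estimated SNN energy falls well below the ANN baseline, which is the mechanism behind the reported savings.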
Software Dependencies | No | The paper mentions "Torch" as an implementation tool but does not provide a specific version number. No other specific software dependencies with versions are listed.
Experiment Setup | Yes | By default, we implement all models with Torch and use the Adam optimizer with a learning rate of 10^-3. The embedding dimension for each item is 64, and the time step size T for LIF neurons is 4.
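The time step size T = 4 means each LIF neuron is unrolled over four discrete simulation steps. A minimal NumPy sketch of such a neuron (the membrane time constant and threshold are illustrative defaults, not values from the paper):

```python
import numpy as np

def lif_forward(currents, tau=2.0, v_th=1.0):
    """Minimal LIF neuron unrolled over T time steps.

    currents: (T, n) input current at each time step for n neurons.
    Returns the (T, n) binary spike train.
    """
    T, n = currents.shape
    v = np.zeros(n)
    spikes = np.zeros((T, n))
    for t in range(T):
        v = v + (currents[t] - v) / tau        # leaky integration
        spikes[t] = (v >= v_th).astype(float)  # fire where threshold is crossed
        v = v * (1.0 - spikes[t])              # hard reset after a spike
    return spikes

# T = 4 time steps as in the experiment setup; strong constant input
s = lif_forward(np.full((4, 3), 2.0))
```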