Fuzzy Collaborative Reasoning

Authors: Huanhuan Yuan, Pengpeng Zhao, Jiaqing Fan, Junhua Fang, Guanfeng Liu, Victor S. Sheng

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments conducted on publicly available datasets demonstrate the superiority of this method in solving the sequential recommendation task." [...] "Experiments: In this section, we provide the details about experimental settings and results." [...] "Overall Performance: The experimental results on three public datasets are shown in Table 2."
Researcher Affiliation | Academia | "Huanhuan Yuan1,2, Pengpeng Zhao1*, Jiaqing Fan1, Junhua Fang1, Guanfeng Liu2, Victor S. Sheng3; 1Soochow University, 2Macquarie University, 3Texas Tech University; EMAIL, EMAIL, EMAIL, EMAIL"
Pseudocode | No | The paper describes methods and processes verbally and with a computation graph (Figure 1), but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | "ML100k (Harper and Konstan 2016): MovieLens is one of the most widely used datasets for recommendation. It includes 100,000 ratings provided by 943 users. Amazon (He and McAuley 2016): The Amazon dataset collection includes various e-commerce datasets crawled from the Amazon website. In our experiment, we select two relatively sparse subsets, Beauty and Sports, from the Amazon dataset for a comparative analysis."
Dataset Splits | Yes | "The last two positive interactions for each user are assigned to the validation set and test set, respectively, with the remaining historical interactions used for training."
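The quoted leave-one-out protocol can be sketched as follows; `leave_one_out_split` and the dict-of-sequences input format are illustrative assumptions, not names from the paper:

```python
def leave_one_out_split(user_sequences):
    """Split per-user interaction histories as described in the paper:
    the last two positive interactions go to validation and test,
    respectively; everything earlier is kept for training.

    user_sequences: dict mapping user id -> chronologically ordered item list.
    """
    train, valid, test = {}, {}, {}
    for user, items in user_sequences.items():
        if len(items) < 3:
            # Too short to hold out two interactions; our assumption is to
            # keep such sequences entirely in the training set.
            train[user] = list(items)
            continue
        train[user] = list(items[:-2])
        valid[user] = items[-2]  # second-to-last interaction
        test[user] = items[-1]   # last interaction
    return train, valid, test
```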
Hardware Specification | Yes | "We conduct our implementation of all methods by using PyTorch (Paszke et al. 2017) with the Adam optimizer (Kingma and Ba 2015) on a 32GB Tesla V100-PCIE GPU."
Software Dependencies | No | The paper mentions "PyTorch (Paszke et al. 2017)" and the "Adam optimizer (Kingma and Ba 2015)" but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | "All models are trained for 100 epochs with a learning rate of 0.001, applying early stopping after 20 epochs. The L2 regularization weight λ is set to 0.0001. Unless specified otherwise, the sequence length is configured to 5. We use a batch size of 256 and set the embedding dimension to 64. The coefficient γcoff is adjusted to 15 for the Sports dataset and 25 for the other two datasets."
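A minimal sketch of the reported configuration, with a patience-based stopping check reflecting our reading of "early stopping after 20 epochs" (20 epochs without validation improvement); all names here are illustrative, not from the paper:

```python
# Hyperparameters quoted from the paper's experiment setup.
CONFIG = {
    "epochs": 100,
    "learning_rate": 1e-3,
    "l2_lambda": 1e-4,     # L2 regularization weight λ
    "max_seq_len": 5,
    "batch_size": 256,
    "embedding_dim": 64,
    "patience": 20,        # assumed early-stopping patience
}

def should_stop(val_metrics, patience=CONFIG["patience"]):
    """Return True once the best validation metric (higher is better)
    is at least `patience` epochs old."""
    if len(val_metrics) <= patience:
        return False
    best_epoch = max(range(len(val_metrics)), key=val_metrics.__getitem__)
    return len(val_metrics) - 1 - best_epoch >= patience
```

With PyTorch, the L2 weight would typically be passed to Adam as `weight_decay=CONFIG["l2_lambda"]`, though the paper does not state how λ is applied.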