Notice: The reproducibility variables underlying each score are classified by an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Coevolutionary Latent Feature Processes for Continuous-Time User-Item Interactions

Authors: Yichen Wang, Nan Du, Rakshit Trivedi, Le Song

NeurIPS 2016 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on diverse real-world datasets demonstrate significant improvements in user behavior prediction compared to state-of-the-arts. We evaluate our framework, COEVOLVE, on synthetic and real-world datasets."
Researcher Affiliation | Collaboration | Google Research; College of Computing, Georgia Institute of Technology (author emails redacted).
Pseudocode | No | The paper states "We provide details in the appendix" for the algorithm, but the appendix is not included in the provided text, so no pseudocode or algorithm blocks are visible in the given content.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described; there are no repository links or explicit code-release statements.
Open Datasets | Yes | "Our datasets are obtained from three different domains from the TV streaming services (IPTV), the commercial review website (Yelp) and the online media services (Reddit). Yelp is available from Yelp Dataset Challenge Round 7."
Dataset Splits | Yes | "We use all the events up to time T·p as the training data, and the rest as testing data, where T is the length of the observation window. We tune hyper-parameters and the latent rank of other baselines using 10-fold cross validation with grid search. We vary the proportion p ∈ {0.7, 0.72, 0.74, 0.76, 0.78} and report the averaged results over five runs."
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not specify ancillary software with version numbers; no programming languages or libraries with their versions are mentioned.
Experiment Setup | Yes | "We tune hyper-parameters and the latent rank of other baselines using 10-fold cross validation with grid search. We vary the proportion p ∈ {0.7, 0.72, 0.74, 0.76, 0.78} and report the averaged results over five runs."
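
The time-based split quoted above (all events up to time T·p for training, the rest for testing) can be sketched as follows. This is a minimal illustration, not the authors' code; the function name `time_based_split` and the toy timestamps are hypothetical.

```python
def time_based_split(event_times, T, p):
    """Split sorted event timestamps at the cutoff p*T: events at or
    before the cutoff form the training set, later events the test set."""
    cutoff = p * T
    train = [t for t in event_times if t <= cutoff]
    test = [t for t in event_times if t > cutoff]
    return train, test


if __name__ == "__main__":
    # Hypothetical event stream over an observation window of length T = 10.
    events = [0.5, 1.2, 3.4, 5.0, 6.8, 7.1, 8.9, 9.5]
    T = 10.0
    # The paper varies p over {0.7, 0.72, 0.74, 0.76, 0.78} and averages
    # results over five runs.
    for p in (0.7, 0.72, 0.74, 0.76, 0.78):
        train, test = time_based_split(events, T, p)
        print(p, len(train), len(test))
```

Because the split is by time rather than a random shuffle, every training event precedes every test event, matching the forward-prediction setting of continuous-time user-item interaction models.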