Coactive Learning

Authors: Pannaga Shivaswamy, Thorsten Joachims

JAIR 2015

Reproducibility variable, result, and supporting evidence:
Research Type: Experimental. Evidence: "An extensive empirical study demonstrates the applicability of our model and algorithms on a movie recommendation task, as well as ranking for web search."
Researcher Affiliation: Collaboration. Evidence: Pannaga Shivaswamy (EMAIL), LinkedIn Corporation; Thorsten Joachims (EMAIL), Department of Computer Science, Cornell University.
Pseudocode: Yes. Evidence: Algorithm 1: Preference Perceptron; Algorithm 2: Batch Preference Perceptron; Algorithm 3: Generic Template for Coactive Learning Algorithms; Algorithm 4: Exponentiated Preference Perceptron; Algorithm 5: Convex Preference Perceptron; Algorithm 6: Second-order Preference Perceptron.
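The core loop of Algorithm 1 (the Preference Perceptron) is simple: present the argmax object under the current weights, observe the user's improved object, and add the feature difference to the weights. A minimal sketch, where `phi`, `argmax_y`, and `user_feedback` are hypothetical callables standing in for the paper's joint feature map, inference step, and coactive feedback:

```python
import numpy as np

def preference_perceptron(w0, contexts, phi, argmax_y, user_feedback):
    """Sketch of the Preference Perceptron update (Algorithm 1).

    phi(x, y)           -> np.ndarray joint feature vector
    argmax_y(w, x)      -> object y maximizing w . phi(x, y)
    user_feedback(x, y) -> improved object y_bar (coactive feedback)
    """
    w = np.array(w0, dtype=float)
    for x in contexts:                        # t = 1 .. T
        y = argmax_y(w, x)                    # present best object under w_t
        y_bar = user_feedback(x, y)           # observe weakly better object
        w = w + phi(x, y_bar) - phi(x, y)     # perceptron-style update
    return w
```

On rounds where the user returns the presented object unchanged, the update is zero, so the weights only move when the feedback actually improves on the prediction.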
Open Source Code: No. Evidence: No explicit statement about providing source code or a link to a repository is found in the paper.
Open Datasets: Yes. Evidence: "Our first dataset is a publicly available dataset from Yahoo! (Chapelle & Chang, 2011) for learning to rank in web search. We used the MovieLens dataset from grouplens.org, which consists of a million ratings over 3900 movies as rated by 6040 users."
Dataset Splits: Yes. Evidence: "We randomly divided users into two equally sized sets. The first set was used to obtain a feature vector x_j for each movie j using the SVD embedding method for collaborative filtering (see Bell & Koren, 2007, Eqn. (15)). For the second set of users, we then considered the problem of recommending movies... After there were more than 50 pairs in the training set, the C value was obtained via five-fold cross-validation."
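The five-fold cross-validation step for choosing C can be sketched generically. In the sketch below, `train_fn` and `loss_fn` are hypothetical stand-ins for the paper's pairwise trainer and its validation loss; only the fold bookkeeping is shown:

```python
import numpy as np

def pick_C_by_cv(pairs, train_fn, loss_fn, C_grid, k=5, seed=0):
    """Choose C by k-fold cross-validation over preference pairs.

    train_fn(train_pairs, C) -> fitted model
    loss_fn(model, val_pairs) -> validation loss (lower is better)
    """
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(pairs)), k)
    best_C, best_loss = None, float("inf")
    for C in C_grid:
        losses = []
        for i in range(k):
            val = [pairs[j] for j in folds[i]]
            train = [pairs[j] for f in folds[:i] + folds[i + 1:] for j in f]
            model = train_fn(train, C)
            losses.append(loss_fn(model, val))
        avg = float(np.mean(losses))
        if avg < best_loss:
            best_C, best_loss = C, avg
    return best_C
```

The shuffle-then-split keeps the folds disjoint and roughly equal in size, matching the usual five-fold protocol the quote refers to.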
Hardware Specification: No. Evidence: No specific hardware details (such as CPU or GPU models, or memory) used for running the experiments are mentioned in the paper.
Software Dependencies: No. Evidence: The paper does not state software dependencies with version numbers (e.g., programming-language, library, or solver versions) used to implement the algorithms or run the experiments.
Experiment Setup: Yes. Evidence: "The γ value in the second-order perceptron was simply set to one. B was set to 100 for both algorithms on both datasets." The exponentiated preference perceptron works in a doubled feature space:

    [w*_e]_i = max(0, [w*]_i),          1 ≤ i ≤ m
    [w*_e]_i = -min(0, [w*]_{i-m}),     m+1 ≤ i ≤ 2m      (22)

    [φ_e(x, y)]_i = +[φ(x, y)]_i,       1 ≤ i ≤ m
    [φ_e(x, y)]_i = -[φ(x, y)]_{i-m},   m+1 ≤ i ≤ 2m      (23)
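The doubling in Eqs. (22)-(23) is the standard trick for keeping exponentiated-gradient weights nonnegative while preserving inner products: the first m coordinates carry the positive parts of w* against +φ, the second m carry the negative parts against -φ. A small check, assuming this usual positive/negative-part splitting (the helper names `lift_w` and `lift_phi` are ours, not the paper's):

```python
import numpy as np

def lift_w(w):
    # Eq. (22): positive parts, then negated negative parts (all entries >= 0)
    return np.concatenate([np.maximum(0.0, w), -np.minimum(0.0, w)])

def lift_phi(phi):
    # Eq. (23): features duplicated with the sign flipped in the second half
    return np.concatenate([phi, -phi])
```

Because max(0, w_i)·φ_i + (-min(0, w_i))·(-φ_i) = w_i·φ_i, the lifted inner product equals the original one, while the lifted weights stay nonnegative as the multiplicative updates of Algorithm 4 require.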