Initializing Services in Interactive ML Systems for Diverse Users

Authors: Avinandan Bose, Mihaela Curmei, Daniel Jiang, Jamie H. Morgenstern, Sarah Dean, Lillian Ratliff, Maryam Fazel

NeurIPS 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "The theory is complemented by experiments on real as well as semi-synthetic datasets." |
| Researcher Affiliation | Academia | Avinandan Bose (University of Washington), Mihaela Curmei (University of California, Berkeley), Daniel L. Jiang (University of Washington), Jamie Morgenstern (University of Washington), Sarah Dean (Cornell University), Lillian J. Ratliff (University of Washington), Maryam Fazel (University of Washington) |
| Pseudocode | Yes | Algorithm 1: AcQUIre (Adaptively Querying Users for Initialization) |
| Open Source Code | Yes | "All our code is available at https://anonymous.4open.science/r/Multi-Service-Initialization-A422" |
| Open Datasets | Yes | Uses 2021 US Census data and an online movie recommendation task based on the MovieLens 10M dataset [19]. |
| Dataset Splits | No | The paper mentions a 'train set' and 'test set' for the MovieLens data with a 50/50 split, but does not specify a separate validation set for model tuning. |
| Hardware Specification | No | The paper provides no hardware details (GPU models, CPU types, or memory). The NeurIPS Paper Checklist states: "All our experiments can be run on personal devices." |
| Software Dependencies | No | The paper mentions using Surprise, a Python toolkit [23], for movie recommendations but does not give its version number. It also mentions least squares regression without naming a specific library or version. |
| Experiment Setup | No | The paper describes the general experimental steps, such as user selection strategies and how services are updated, but does not give numerical hyperparameters (e.g., learning rates, batch sizes, number of epochs) or system-level training settings. |
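The 50/50 train/test split flagged in the Dataset Splits row can be illustrated with a simple shuffle-and-halve over rating tuples. This is a minimal sketch, not the authors' actual pipeline: the `split_ratings` helper, the seed, and the toy `ratings` data are all illustrative assumptions (the paper uses the MovieLens 10M dataset).

```python
import random

def split_ratings(ratings, seed=0):
    """Shuffle (user, item, rating) tuples and split them 50/50
    into train and test sets; no validation set is carved out,
    matching what the paper reports."""
    rng = random.Random(seed)
    shuffled = ratings[:]
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

# Illustrative toy ratings in place of MovieLens 10M.
ratings = [(u, i, (u * i) % 5 + 1) for u in range(10) for i in range(10)]
train, test = split_ratings(ratings)
print(len(train), len(test))  # 50 50
```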
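The Software Dependencies row notes that the paper uses least squares regression without naming a library. A generic way to do this in Python is NumPy's `lstsq`; the sketch below estimates a user preference vector from item features and noisy ratings. The feature dimension, data, and noise level are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Estimate a user's preference vector theta from observed ratings
# via ordinary least squares: ratings ~ item_features @ theta.
rng = np.random.default_rng(0)
d = 4                                      # latent dimension (illustrative)
item_features = rng.normal(size=(50, d))   # known item representations
theta_true = rng.normal(size=d)            # unknown user preference vector
ratings = item_features @ theta_true + 0.01 * rng.normal(size=50)

theta_hat, *_ = np.linalg.lstsq(item_features, ratings, rcond=None)
```

With 50 observations and low noise, the recovered `theta_hat` closely matches `theta_true`.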