Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Learning from Rational Behavior: Predicting Solutions to Unknown Linear Programs

Authors: Shahin Jabbari, Ryan M. Rogers, Aaron Roth, Steven Z. Wu

NeurIPS 2016 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We give mistake bound learning algorithms in two settings: in the first, the objective of the LP is known to the learner but there is an arbitrary, fixed set of constraints which are unknown... In the second setting, the objective of the LP is unknown, and changing in a controlled way. The constraints of the LP may also change every day, but are known.
Researcher Affiliation | Academia | University of Pennsylvania {jabbari@cis, ryrogers@sas, aaroth@cis, wuzhiwei@cis}.upenn.edu
Pseudocode | No | The paper describes the 'Learn Edge', 'Learn Hull', and 'Learn Ellipsoid' algorithms in prose, but it does not provide them in structured pseudocode blocks.
Open Source Code | No | The paper does not provide any statements about open-sourcing code or links to repositories.
Open Datasets | No | The paper is theoretical and does not use or refer to any specific datasets for training or evaluation.
Dataset Splits | No | The paper is theoretical and does not involve dataset splits such as training, validation, or test sets.
Hardware Specification | No | The paper is theoretical and does not describe the hardware used for any experiments.
Software Dependencies | No | The paper mentions mathematical algorithms such as the 'Ellipsoid algorithm' and an 'LP solver' but does not specify any particular software packages or version numbers required for reproduction.
Experiment Setup | No | The paper is theoretical and focuses on algorithm design and analysis; it does not describe an experimental setup with hyperparameters or specific training configurations.
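To make the setting concrete: in the paper's first model, an optimizer repeatedly solves a linear program whose constraints are fixed but unknown to the learner, and the learner observes only the optimal solutions. The sketch below illustrates one such LP solve with hypothetical constraints (the matrix `A`, vector `b`, and objective are made up for demonstration and are not from the paper); it uses `scipy.optimize.linprog` as a stand-in solver.

```python
# Hypothetical illustration of the paper's first setting: an optimizer
# solves max c.x subject to Ax <= b, x >= 0, and a learner observes only
# the optimal vertex x*, never the constraints (A, b) themselves.
# All numbers here are invented for demonstration.
import numpy as np
from scipy.optimize import linprog

# Constraint set, unknown to the learner: Ax <= b, x >= 0.
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 6.0])

def observed_solution(c):
    """Return the optimal vertex for max c.x (linprog minimizes, so negate c)."""
    res = linprog(-c, A_ub=A, b_ub=b,
                  bounds=[(0, None), (0, None)], method="highs")
    return res.x

# Across rounds, the learner sees only (objective, optimal solution) pairs.
x_star = observed_solution(np.array([1.0, 1.0]))
```

Each observed vertex lies on the boundary of the unknown feasible polytope, which is the information the paper's mistake-bound algorithms exploit.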