Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Multiplayer Performative Prediction: Learning in Decision-Dependent Games
Authors: Adhyyan Narang, Evan Faulkner, Dmitriy Drusvyatskiy, Maryam Fazel, Lillian J. Ratliff
JMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Synthetic and semi-synthetic numerical experiments illustrate the results. Further experiments are contained in Appendix E. |
| Researcher Affiliation | Academia | Adhyyan Narang EMAIL Department of Electrical and Computer Engineering University of Washington Seattle, WA 98195-4322, USA |
| Pseudocode | Yes | Algorithm 1: Adaptive Gradient Method |
| Open Source Code | Yes | The data and code used in this paper are publicly available (https://github.com/ratlifflj/performativepredictiongames). |
| Open Datasets | Yes | We use data from a prior Kaggle competition to set up the semi-synthetic simulation environment. The data used in this paper is publicly available (https://www.kaggle.com/brllrb/uber-and-lyft-dataset-boston-ma). |
| Dataset Splits | No | The paper describes generating problem instances and constructing semi-synthetic data by aggregating rides into bins and using empirical distributions, but it does not provide specific training/test/validation dataset splits (percentages, sample counts, or explicit split files). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'scipy.sparse.random' and 'Mathematica or sympy' but does not provide specific version numbers for these or any other software components used in the experiments. |
| Experiment Setup | Yes | In Figure 1a we show the iteration complexity of the norm-square error to the Nash equilibrium for the stochastic gradient method, the adaptive gradient method, and players independently playing according to derivative-free optimization. Instance generation. We randomly generate problem instances, namely the parameters A_i, Ā_i for i ∈ [n], using scipy.sparse.random, which allows the sparsity of the matrix to be set in addition to randomly generating the matrix parameters. Furthermore, we set θ ∈ R^{d×m} with entries distributed as N(0, 0.01), σ_w² = 0.1, and ϕ_i(θ) = θ1_d·1, for d = 2 and m = 5 for the experiments in Figure 1. Throughout the remainder of this section, we set λ_1 = λ_2 = 1. In the experiments presented, we estimate the matrices A_i and Ā_i that govern the performative effects from the data. The details of this estimation are outlined in Appendix E, and the heuristics used to set up the semi-synthetic model can be changed in the codebase. The semi-synthetic model is constructed such that there are no performative effects across different locations, which amounts to zero off-diagonal elements in the matrices A_i and Ā_i. We run each of the algorithms in Section 6 from twenty random initial conditions and compute the error between the trajectory of the algorithm and the Nash equilibrium. The parameters used in the algorithms are set based on the respective theorems. |
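The instance-generation step quoted above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the function name `generate_instance`, the matrix dimensions, and the sparsity `density` are assumptions; only the use of `scipy.sparse.random` for the performative-effect matrices and the N(0, 0.01) initialization of θ come from the quoted description.

```python
import numpy as np
from scipy.sparse import random as sparse_random

def generate_instance(n=2, d=2, m=5, density=0.5, seed=0):
    """Sketch of a random problem-instance generator (hypothetical helper).

    Draws one sparse pair (A_i, A_bar_i) per player i in [n] and a decision
    variable theta with N(0, 0.01) entries, mirroring the setup quoted in
    the Experiment Setup row. Matrix shapes here are an assumption.
    """
    rng = np.random.default_rng(seed)
    # scipy.sparse.random lets us control sparsity via `density` while
    # drawing the nonzero entries at random, as described in the paper.
    A = [sparse_random(d, d, density=density, random_state=rng).toarray()
         for _ in range(n)]
    A_bar = [sparse_random(d, d, density=density, random_state=rng).toarray()
             for _ in range(n)]
    # Entries ~ N(0, 0.01): standard deviation 0.1 gives variance 0.01.
    theta = rng.normal(0.0, 0.1, size=(d, m))
    return A, A_bar, theta
```

In an experiment loop one would call this once per random trial (the paper reports twenty random initial conditions) and track the squared error of each algorithm's iterates to the Nash equilibrium.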