Inverse Game Theory: An Incenter-Based Approach

Authors: Lvye Cui, Haoran Yu, Pierre Pinson, Dario Paccagnan

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical experiments on three game applications demonstrate that our methods outperform the state of the art. The code, datasets, and supplementary material are available at https://github.com/cuilvye/Incenter-Project.
Researcher Affiliation | Academia | ¹School of Computer Science & Technology, Beijing Institute of Technology; ²Department of Computing, Imperial College London; ³Dyson School of Design Engineering, Imperial College London.
Pseudocode | Yes | Algorithm 1: Mirror Descent for Problem (8); Algorithm 2: Primal-Dual Interior-Point for Problem (15).
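Problem (8) itself is not reproduced in this report, but the mirror-descent template named in Algorithm 1 can be sketched generically. The entropic mirror map over the simplex, the toy objective, and all parameter values below are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def entropic_mirror_descent(grad, x0, eta=0.5, steps=500):
    """Mirror descent over the probability simplex with the entropic
    mirror map (multiplicative-weights update). The mirror map, step
    size, and iteration count are assumptions; the paper's Problem (8)
    may use a different feasible set and geometry."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x * np.exp(-eta * grad(x))  # exponentiated gradient step
        x = x / x.sum()                 # Bregman projection onto the simplex
    return x

# Toy objective (hypothetical, not from the paper): minimize ||x - c||^2
# over the simplex; since c lies on the simplex, the minimizer is c itself.
c = np.array([0.2, 0.3, 0.5])
x_star = entropic_mirror_descent(lambda x: 2.0 * (x - c), np.full(3, 1 / 3))
```

With the entropic mirror map the iterates stay strictly inside the simplex by construction, so no explicit projection step beyond renormalization is needed.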
Open Source Code | Yes | The code, datasets, and supplementary material are available at https://github.com/cuilvye/Incenter-Project.
Open Datasets | Yes | The code, datasets, and supplementary material are available at https://github.com/cuilvye/Incenter-Project. Bertrand Competition: Following [Maddux et al., 2023], we generate ϑ by randomly sampling its elements from Gaussian distributions: θ₁₁ ∼ N(−1.2, 0.5²), θ₁₂ ∼ N(0.5, 0.1²), θ₂₁ ∼ N(0.3, 0.1²), θ₂₂ ∼ N(−1, 0.5²), and θᵢ₃, θᵢ₄ ∼ N(1, 0.5²) for i = 1, 2. We take s to be i.i.d. samples from N(5, 1.5²). Given each ŝⱼ, we solve for the equilibrium prices (x̂ⱼ₁, x̂ⱼ₂) using first-order methods. To evaluate different estimation methods, we generate 50 random ϑ. For each ϑ, we create a training dataset D̂train and a test dataset D̂test, both with a size of 500.
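The Bertrand-competition sampling scheme above can be sketched as follows. The random seed and array layout are arbitrary choices, the Gaussian parameters are transcribed from the text, and the equilibrium solve is left as a placeholder, since the actual first-order solver is application-specific and lives in the authors' repository:

```python
import numpy as np

rng = np.random.default_rng(0)  # seed is an arbitrary choice

def sample_theta():
    """One draw of ϑ, element-wise from the Gaussians stated in the text:
    θ11~N(-1.2, 0.5²), θ12~N(0.5, 0.1²), θ21~N(0.3, 0.1²),
    θ22~N(-1, 0.5²), and θi3, θi4~N(1, 0.5²) for i = 1, 2."""
    return np.array([
        [rng.normal(-1.2, 0.5), rng.normal(0.5, 0.1),
         rng.normal(1.0, 0.5), rng.normal(1.0, 0.5)],
        [rng.normal(0.3, 0.1), rng.normal(-1.0, 0.5),
         rng.normal(1.0, 0.5), rng.normal(1.0, 0.5)],
    ])

# 50 random ϑ; for each, 500 training and 500 test context samples s ~ N(5, 1.5²).
thetas = [sample_theta() for _ in range(50)]
s_train = rng.normal(5.0, 1.5, size=500)
s_test = rng.normal(5.0, 1.5, size=500)
# For each ŝ_j one would then solve for equilibrium prices (x̂_j1, x̂_j2)
# with a first-order method; that solver is omitted here.
```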
Dataset Splits | Yes | For each ϑ, we create a training dataset D̂train and a test dataset D̂test, both with a size of 500.
Hardware Specification | No | The paper describes the numerical experiments and evaluation metrics but does not provide specific details about the hardware used to run them.
Software Dependencies | No | The paper describes algorithms such as mirror descent and primal-dual interior-point methods, and mentions specific mathematical tools (e.g., convex optimization, semidefinite programming), but does not list any software libraries, frameworks, or solvers with version numbers used for the implementation.
Experiment Setup | No | The paper mentions hyperparameters such as the regularization weight α and the mirror-descent step size ηₜ, but it does not provide concrete values for them or for other system-level training settings (e.g., number of epochs, learning-rate schedules, batch sizes) needed to reproduce the experiments; it only describes the algorithms and the data-generation process.