Computing Perfect Bayesian Equilibria in Sequential Auctions with Verification
Authors: Vinzenz Thoma, Vitor Bosshard, Sven Seuken
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our algorithm on multiple settings with known equilibria and apply it to a new multi-round combinatorial auction. Section 6 (Experiments): We evaluate our algorithm in a range of settings with and without known analytical results. Unless otherwise specified, we perform 10 runs per setting, using a partition with uniform intervals and a grid size of 100 for our piecewise-constant strategies, 100 iterations of iterated best responses to compute BNEs, and up to 100,000 MC samples for search and 200,000 for verification. Our algorithm is embarrassingly parallelizable, as inherently all ϖ_t in round t can be solved independently of each other. Therefore, the wall-clock time for solving each setting is a small fraction of the reported core hours. Below we describe the different settings and our corresponding results. Table 1: Average L2 distances (with standard error below 10^-5) to the known equilibrium and the runtime (in core hours). |
| Researcher Affiliation | Academia | ¹ETH Zurich, ²University of Zurich, ³ETH AI Center. EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: Recursive PBE Computation |
| Open Source Code | Yes | We provide our source code in the supplementary material. |
| Open Datasets | No | The paper describes generating synthetic data for its experiments, e.g., "single-minded bidders with values uniformly distributed on [0, 1]" and "bidders costs are distributed uniformly on [1, 2]". It does not reference any external, publicly available datasets with access information. |
| Dataset Splits | No | The paper uses simulated data generated from uniform distributions, rather than pre-existing datasets that would require explicit training/testing/validation splits. Therefore, no dataset split information is provided. |
| Hardware Specification | Yes | We ran all experiments on a Debian compute cluster with 20 nodes, each node having 128 GB RAM and two Intel E5-2680 2.80GHz processors for a total of 40 cores. |
| Software Dependencies | No | The paper mentions methods like "pattern search" and "Monte Carlo (MC) integration" but does not specify any particular software libraries, frameworks, or their version numbers used for implementation. |
| Experiment Setup | Yes | Unless otherwise specified, we perform 10 runs per setting, using a partition with uniform intervals and a grid size of 100 for our piecewise-constant strategies, 100 iterations of iterated best responses to compute BNEs, and up to 100,000 MC samples for search and 200,000 for verification. Our algorithm is embarrassingly parallelizable, as inherently all ϖ_t in round t can be solved independently of each other. In the second round, both bidders bid truthfully. In the first round, the global bidder shades its bids for types less than 1.1 due to the uncertainty of winning B in the second round. In contrast, for types larger than 1.1, the global bidder, which is likely to win B as well, bids truthfully. This interesting, discontinuous behavior shows the potential of our approach for studying less-understood auction formats. |