Learning Optimal Auctions with Correlated Value Distributions
Authors: Da Huo, Zhenzhe Zheng, Fan Wu
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that the proposed auction mechanism can represent almost any strategy-proof auction mechanism, and outperforms the auction mechanisms widely used in correlated value settings. |
| Researcher Affiliation | Academia | Department of Computer Science and Engineering, Shanghai Jiao Tong University, EMAIL, EMAIL |
| Pseudocode | No | The algorithm is given in the Appendix. The main text does not contain a structured pseudocode or algorithm block. |
| Open Source Code | No | No explicit statement about open-source code availability or a repository link is provided in the main text of the paper. |
| Open Datasets | No | We generate irregular value distributions to evaluate auctions with correlated values under the most general conditions. Specifically, we first generate random multivariate normal distributions: we sample entries uniformly from the interval [-0.2, 0.2] to create an n × n random matrix A, and the covariance matrix of the distribution is AᵀA. The mean vector of the distribution is sampled from the interval [0, 1]. We then obtain two multivariate normal distributions D1 and D2 using this method. |
| Dataset Splits | Yes | Training spans 100,000 iterations with a minibatch size of B = 128 on a dataset comprising 100,000 training and 10,000 evaluation samples. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) are provided for the experimental setup. |
| Software Dependencies | No | We implement CAN using TensorFlow |
| Experiment Setup | Yes | Hyperparameters: We implement CAN using TensorFlow and configure a MIN-MAX neural network with |Z| = 4 groups of |Q| = 4 linear functions. Training spans 100,000 iterations with a minibatch size of B = 128 on a dataset comprising 100,000 training and 10,000 evaluation samples. We employ the Adam optimizer with a learning rate of η = 0.001. |
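The data-generation procedure quoted under "Open Datasets" can be reproduced directly: draw a random n × n matrix A with entries in [-0.2, 0.2], take AᵀA as the covariance (positive semi-definite by construction), and draw the mean from [0, 1]. The sketch below, in NumPy rather than the paper's TensorFlow, assumes this reading; the function name and signature are illustrative.

```python
import numpy as np

def sample_correlated_values(n, size, rng=None):
    """Sketch of the paper's irregular-distribution generator:
    a random n x n matrix A with uniform entries in [-0.2, 0.2],
    covariance A^T A, and a mean vector drawn uniformly from [0, 1]."""
    rng = np.random.default_rng(rng)
    A = rng.uniform(-0.2, 0.2, size=(n, n))
    cov = A.T @ A                      # positive semi-definite by construction
    mean = rng.uniform(0.0, 1.0, size=n)
    return rng.multivariate_normal(mean, cov, size=size)

# Draw one training-set-sized sample of correlated bidder values.
values = sample_correlated_values(n=4, size=100_000, rng=0)
```

Sampling the generator twice with independent seeds yields the two distributions D1 and D2 mentioned in the paper.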
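The "MIN-MAX neural network with |Z| = 4 groups of |Q| = 4 linear functions" is the standard max-of-mins construction: each group takes the minimum of its |Q| affine functions, and the output is the maximum across the |Z| groups. A minimal NumPy sketch under that assumption (the class name and initialization are hypothetical; the paper's CAN implementation uses TensorFlow):

```python
import numpy as np

class MinMaxNet:
    """Max over |Z| groups of the min over |Q| affine functions per group."""

    def __init__(self, n_inputs, Z=4, Q=4, rng=None):
        rng = np.random.default_rng(rng)
        self.W = rng.normal(size=(Z, Q, n_inputs))  # weights w_{zq}
        self.b = rng.normal(size=(Z, Q))            # biases b_{zq}

    def __call__(self, x):
        # affine[z, q] = w_{zq} . x + b_{zq}
        affine = self.W @ x + self.b
        # min within each group, then max across groups
        return affine.min(axis=1).max()
```

The resulting function is continuous and piecewise linear, which is what lets a small network (here 4 × 4 = 16 affine pieces) approximate a broad family of mechanisms.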