Optimal Auction Design in the Joint Advertising
Authors: Yang Li, Yuchao Ma, Qi Qi
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this chapter, we present empirical experiments to demonstrate the effectiveness of BundleNet. All experiments are conducted on a Linux machine equipped with NVIDIA Graphics Processing Unit cores. We compare BundleNet with the following baselines: Optimal Joint Auction Mechanism... VCG... JRegNet... The experimental results, reported in Table 1 and Table 3, illustrate the performance of different auction mechanisms under these settings. The experimental results demonstrate that BundleNet consistently approximates the optimal mechanism across various settings, whereas JRegNet does not always exhibit such proximity. |
| Researcher Affiliation | Academia | ¹Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China; ²Beijing Key Laboratory of Research on Large Models and Intelligent Governance; ³Engineering Research Center of Next-Generation Intelligent Search and Recommendation, MOE. Correspondence to: Qi Qi <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 (BundleNet Training). Input: minibatches $B_1, \dots, B_T$ of size $B$. Parameters: $\rho_t > 0$, $\gamma > 0$, $\eta > 0$, $\Gamma, H \in \mathbb{N}$. Initialize $w_0 \in \mathbb{R}^d$, $\mu_0 \in \mathbb{R}^m$. For $t = 0$ to $T$: receive $B_t = \{G^{(1)}, \dots, G^{(B)}\}$ and initialize misreports $v'^{(\ell)}_r \in V_r$, $v'^{(\ell)}_s \in V_s$ for $\ell \in [B]$, $r \in R$, $s \in S$; for $\Gamma$ inner iterations, update every misreport by gradient ascent, $v'^{(\ell)}_i \leftarrow v'^{(\ell)}_i + \gamma\, \nabla_{v'_i} u^w_i\big(v^{(\ell)}_i; (v'_i, v^{(\ell)}_{-i})\big)\big\vert_{v'_i = v'^{(\ell)}_i}$ for each $\ell \in [B]$, $i \in R \cup S$; compute the Lagrangian gradient and update $w_{t+1} \leftarrow w_t - \eta\, \nabla_w \mathcal{L}_{\rho_t}(w_t, \mu_t)$; if $t$ is a multiple of $H$, update $\mu^e_{t+1} \leftarrow \mu^e_t + \rho_t\, \widehat{rgt}_e(w_{t+1})$ for all $e \in E$. |
| Open Source Code | No | The paper does not contain any explicit statement about providing open-source code, nor does it include a link to a code repository. |
| Open Datasets | No | We generate training and test data from different distributions. The training set consists of 204,800 samples, while the test set contains 20,480 samples. To assess the performance of each method, we utilize the average empirical ex-post regret of the mechanism: $\widehat{rgt} := \frac{1}{2n} \sum_{i \in R \cup S} \widehat{rgt}_i$. Since in real-world scenarios $v_0 = 0$, we also consider empirical revenue: $\widehat{rev} := \frac{1}{L} \sum_{\ell=1}^{L} \sum_{e \in E} p_e(v^{(\ell)})$. In all synthetic data experiments, the joint relationship matrix between stores and brands is randomly generated for each search request sample. |
| Dataset Splits | Yes | We generate training and test data from different distributions. The training set consists of 204,800 samples, while the test set contains 20,480 samples. |
| Hardware Specification | No | All experiments are conducted on a Linux machine equipped with NVIDIA Graphics Processing Unit cores. |
| Software Dependencies | No | The paper mentions using the Adam optimizer, but no specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions) are provided. |
| Experiment Setup | Yes | We optimize the constrained objective (4) by introducing the augmented Lagrangian method. Our loss function is formulated as follows: $\mathcal{L}_\rho(w, \mu) = -\frac{1}{L} \sum_{\ell=1}^{L} \sum_{e \in E} p_e(v^{(\ell)}) + \sum_{e \in E} \mu_e\, \widehat{rgt}_e(w) + \frac{\rho}{2} \sum_{e \in E} \widehat{rgt}_e(w)^2$, where $w$ represents the neural network parameters, $\mu_e$ represents the Lagrangian multipliers associated with the constraints, and $\rho$ is a hyper-parameter controlling the weight of the quadratic penalty term. During optimization, we utilize the Adam optimizer to update our parameters $w$ as well as the misreports $v'^{(\ell)}_r$ and $v'^{(\ell)}_s$ in turn, i.e., we update $w^{new} \leftarrow \arg\min_w \mathcal{L}_\rho(w^{old}, \mu^{old})$ and update $\mu^{new}_e = \mu^{old}_e + \rho\, \widehat{rgt}_e(w^{new})$, $e \in E$. The detailed algorithmic specifications can be found in Algorithm 1. |
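
The augmented Lagrangian objective and dual update quoted in the Experiment Setup row reduce to a few lines. The sketch below is a minimal illustration, not the authors' code: the function names, the NumPy representation, and the toy regret/multiplier values are all assumptions; the leading term is the negated empirical revenue, since the method minimizes the loss.

```python
import numpy as np

def augmented_lagrangian(neg_revenue, rgt, mu, rho):
    """L_rho(w, mu) = -revenue + sum_e mu_e * rgt_e(w) + (rho/2) * sum_e rgt_e(w)^2.

    neg_revenue : scalar, negated empirical revenue of the current network w
    rgt         : per-constraint empirical ex-post regrets rgt_e(w), e in E
    mu          : Lagrange multipliers mu_e
    rho         : weight of the quadratic penalty term
    """
    rgt = np.asarray(rgt, dtype=float)
    return neg_revenue + float(np.dot(mu, rgt)) + 0.5 * rho * float(np.sum(rgt ** 2))

def dual_update(mu, rgt, rho):
    """Multiplier ascent step: mu_e <- mu_e + rho * rgt_e(w_new)."""
    return np.asarray(mu, dtype=float) + rho * np.asarray(rgt, dtype=float)

# Hypothetical numbers: two regret constraints, revenue 3.0.
loss = augmented_lagrangian(neg_revenue=-3.0, rgt=[0.1, 0.2], mu=[1.0, 2.0], rho=4.0)
mu_new = dual_update([1.0, 2.0], [0.1, 0.2], rho=4.0)
```

As the regrets shrink toward zero, both penalty terms vanish and the loss approaches pure negated revenue, which is the intent of the constrained objective.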
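
The inner loop of Algorithm 1 runs gradient ascent on each bidder's misreport to approximate worst-case regret. The sketch below mimics that ascent on a hypothetical smooth one-bidder mechanism (allocation $g(v') = v'/(1+v')$, payment $p(v') = v'^2/2$); the mechanism, the utility form, and all function names are illustrative assumptions, not the paper's BundleNet networks.

```python
def utility(v_true, v_report):
    """Quasi-linear utility u(v'; v) = v * g(v') - p(v') for a toy
    differentiable mechanism (illustrative, not BundleNet)."""
    alloc = v_report / (1.0 + v_report)
    pay = 0.5 * v_report ** 2
    return v_true * alloc - pay

def utility_grad(v_true, v_report):
    """Analytic gradient d u / d v' = v / (1 + v')^2 - v'."""
    return v_true / (1.0 + v_report) ** 2 - v_report

def ascend_misreport(v_true, v0, gamma=0.1, steps=200):
    """Gamma-step gradient ascent on the misreport, mirroring the
    Gamma inner iterations of Algorithm 1."""
    v = v0
    for _ in range(steps):
        v = v + gamma * utility_grad(v_true, v)
    return v

# Starting from a low misreport, ascent finds a more profitable one.
v_star = ascend_misreport(v_true=1.0, v0=0.1)
```

The utility at the converged misreport upper-bounds the truthful utility, and the gap between the two is exactly the per-sample regret that the outer loop penalizes.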
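
The two evaluation metrics quoted in the Open Datasets row — average empirical ex-post regret over the $2n$ bidders in $R \cup S$, and empirical revenue averaged over the $L$ test profiles — are simple reductions. This sketch assumes $|R| = |S| = n$ and a payments array of shape $(L, |E|)$; the function names are hypothetical.

```python
import numpy as np

def empirical_regret(per_bidder_regret, n):
    """rgt_hat = (1 / 2n) * sum over i in R ∪ S of rgt_hat_i,
    assuming n brands and n stores (2n bidders total)."""
    return float(np.sum(per_bidder_regret)) / (2 * n)

def empirical_revenue(payments):
    """rev_hat = (1/L) * sum_l sum_e p_e(v^(l)).
    payments: shape (L, |E|), one payment per slate e per sample l."""
    payments = np.asarray(payments, dtype=float)
    return float(payments.sum(axis=1).mean())
```

A usage example: with per-bidder regrets `[0.2, 0.2, 0.3, 0.3]` and `n = 2`, the average regret is `0.25`; with payments `[[1, 2], [3, 4]]` over two samples, the empirical revenue is `5.0`.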