On Designing the Optimal Integrated Ad Auction in E-commerce Platforms

Authors: Yuchao Ma, Weian Li, Yuhan Wang, Zitian Guo, Yuejia Dou, Qi Qi, Changyuan Yu

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate the effectiveness of JINTER Net using both synthetic data and a real dataset, and our experimental results show that it significantly outperforms baseline models across multiple metrics. ... Experiments: In this section, we conduct a series of experiments to validate the superiority of JINTER Net in an integrated ad system.
Researcher Affiliation | Collaboration | ¹Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China; ²School of Software, Shandong University, Jinan, China; ³Baidu Inc., Beijing, China
Pseudocode | No | The paper describes the JINTER Net architecture and training procedure in text and with a schematic diagram (Figure 1), but it does not include a distinct pseudocode or algorithm block.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code, nor does it provide a link to a code repository.
Open Datasets | Yes | Avito dataset: the Avito public dataset, sourced from the platform avito.ru, includes user search logs over a period of 26 days, covering more than 4.2 million users and 36 million items across over 112 million page views (PVs).
Dataset Splits | Yes | For the synthetic data, we generate a training sample of 640,000 value profiles and a testing sample of 25,600 profiles. Both the training and testing sets use a minibatch size of 128. ... For our experiments, we partition the dataset as follows: 17 days of records serve as the training set, 3 days of records are used for validation, and 3 days of records are reserved for testing.
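The 17/3/3-day chronological partition quoted above can be sketched as follows. The `(day, record)` representation and the function name are illustrative assumptions, not details from the paper:

```python
def split_by_day(records, train_days=17, val_days=3, test_days=3):
    """Partition (day, record) pairs chronologically into train/val/test.

    Hypothetical sketch: the paper's 26-day Avito log is assumed to be
    indexed by a 0-based day number; the first 17 days go to training,
    the next 3 to validation, and the final 3 to testing.
    """
    train, val, test = [], [], []
    for day, rec in records:
        if day < train_days:
            train.append(rec)
        elif day < train_days + val_days:
            val.append(rec)
        elif day < train_days + val_days + test_days:
            test.append(rec)
    return train, val, test
```

A chronological split (rather than a random one) matches the quoted day-based partition and avoids leaking future interactions into training.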
Hardware Specification | No | "Our experiments were run on a Linux server equipped with NVIDIA Graphics Processing Unit (GPU) cores." This statement identifies the hardware type (NVIDIA GPUs) but does not provide specific model numbers or other detailed specifications.
Software Dependencies | No | The paper mentions the use of an Adam optimizer but does not specify any software libraries or frameworks with version numbers (e.g., PyTorch 1.x, TensorFlow 2.x).
Experiment Setup | Yes | Both the training and testing sets use a minibatch size of 128. During the training of JINTER Net, we update the network weights w_t for each minibatch using the Adam optimizer with a learning rate of 0.001. ... We initialize 100 misreports and run gradient ascent on them for 2,000 iterations to obtain 100 empirical regrets. ... In the metric Rev + αGMV, the hyperparameter α is set to 1 across the different settings.
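The empirical-regret evaluation quoted above (many random misreport initializations improved by gradient ascent) can be sketched with a toy utility. The actual JINTER Net mechanism and its utility functions are not reproduced here; the single-dimensional report, the `[0, 1]` report space, and the finite-difference gradient are simplifying assumptions:

```python
import random


def estimate_regret(true_value, utility, n_init=100, iters=2000, lr=0.01):
    """Empirical regret: the largest utility gain a bidder can find by
    misreporting, estimated by gradient ascent from n_init random starts.

    `utility(report, true_value)` is a toy stand-in for the utility a
    bidder derives from the learned mechanism; the paper's setting uses
    100 misreports and 2,000 gradient-ascent iterations per profile.
    """
    truthful = utility(true_value, true_value)
    best = truthful
    for _ in range(n_init):
        r = random.uniform(0.0, 1.0)  # random initial misreport in [0, 1]
        for _ in range(iters):
            eps = 1e-4  # central finite-difference gradient in the report
            g = (utility(r + eps, true_value)
                 - utility(r - eps, true_value)) / (2 * eps)
            r = min(1.0, max(0.0, r + lr * g))  # ascend, project to [0, 1]
        best = max(best, utility(r, true_value))
    return max(0.0, best - truthful)  # regret is non-negative by definition
```

For a truthful mechanism the estimated regret should be near zero, while a manipulable one yields a strictly positive value; averaging such per-profile regrets over a test set gives the empirical-regret metric.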