Procurement Auctions via Approximately Optimal Submodular Optimization

Authors: Yuan Deng, Amin Karbasi, Vahab Mirrokni, Renato Paes Leme, Grigoris Velegkas, Song Zuo

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we empirically evaluate the welfare performance and running time trade-offs of different mechanisms on a publicly available coverage problem. Our instances are constructed using a bipartite graph from SNAP (https://snap.stanford.edu/data/wiki-Vote.html). ... We generate 10000 random instances per (n, s). ... Figure 1 shows the running time comparison of different methods and Figure 2 compares the welfare outcomes.
Researcher Affiliation | Collaboration | 1Google Research, 2Yale University. Correspondence to: Yuan Deng <EMAIL>, Grigoris Velegkas <EMAIL>, Song Zuo <EMAIL>.
Pseudocode | Yes | Algorithm 1: A meta algorithm A = (G) for submodular optimization; Algorithm 2: A feasible mechanism construction for a given meta algorithm A; Algorithm 3: A meta algorithm Ao = (G) for online submodular optimization; Algorithm 4: Descending auction with a demand oracle D and a step size of ε; Algorithm 5: Distorted greedy algorithm (Harshaw et al., 2019); Algorithm 6: Stochastic distorted greedy algorithm (Harshaw et al., 2019); Algorithm 7: Noisy distorted greedy algorithm; Algorithm 8: A posted-price mechanism construction for a given meta algorithm Ao; Algorithm 9: Descending auction construction for a given meta algorithm Ao and a step size of ε; Algorithm 10: A faster implementation of Algorithm 1 when G has a diminishing-return structure; Algorithm 11: A faster implementation of Algorithm 2 when G has a diminishing-return structure
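The paper's pseudocode is not reproduced in this report. As a hedged illustration of the central subroutine, the following is a minimal Python sketch of the distorted greedy algorithm of Harshaw et al. (2019) (Algorithm 5 above) for maximizing g(S) − c(S), where g is a monotone submodular value function and c is a modular cost; the function names and the cardinality-budget interface are our assumptions, not taken from the paper.

```python
def distorted_greedy(ground_set, g, c, k):
    """Distorted greedy (Harshaw et al., 2019) for max g(S) - c(S),
    with g monotone submodular, c modular, and k greedy iterations."""
    S = set()
    for i in range(k):
        # Distortion factor down-weights the submodular gain early on.
        factor = (1 - 1.0 / k) ** (k - i - 1)
        best, best_gain = None, 0.0
        for e in ground_set - S:
            gain = factor * (g(S | {e}) - g(S)) - c(e)
            if gain > best_gain:
                best, best_gain = e, gain
        # Only add an element if its distorted marginal profit is positive.
        if best is not None:
            S.add(best)
    return S
```

The shrinking factor (1 − 1/k)^(k−i−1) prevents the submodular term from dominating the cost term in early iterations, which is what yields Harshaw et al.'s guarantee of g(S) − c(S) ≥ (1 − 1/e)·g(OPT) − c(OPT).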
Open Source Code | No | The paper does not provide an explicit statement about releasing source code for the methodology or a link to a code repository.
Open Datasets | Yes | Our instances are constructed using a bipartite graph from SNAP (https://snap.stanford.edu/data/wiki-Vote.html).
Dataset Splits | No | The paper describes generating '10000 random instances per (n, s)' by randomly sampling subsets and scaling costs. It does not provide training/validation/test splits for a machine learning model, since the evaluation compares mechanisms over generated instances.
Hardware Specification | No | The paper does not mention specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments or computations.
Software Dependencies | No | The paper mentions solving the optimization problem using a 'mixed integer program (MIP)' but does not specify the solver or its version number.
Experiment Setup | Yes | To create instances with varying sizes and value-to-cost ratios, for each pair (n, s), we randomly sample subsets of the sets of size n and scale their costs using κ ~ U[s, s²]. We generate 10000 random instances per (n, s).
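The paper does not include instance-generation code. A minimal sketch of the sampling procedure quoted above, assuming a simple dict-based data layout; the function name, signature, and layout are our illustrative assumptions:

```python
import random

def make_instance(sets, costs, n, s, rng=random):
    """Sample one random coverage instance as described in the paper:
    draw n of the available sets and scale their costs by a single
    multiplier kappa ~ U[s, s^2]."""
    chosen = rng.sample(list(sets), n)      # n random set identifiers
    kappa = rng.uniform(s, s * s)           # uniform cost multiplier
    return {v: (sets[v], kappa * costs[v]) for v in chosen}
```

Repeating this 10000 times per (n, s) pair, as the paper states, would reproduce one cell of the experimental grid.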