Participatory Budgeting Project Strength via Candidate Control

Authors: Piotr Faliszewski, Łukasz Janeczko, Dušan Knop, Jan Pokorný, Šimon Schierreich, Mateusz Słuszniak, Krzysztof Sornat

IJCAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental We provide theoretical and experimental results. First, we study the complexity of candidate control for four well-known voting rules, depending on how the costs of the projects are encoded (in binary, in unary, or as unit costs, meaning that each project costs the same amount). We give an overview of our results in Table 1. All our NP-hardness results also imply #P-hardness for the respective counting problems, where we ask for the number of solutions (e.g., the number of ways in which we can ensure the victory of a given project by deleting a given number of others). This is interesting because solving such problems is necessary for estimating the probability that a project wins if a randomly selected set of projects is deleted. On the experimental side, we provide an analysis of real-world participatory budgeting instances, showing what one can learn about them via candidate control. To this end, we introduce several performance measures and demonstrate their usefulness in an extensive analysis of instances from Pabulib [Faliszewski et al., 2023]. The full version of the paper and the code for our experiments are available at https://github.com/Project-PRAGMA/PB-control-IJCAI-2025.
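The candidate-control operation described above can be illustrated with a small brute-force sketch. This is not the paper's implementation: `greedy_av` and `can_win_by_deleting` are hypothetical names, the rule is a simplified GreedyAV-style greedy, and the instance encoding (cost and approval-count dictionaries) is an assumption.

```python
from itertools import combinations

def greedy_av(projects, approvals, budget):
    # GreedyAV-style rule (sketch): consider projects in decreasing
    # approval-count order, funding each one that still fits the budget.
    winners, remaining = [], budget
    for p in sorted(projects, key=lambda q: (-approvals[q], q)):
        if projects[p] <= remaining:
            winners.append(p)
            remaining -= projects[p]
    return winners

def can_win_by_deleting(target, r, projects, approvals, budget):
    # Brute-force candidate control: is there a set of at most r other
    # projects whose deletion makes `target` a winner? Exponential in r,
    # consistent with the NP-hardness results reported in the paper.
    others = [p for p in projects if p != target]
    for k in range(r + 1):
        for removed in combinations(others, k):
            rest = {p: c for p, c in projects.items() if p not in removed}
            if target in greedy_av(rest, approvals, budget):
                return True
    return False
```

For example, with three equal-cost projects and a budget covering two of them, the least-approved project becomes a winner once a single more-approved rival is deleted.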
Researcher Affiliation Academia 1 AGH University of Krakow, 2 Czech Technical University in Prague
Pseudocode No The paper describes algorithms and their complexity but does not include any clearly labeled pseudocode blocks or algorithms.
Open Source Code Yes The full version of the paper and the code for our experiments are available at https://github.com/Project-PRAGMA/PB-control-IJCAI-2025.
Open Datasets Yes In our experiments, we analyze 543 approval-based PB instances from Pabulib [Faliszewski et al., 2023] and explore how different performance measures based on this operation can help with understanding and explanation of outcomes for proposers of losing projects and PB election organizers.
Dataset Splits No The paper analyzes real-world data instances from Pabulib and discusses outcomes for losing projects, but it does not specify training, test, or validation splits, nor does it describe cross-validation setups. The experiments focus on analyzing existing instances rather than training models.
Hardware Specification Yes The experiments were run on computers with two AMD EPYC 7H12, 64-core, 2.6 GHz CPUs, and 256 GB DDR4 3200MT/s RAM.
Software Dependencies No We use the same Python implementation for rules GREEDYAV, GREEDYCOST, and PHRAGMÉN as in [Boehmer et al., 2024]. For EQUAL-SHARES, we use our own implementation in C++ that significantly outperforms the available implementations.
Experiment Setup Yes For each rule, every instance, and every combination of r ∈ {1, 2, 3} projects in this instance, we determined the winners after deleting these r projects. Evaluating the instances with our rules took us the following number of core-hours: 38000 for PHRAGMÉN, 900 for both GREEDYAV and GREEDYCOST, and 5000 for our C++ implementation of EQUAL-SHARES.
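The exhaustive evaluation described above (recomputing winners after deleting every combination of r projects) can be sketched as follows. This is an illustrative outline, not the paper's code: `deletion_winner_counts` is a hypothetical name, and `greedy_cost` is a toy cheapest-first stand-in for a PB rule, not the paper's GREEDYCOST implementation.

```python
from itertools import combinations
from collections import Counter

def greedy_cost(projects, budget):
    # Toy stand-in rule: fund the cheapest projects first while they fit.
    winners, remaining = [], budget
    for p in sorted(projects, key=lambda q: (projects[q], q)):
        if projects[p] <= remaining:
            winners.append(p)
            remaining -= projects[p]
    return winners

def deletion_winner_counts(projects, budget, winner_fn, r):
    # For every set of exactly r deleted projects, recompute the winners
    # and tally how often each surviving project wins; such tallies are
    # one way to build deletion-based performance measures.
    counts, total = Counter(), 0
    for removed in combinations(sorted(projects), r):
        rest = {p: c for p, c in projects.items() if p not in removed}
        for w in winner_fn(rest, budget):
            counts[w] += 1
        total += 1
    return counts, total
```

Since each instance requires C(m, r) winner recomputations for m projects, the cost of the full sweep grows quickly with r, which is consistent with the tens of thousands of core-hours reported above.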