Computing Voting Rules with Improvement Feedback

Authors: Evi Micha, Vasilis Varsamis

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We complement our theoretical findings with experimental results, providing further insights into the practical implications of improvement feedback for preference aggregation. Lastly, in Section 6, we compare the two types of feedback through simulations. Interestingly, contrary to the theoretical results, t-improvement feedback queries turn out to be more efficient in some cases for implementing rules such as Copeland or Borda, which are learnable from pairwise-comparison feedback but are not implementable from t-improvement feedback in the theoretical worst case.
Researcher Affiliation | Academia | Thomas Lord Department of Computer Science, University of Southern California, Los Angeles, California, USA. Correspondence to: Evi Micha <EMAIL>, Vasilis Varsamis <EMAIL>.
Pseudocode | No | The paper includes mathematical definitions, theorems, and proofs but does not present any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | The code for the experimental part can be found at https://github.com/VasilisVar00/Computing-Voting-Rules-with-Improvement-Feedback
Open Datasets | No | The paper describes using synthetic data generated from models like "impartial culture (IC)", the "Mallows model", and the "Plackett-Luce (PL) model". It does not provide concrete access information (link, DOI, repository) for any specific dataset used in the experiments; rather, it describes the generative models themselves.
Dataset Splits | No | The paper mentions generating data based on models, setting the number of candidates m = 20, and varying the number of agents from 50 to 1000 in increments of 50. However, it does not specify any training, validation, or test splits for these generated data instances.
Hardware Specification | No | The paper describes conducting simulations and experiments but does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used for these computations.
Software Dependencies | No | The paper states: "For our experiments, we used the Python package pref-voting." While it names a software package, it does not provide a specific version number, which is required for a 'Yes' answer.
Experiment Setup | Yes | For all the experiments, we set the number of candidates m = 20 and vary the number of agents from 50 to 1000 in increments of 50. Here, we illustrate the results for the uniform improvement feedback distribution, and in Appendix I, we surprisingly show that all three improvement feedback distributions behave identically. We also set t = 5, and in Appendix I, we show that the results are quantitatively similar for different values of t. Each experiment is averaged over 500 iterations.
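The generative model and voting rules named above can be sketched in a self-contained example: sampling preference profiles from a Plackett-Luce model and computing Borda and Copeland winners. This is an illustrative sketch, not the authors' code (their experiments use the pref-voting package); the candidate weights, random seed, and small instance size (m = 5, n = 200, rather than the paper's m = 20 and n up to 1000) are assumptions chosen for demonstration.

```python
import random
from itertools import combinations

def sample_plackett_luce(weights, rng):
    """Sample one ranking: repeatedly draw the next-ranked candidate
    from the remaining ones with probability proportional to its weight."""
    remaining = list(range(len(weights)))
    ranking = []
    while remaining:
        pick = rng.choices(remaining, weights=[weights[c] for c in remaining], k=1)[0]
        ranking.append(pick)
        remaining.remove(pick)
    return ranking

def borda_scores(profile, m):
    """Each voter gives m-1 points to their top candidate, m-2 to the next, etc."""
    scores = [0] * m
    for ranking in profile:
        for pos, c in enumerate(ranking):
            scores[c] += m - 1 - pos
    return scores

def copeland_scores(profile, m):
    """1 point per pairwise majority win, 0.5 per pairwise tie."""
    wins = [[0] * m for _ in range(m)]
    for ranking in profile:
        pos = {c: i for i, c in enumerate(ranking)}
        for a, b in combinations(range(m), 2):
            if pos[a] < pos[b]:
                wins[a][b] += 1
            else:
                wins[b][a] += 1
    scores = [0.0] * m
    for a, b in combinations(range(m), 2):
        if wins[a][b] > wins[b][a]:
            scores[a] += 1
        elif wins[b][a] > wins[a][b]:
            scores[b] += 1
        else:
            scores[a] += 0.5
            scores[b] += 0.5
    return scores

rng = random.Random(0)
m, n = 5, 200                                 # small instance for illustration
weights = [2.0 ** -i for i in range(m)]       # hypothetical PL utilities
profile = [sample_plackett_luce(weights, rng) for _ in range(n)]
# Candidate 0 has the largest weight, so it should win under both rules.
print("Borda winner:", max(range(m), key=lambda c: borda_scores(profile, m)[c]))
print("Copeland winner:", max(range(m), key=lambda c: copeland_scores(profile, m)[c]))
```

Impartial culture corresponds to the special case where all weights are equal, so every ranking is equally likely; the Mallows model would instead perturb a fixed reference ranking with a dispersion parameter.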