Learning to Manipulate Under Limited Information

Authors: Wesley H. Holliday, Alexander Kristoffersen, Eric Pacuit

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We trained over 100,000 neural networks of 26 sizes to manipulate against 8 different voting methods, under 6 types of limited information, in committee-sized elections with 5-21 voters and 3-6 candidates. We find that some voting methods, such as Borda, are highly manipulable by networks with limited information, while others, such as Instant Runoff, are not, despite being quite profitably manipulated by an ideal manipulator with full information."
Researcher Affiliation | Academia | "1University of California, Berkeley; 2University of Maryland; EMAIL, EMAIL, EMAIL"
Pseudocode | No | The paper describes implementation details and processes in regular paragraph text (e.g., the "Implementation Details" section), but it does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code: https://github.com/epacuit/ltm
Open Datasets | Yes | "Finally, we used the normalized Mallows model from (Boehmer, Faliszewski, and Kraiczy 2023, §2.2) with the dispersion parameter set to φ = .8."
Dataset Splits | No | "For a given generation, we used the same initialization of MLP weights and the same training, validation, and evaluation profiles for every MLP for n voters and m candidates. Across generations, we varied the initialization of MLP weights and used different training, validation, and evaluation profiles... we measure the average profitability on a validation batch of 4,096 elections." While the validation batch size is mentioned, overall training/validation/test split sizes (e.g., percentages or total counts) are not explicitly provided for reproducibility.
Hardware Specification | Yes | "Training and evaluation were parallelized across nine local Apple computers with Apple silicon, the most powerful equipped with an M2 Ultra with 24-core CPU, 76-core GPU, and 128GB of unified memory, running macOS 13, as well as up to sixteen cloud instances with Nvidia A6000 or A10 GPUs running Linux Ubuntu 18.04."
Software Dependencies | Yes | "All code was written in Python using PyTorch, version 2.0.1, and the pref_voting library (pypi.org/project/pref-voting/), version 0.4.42 or later."
Experiment Setup | Yes | "For the final training run reported here, we use a batch size of 512 and a learning rate of 6e-3. We train all models for at least 220 iterations and then terminate training with an early stopping rule: after every 20 iterations, we measure the average profitability on a validation batch of 4,096 elections. If 10 validation steps pass without an improvement of at least .001 in average profitability of the submitted ranking, we terminate training."
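To make the Borda-manipulability finding quoted above concrete, here is a hypothetical worked example (not taken from the paper): a voter who sincerely ranks b first can flip the Borda winner from a to b by "burying" the sincere winner at the bottom of the ballot. The helper names and the small four-candidate profile are illustrative assumptions.

```python
from collections import defaultdict

def borda_scores(profile):
    """profile: list of rankings, best candidate first.
    Each ranking awards m-1, m-2, ..., 0 points by position."""
    m = len(profile[0])
    scores = defaultdict(int)
    for ranking in profile:
        for pos, cand in enumerate(ranking):
            scores[cand] += m - 1 - pos
    return dict(scores)

def borda_winner(profile):
    scores = borda_scores(profile)
    return max(scores, key=scores.get)  # ties broken arbitrarily

# sincere profile: a wins with 8 points to b's 7
sincere = [["a", "b", "c", "d"],
           ["a", "b", "c", "d"],
           ["b", "a", "c", "d"]]

# voter 3 buries a last: a drops to 6 points and b wins with 7
manipulated = [["a", "b", "c", "d"],
               ["a", "b", "c", "d"],
               ["b", "c", "d", "a"]]
```

Burying exploits the fact that Borda counts every position on the ballot, which is one intuition for why it is more manipulable than elimination-based rules like Instant Runoff.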
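The Mallows-distributed election profiles mentioned under Open Datasets can be sketched with the standard repeated-insertion sampler. Note this is the plain Mallows model with a fixed dispersion φ; the paper uses the normalized variant of Boehmer, Faliszewski, and Kraiczy, which rescales the dispersion, and the function name here is an assumption.

```python
import random

def sample_mallows(reference, phi, rng=random):
    """Sample one ranking from a Mallows distribution centered on `reference`
    via repeated insertion: candidate i is inserted at position j with
    probability proportional to phi**(i - j), so phi near 0 concentrates
    mass on the reference ranking and phi = 1 is uniform."""
    ranking = []
    for i, cand in enumerate(reference):
        weights = [phi ** (i - j) for j in range(i + 1)]
        j = rng.choices(range(i + 1), weights=weights)[0]
        ranking.insert(j, cand)
    return ranking

# a small 5-voter profile at the paper's dispersion value phi = .8
rng = random.Random(0)
profile = [sample_mallows(["a", "b", "c", "d"], 0.8, rng) for _ in range(5)]
```

At φ = 0 the sampler returns the reference ranking itself, which is a convenient sanity check.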
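The termination rule quoted under Experiment Setup can be expressed as a generic training loop. The function and parameter names below are invented for illustration, and `validate` stands in for measuring average profitability on a 4,096-election validation batch.

```python
def train_with_early_stopping(train_step, validate, min_iters=220,
                              check_every=20, patience=10, min_delta=1e-3):
    """Run train_step until the quoted early-stopping rule fires:
    after min_iters iterations, validate every check_every iterations
    and stop once patience checks pass without an improvement of at
    least min_delta over the best validation score so far."""
    best = float("-inf")
    checks_without_improvement = 0
    it = 0
    while True:
        train_step(it)
        it += 1
        if it >= min_iters and it % check_every == 0:
            score = validate()  # e.g., avg profitability on a validation batch
            if score >= best + min_delta:
                best = score
                checks_without_improvement = 0
            else:
                checks_without_improvement += 1
                if checks_without_improvement >= patience:
                    return it, best

# demo: a no-op training step and a flat validation score trigger the
# patience rule 10 checks after the first validation at iteration 220
stopped_at, best = train_with_early_stopping(lambda it: None, lambda: 0.0)
```

Keeping the rule as pure control flow like this makes it easy to unit-test separately from the model and optimizer.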