Declarative Approaches to Outcome Determination in Judgment Aggregation

Authors: Ari Conati, Andreas Niskanen, Matti Järvisalo

JAIR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We provide an open-source implementation of the algorithms, and empirically evaluate them using real-world preference data. We compare the performance of our implementation to a recent approach which makes use of declarative solver technology for answer set programming (ASP). The results demonstrate that our approaches scale significantly beyond the reach of the ASP-based algorithms for all of the judgment aggregation rules considered.
Researcher Affiliation | Academia | Ari Conati EMAIL; Andreas Niskanen EMAIL; Matti Järvisalo EMAIL. Department of Computer Science, University of Helsinki, Finland.
Pseudocode | Yes | Algorithm 1: Generic MaxSAT-based algorithm for outcome determination. Algorithm 2: MaxSAT-based algorithm for outcome determination under the reversal scoring rule. Algorithm 3: SAT-based CEGAR for outcome determination under the Condorcet rule. Algorithm 4: SAT-based CEGAR for outcome determination under the ranked agenda rule. Algorithm 5: MaxSAT-based algorithm for outcome determination under the LexiMax rule.
Open Source Code | Yes | We provide an open-source implementation of all of the procedures for outcome determination developed in this work (https://bitbucket.org/coreo-group/satcha).
Open Datasets | Yes | We empirically evaluate the performance of our implementation on preference data from the PrefLib (Mattei & Walsh, 2013) reference library of real-world voting data arising from, e.g., elections and surveys. Specifically, we selected all strict total orderings available in https://www.preflib.org/static/data/types/soc.zip (as of March 13, 2024).
Dataset Splits | No | The paper uses entire preference profiles from the PrefLib library as individual test cases and, for each profile, checks whether an arbitrarily chosen first candidate is a winner. This does not involve traditional training/validation/test dataset splits.
Hardware Specification | Yes | All of the experiments reported on in this section were run on Intel Xeon E5-2670 CPUs and 57-GB memory under RHEL 8.5 using a per-instance 30-minute time and 16-GB memory limit.
Software Dependencies | Yes | As the MaxSAT solver, we employ the freely-available UWrMaxSat (version 1.5.3; Piotrów, 2020)... In the integer programming instantiations, we use the state-of-the-art commercial IP solver Gurobi (https://www.gurobi.com/) version 9.5.2... In the experiments, we use the state-of-the-art ASP solver Clingo (v5.6.1; Gebser et al., 2016).
Experiment Setup | Yes | All of the experiments reported on in this section were run on Intel Xeon E5-2670 CPUs and 57-GB memory under RHEL 8.5 using a per-instance 30-minute time and 16-GB memory limit. Specifically, we selected all strict total orderings available in https://www.preflib.org/static/data/types/soc.zip (as of March 13, 2024), filtering out those which have a (weak) Condorcet winner in order to guarantee removal of instances which can be considered trivial. This resulted in our final set consisting of 405 preference profile datasets as the basis of our benchmarks. Using this data, we benchmark the algorithm implementations on the winner determination problem, i.e., the problem of whether a given candidate is a winner in terms of a given input profile and aggregation rule, for each of the judgment aggregation rules considered in this work. Specifically, we determine whether the (arbitrarily chosen) first candidate is a winner for the input profile and rule.
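The benchmark data above comes from PrefLib's strict-complete-order (.soc) files. As a rough illustration of how such a profile could be read, here is a minimal parser sketch; it assumes the current PrefLib layout in which metadata lines begin with "#" and each data line has the form "count: c1,c2,...,cm" (number of voters holding that strict total order over candidate ids). This is an assumption about the file layout, not code from the paper's implementation.

```python
def parse_soc(path):
    """Parse a PrefLib .soc file into a list of (count, ranking) pairs.

    Assumed layout (not taken from the paper's code): lines starting
    with '#' are metadata; every other non-empty line reads
    'count: c1,c2,...,cm', a strict total order, most preferred first.
    """
    profile = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip metadata and blank lines
            count, ranking = line.split(":", 1)
            order = tuple(int(c) for c in ranking.split(","))
            profile.append((int(count), order))
    return profile
```

The resulting (count, ranking) pairs are the natural input for the pairwise-majority computations that the benchmark filtering step relies on.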
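The setup filters out profiles that have a (weak) Condorcet winner, i.e., a candidate that beats or ties every other candidate in pairwise majority comparisons. A minimal sketch of that check, assuming profiles given as (count, ranking) pairs with candidate ids 1..m (this representation is an assumption for illustration, not the paper's code):

```python
from itertools import combinations

def has_weak_condorcet_winner(profile, num_candidates):
    """Return True iff some candidate weakly beats (beats or ties)
    every other candidate in pairwise majority comparisons.

    `profile` is a list of (count, ranking) pairs; each ranking is a
    tuple of candidate ids 1..num_candidates, most preferred first.
    """
    candidates = range(1, num_candidates + 1)
    # wins[a][b] = number of voters who rank a above b
    wins = {a: {b: 0 for b in candidates} for a in candidates}
    for count, ranking in profile:
        pos = {c: i for i, c in enumerate(ranking)}
        for a, b in combinations(candidates, 2):
            if pos[a] < pos[b]:
                wins[a][b] += count
            else:
                wins[b][a] += count
    return any(
        all(wins[a][b] >= wins[b][a] for b in candidates if b != a)
        for a in candidates
    )
```

Profiles for which this returns True are the "trivial" instances the benchmark set excludes; a Condorcet cycle (e.g. three voters with rankings 1>2>3, 2>3>1, 3>1>2) returns False and would be kept.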
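Algorithms 3 and 4 in the paper are CEGAR (counterexample-guided abstraction refinement) procedures for the Condorcet and ranked agenda rules. Purely as an illustration of the general CEGAR pattern, and not of the paper's specific SAT encodings, a generic skeleton of such a loop might look like this (the function names `solve_abstraction`, `is_spurious`, and `refine` are hypothetical placeholders):

```python
def cegar(solve_abstraction, is_spurious, refine):
    """Generic CEGAR loop (illustrative skeleton, not the paper's
    algorithm): repeatedly solve a relaxed abstraction of the problem;
    if the candidate solution fails the full check, add a refinement
    constraint ruling it out and re-solve.
    """
    constraints = []
    while True:
        candidate = solve_abstraction(constraints)
        if candidate is None:
            return None  # abstraction unsatisfiable: no solution exists
        counterexample = is_spurious(candidate)
        if counterexample is None:
            return candidate  # candidate survives the full check
        constraints.append(refine(counterexample))
```

In the SAT-based setting, `solve_abstraction` would be a (Max)SAT call on the current clause set and `refine` would add blocking clauses derived from the counterexample; the toy instantiation below just searches a finite domain for its largest multiple of 3.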