Parallel Belief Revision via Order Aggregation

Authors: Jake Chandler, Richard Booth

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We draw on recent work on iterated parallel contraction to offer a method for extending serial iterated belief revision operators to handle parallel change. Based on a family of order aggregators known as Team Queue aggregators, it provides a principled way to recover the independently plausible properties that can be found in the literature, without yielding the more dubious ones. Due to space limitations, proofs are provided in a longer version of the paper, which can be accessed online at https://arxiv.org/abs/2505.13914.
Researcher Affiliation | Academia | Jake Chandler (La Trobe University), Richard Booth (Cardiff University). EMAIL, EMAIL
Pseudocode | No | The paper provides formal definitions (Definitions 2 and 3) of aggregators and their properties, which are mathematical specifications rather than structured pseudocode or algorithm blocks. For instance, Definition 2 describes a Team Queue aggregator inductively: "⊕ is a Team Queue (TQ) aggregator iff, for each profile P = ⟨⪯₁, …, ⪯ₙ⟩, there exists a sequence ⟨a_P(i)⟩_{i∈ℕ} such that ∅ ≠ a_P(i) ⊆ {1, …, n} for each i, and the ordered partition ⟨T₁, T₂, …, T_m⟩ of indifference classes corresponding to ⊕(P) is constructed inductively as follows: T_i = ⋃_{j∈a_P(i)} min(⪯_j, W \ ⋃_{k<i} T_k), where m is minimal s.t. ⋃_{i≤m} T_i = W." This is a formal definition, not pseudocode.
Open Source Code | No | The paper makes no mention of open-source code. It states: "Due to space limitations, proofs are provided in a longer version of the paper, which can be accessed online at https://arxiv.org/abs/2505.13914", but this refers to proofs, not code.
Open Datasets | No | The paper is theoretical, focusing on belief revision theory, logical postulates, and formal models. It uses abstract concepts such as "belief state Ψ" and "sentences in L" and presents conceptual examples (e.g., Example 1, Example 2) as thought experiments, not as empirical datasets. No datasets are used or made available.
Dataset Splits | No | The paper is theoretical and involves no empirical experiments with datasets, so there is no mention or discussion of splits for training, validation, or testing.
Hardware Specification | No | The paper is theoretical and describes no experiments that would require specific hardware. There is no mention of GPUs, CPUs, or any other computing hardware specifications.
Software Dependencies | No | The paper focuses on logical frameworks and formal properties of belief revision operators. It describes no computational experiments or implementations that would require specific software or libraries with version numbers.
Experiment Setup | No | The paper describes no empirical experiments or their setup. No hyperparameters, training configurations, or system-level settings are mentioned.
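Although the paper itself supplies no pseudocode, the Team Queue construction quoted under Pseudocode can be sketched in Python. This is a minimal illustration, not code from the paper: worlds are assumed hashable, each total preorder ⪯_j is represented by its ordered partition of indifference classes (lowest tier first), and the names tq_aggregate, min_worlds, and schedule are all invented for this sketch.

```python
def min_worlds(order, remaining):
    """min(order, remaining): the first indifference class of `order`
    that meets `remaining`, restricted to `remaining`."""
    for cls in order:
        hit = cls & remaining
        if hit:
            return hit
    return set()

def tq_aggregate(profile, schedule):
    """Team Queue aggregation of a profile of total preorders.

    Each order in `profile` is an ordered list of disjoint sets
    (indifference classes) covering the same world set W.
    `schedule(i)` plays the role of a_P(i): it returns a nonempty set of
    order indices whose minimal remaining worlds are pooled into tier T_i.
    Returns the ordered partition [T_1, T_2, ...] of the aggregate order.
    """
    remaining = set().union(*profile[0])  # W, taken from the first order
    tiers = []
    i = 0
    while remaining:
        tier = set()
        for j in schedule(i):
            tier |= min_worlds(profile[j], remaining)
        tiers.append(tier)
        remaining -= tier
        i += 1
    return tiers

# Example: the "synchronous" schedule a_P(i) = {1, ..., n} pools every
# order's minimal remaining worlds at each step.
o1 = [{'a'}, {'b'}, {'c'}]  # a below b below c
o2 = [{'b'}, {'c'}, {'a'}]  # b below c below a
tiers = tq_aggregate([o1, o2], lambda i: {0, 1})  # tiers: {a, b}, then {c}
```

The schedule argument is what distinguishes members of the TQ family: different choices of a_P(i) yield different aggregators, while the inductive tier-by-tier construction stays the same.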