Parallel Belief Contraction via Order Aggregation

Authors: Jake Chandler, Richard Booth

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We propose a general method for extending serial iterated belief change operators to handle parallel change, based on an n-ary generalisation of Booth & Chandler's TeamQueue binary order aggregators. An axiomatic characterisation of this generalisation is provided, which will be of interest independently of the question of parallel change. The paper contains several theorems, propositions, and corollaries (e.g., Theorem 1, Proposition 1, Corollary 1) and focuses on theoretical constructs and proofs rather than empirical studies or data analysis.
Researcher Affiliation | Academia | Jake Chandler (1), Richard Booth (2); 1 La Trobe University, 2 Cardiff University. EMAIL, EMAIL
Pseudocode | No | The paper presents its methods through definitions, theorems, and logical notation, but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper makes no explicit statement about code availability, provides no links to code repositories, and does not mention code in supplementary materials for the methodology described.
Open Datasets | No | The paper is theoretical, focusing on belief contraction models and logical frameworks. It does not mention the use of any datasets for experimental evaluation, nor does it provide access information for any open datasets.
Dataset Splits | No | The paper is theoretical and does not involve experimental evaluation on datasets; accordingly, there is no mention of dataset splits.
Hardware Specification | No | This is a theoretical paper and does not describe any experimental setup or hardware used for computation.
Software Dependencies | No | This is a theoretical paper focused on logical frameworks and mathematical proofs. It does not describe any software dependencies with specific version numbers for an experimental implementation.
Experiment Setup | No | The paper is theoretical in nature, presenting logical constructs and proofs, and therefore does not describe any experimental setup, hyperparameters, or training settings.
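Although the paper itself contains no pseudocode, the idea behind TeamQueue-style order aggregation can be illustrated concretely. The sketch below is a hypothetical rendering, not the authors' exact construction: it implements one simple member of the n-ary family, where total preorders are represented as ordered partitions (lists of sets of equally ranked elements) over a common domain, and each output rank is the union of the minimal surviving elements under every input ordering. The function name `team_queue_merge` and the synchronous dequeueing policy are illustrative assumptions.

```python
def team_queue_merge(orders):
    """Aggregate n total preorders over a common domain, each given as an
    ordered partition (list of sets, best rank first), in a TeamQueue-like
    fashion: repeatedly take, from every input ordering, its minimal cell
    restricted to the not-yet-placed elements, and make the union of these
    cells the next rank of the aggregate ordering."""
    remaining = set().union(*(cell for order in orders for cell in order))
    merged = []
    while remaining:
        rank = set()
        for order in orders:
            for cell in order:
                survivors = cell & remaining
                if survivors:  # minimal non-empty cell of this ordering
                    rank |= survivors
                    break
        merged.append(rank)
        remaining -= rank
    return merged

# Two orderings that disagree on 'a' vs 'b' but agree that 'c' is worst:
o1 = [{'a'}, {'b'}, {'c'}]
o2 = [{'b'}, {'a'}, {'c'}]
merged = team_queue_merge([o1, o2])  # [{'a', 'b'}, {'c'}]
```

In this toy run, the two orderings' best elements ('a' and 'b') are pooled into a single top rank, while 'c' stays strictly worse, which matches the intuitive "each ordering dequeues its front team" reading of the binary TeamQueue construction extended to n inputs.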