Iterated Belief Change as Learning

Authors: Nicolas Schwind, Katsumi Inoue, Sébastien Konieczny, Pierre Marquis

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "we present learning and inference algorithms suited to this learning model and we evaluate them empirically. Our findings highlight two key insights: first, that iterated belief change can be viewed as an effective form of online learning, and second, that the well-established axiomatic foundations of belief change operators offer a promising avenue for the axiomatic study of classification tasks. [...] We compare them to standard ML methods on benchmark datasets. Results show the improvement-based model slightly outperforms Naive Bayes and achieves better recall than most existing methods."
Researcher Affiliation | Academia | (1) National Institute of Advanced Industrial Science and Technology, Tokyo, Japan; (2) National Institute of Informatics, Tokyo, Japan; (3) Univ. Artois, CNRS, CRIL, Lens, France; (4) Institut Universitaire de France
Pseudocode | No | The paper describes its algorithms in prose, e.g., "we present learning and inference algorithms suited to this learning model" and "a learning algorithm (the computation of τ) and an inference algorithm (the prediction for any instance ωX ∈ ΩX given (D, τ))", but provides no structured pseudocode or algorithm blocks.
Open Source Code | Yes | "The proofs and code used to retrieve datasets and conduct experiments are available in [Schwind et al., 2025]." The cited entry reads: Nicolas Schwind, Katsumi Inoue, Sébastien Konieczny, and Pierre Marquis. Iterated belief change as learning: Supplementary material. https://github.com/nicolas-schwind/Iterated_Belief_Change_ML, 2025.
Open Datasets | Yes | "The experimental protocol involved selecting 58 binary classification datasets from the UCI repository, with each dataset containing up to 12,684 instances and up to 1,203 numerical or categorical features." (Footnote: https://archive.ics.uci.edu/datasets/)
Dataset Splits | Yes | "A 10-fold cross-validation has been conducted: each dataset was split into ten random samplings, with a 90%/10% division for training and test sets."
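The splitting protocol quoted above can be sketched with scikit-learn. This is an illustration only, not the authors' code (which lives in their repository); the toy data and variable names are assumptions:

```python
import numpy as np
from sklearn.model_selection import KFold

# Toy stand-in for one dataset: 100 instances, 4 features, binary labels.
X = np.arange(400).reshape(100, 4)
y = np.array([0, 1] * 50)

# Ten random folds: each iteration uses 90% of the data for training
# and the remaining 10% for testing, matching the quoted protocol.
kf = KFold(n_splits=10, shuffle=True, random_state=0)
fold_sizes = []
for train_idx, test_idx in kf.split(X):
    fold_sizes.append((len(train_idx), len(test_idx)))

print(fold_sizes[0])  # (90, 10): a 90%/10% train/test division
```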
Hardware Specification | No | The paper mentions a Python implementation and use of the scikit-learn library, and it reports running experiments, but it gives no details about the hardware used (e.g., CPU or GPU model, memory).
Software Dependencies | No | "We implemented in Python the improvement-based learning model ( +1,BH,mba) (simply denoted by onward). Its predictive performance was compared against the following standard ML models (learned using the scikit-learn library [Pedregosa et al., 2011] and considering default parameters)."
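Evaluating scikit-learn baselines "considering default parameters," as the quote describes, can be sketched as follows. The dataset and the two baseline models shown here are placeholder assumptions (the paper compares against its own list of standard ML methods on 58 UCI datasets):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Illustrative binary classification dataset, standing in for a UCI dataset.
X, y = load_breast_cancer(return_X_y=True)

# Baselines instantiated with scikit-learn default parameters, i.e., no
# hyperparameter tuning, which is what "default parameters" implies.
baselines = {
    "naive_bayes": GaussianNB(),
    "decision_tree": DecisionTreeClassifier(random_state=0),
}
for name, model in baselines.items():
    scores = cross_val_score(model, X, y, cv=10)  # 10-fold CV accuracy
    print(name, round(scores.mean(), 2))
```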
Experiment Setup | No | The paper states that the standard ML baselines were learned with the "default parameters" of the scikit-learn library, and it describes preprocessing (numerical attributes linearly re-scaled to integer values in [0, 10]; categorical attributes left unchanged) and the use of a modified weighted Hamming distance for its own model. However, it reports no specific hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) and no system-level training configuration beyond those defaults.
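The two preprocessing ingredients that are described (linear rescaling of numerical attributes to integers in [0, 10], and a weighted Hamming distance) can be sketched as below. This is a hedged reconstruction: the paper's distance is a *modified* weighted Hamming distance whose exact weighting is not given here, so the uniform default weights are an assumed placeholder:

```python
import numpy as np

def rescale_numeric(col):
    """Linearly map a numeric column onto integer values in [0, 10],
    matching the described preprocessing; constant columns map to 0."""
    col = np.asarray(col, dtype=float)
    lo, hi = col.min(), col.max()
    if hi == lo:
        return np.zeros(len(col), dtype=int)
    return np.rint((col - lo) / (hi - lo) * 10).astype(int)

def weighted_hamming(u, v, weights=None):
    """Weighted Hamming distance: sum of weights over positions where
    the two instances disagree. Uniform weights are a placeholder for
    the paper's modified weighting scheme."""
    u, v = np.asarray(u), np.asarray(v)
    if weights is None:
        weights = np.ones(len(u))
    return float(np.sum(weights * (u != v)))

ages = rescale_numeric([18, 30, 60])
print(ages.tolist())  # [0, 3, 10]
# Categorical values are compared as-is; only the mismatched position counts.
print(weighted_hamming([0, 3, "a"], [0, 10, "a"]))  # 1.0
```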