EnergyCompress: A General Case Base Learning Strategy

Authors: Fadi Badra, Esteban Marquer, Marie-Jeanne Lesot, Miguel Couceiro, David Leake

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results on 18 benchmarks, comparing EnergyCompress to 5 reference algorithms for case base maintenance, support the benefit of the proposed strategy.
Researcher Affiliation | Academia | Fadi Badra (1), Esteban Marquer (2), Marie-Jeanne Lesot (3), Miguel Couceiro (4) and David Leake (5). (1) Université Sorbonne Paris Nord, Sorbonne Université, INSERM, LIMICS, 93000 Bobigny, France; (2) CRIL CNRS, Université d'Artois, France; (3) Sorbonne Université, CNRS, LIP6, F-75005 Paris, France; (4) IST, University of Lisbon, INESC-ID, Lisbon, Portugal; (5) Luddy School, Indiana University, Bloomington, IN, USA
Pseudocode | Yes | Algorithm 1: EnergyCompress case deletion procedure. C(c, θ, Tref) depends on Pred and the associated energy E_θ^Pred.
Open Source Code | Yes | Code for reproducing the experiments is available at: https://github.com/EMarquer/MeATCube/tree/maintenance_benchmark
Open Datasets | Yes | The case base learning methods CNNR, ENN, XLDIS, LSSm, IB3, and EnergyCompress were tested on 18 UCI datasets.
Dataset Splits | Yes | We apply 10-fold cross-validation with stratified splitting. For each dataset D, each fold is constructed by sampling three distinct subsets CB0, Tref, and Ttest from D. The candidate base CB0 serves as the initial case base. The reference set Tref is used by EnergyCompress to learn CBf from CB0. The test set Ttest is not used for learning, but only to measure the accuracy of each CBP algorithm before and after learning took place. The sizes of these sets are |Tref| = |Ttest| = min(100, 0.2·|D|) and |CB0| = min(50, |D| − (|Tref| + |Ttest|)).
Hardware Specification | No | The paper does not specify the hardware used to run its experiments: no CPU or GPU models or other hardware details are given.
Software Dependencies | No | The SVMs use the implementation and default parameters from scikit-learn to fit the kernel and to estimate the probabilities.
Experiment Setup | Yes | The learning algorithm Learn is applied to learn CBf from CB0, with a hinge margin λ = 0.1. The similarity measures σS and σR remain fixed in the process; only the case base CBf is learned. ... For each classification task, the initial parameters θ0 = (σS, σR, CB0) are chosen as follows. The outcome space R is the set of class labels ri, and σR is the class-membership similarity measure, such that σR(ri, rj) = 1 if ri = rj, and 0 otherwise. For CoAT, CtCoAT, and k-NN, the similarity measure σS is chosen to be the decreasing function σS(si, sj) = e^(−d(si, sj)) of the Euclidean distance d. ... and k-NN (with k = 7)
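The two similarity measures quoted in the Experiment Setup row can be written out directly. A minimal NumPy sketch (the names `sigma_R` and `sigma_S` are ours, not taken from the paper's code):

```python
import numpy as np

def sigma_R(r_i, r_j) -> float:
    """Class-membership similarity: 1 if the labels match, else 0."""
    return 1.0 if r_i == r_j else 0.0

def sigma_S(s_i: np.ndarray, s_j: np.ndarray) -> float:
    """Decreasing function of the Euclidean distance: exp(-d(s_i, s_j))."""
    return float(np.exp(-np.linalg.norm(s_i - s_j)))

print(sigma_R("cat", "cat"), sigma_R("cat", "dog"))  # 1.0 0.0
print(sigma_S(np.zeros(2), np.zeros(2)))             # 1.0
```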
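The size rule in the Dataset Splits row is compact enough to sketch. This assumes the candidate-base formula reads min(50, |D| − (|Tref| + |Ttest|)), i.e. with a minus sign between the two terms:

```python
def split_sizes(n: int) -> tuple[int, int, int]:
    """Return (|Tref|, |Ttest|, |CB0|) for a dataset of n samples."""
    n_ref = n_test = min(100, int(0.2 * n))
    n_cb0 = min(50, n - (n_ref + n_test))
    return n_ref, n_test, n_cb0

print(split_sizes(1000))  # (100, 100, 50)
print(split_sizes(150))   # (30, 30, 50)
```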
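The Software Dependencies row mentions fitting SVMs with scikit-learn's default parameters and probability estimates; with the defaults that means an RBF-kernel `SVC` with `probability=True` (the iris dataset below is only illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
# Defaults: RBF kernel, C=1.0; probability=True enables
# cross-validated Platt scaling, so predict_proba is available.
clf = SVC(probability=True).fit(X, y)
proba = clf.predict_proba(X[:1])
print(proba.shape)  # (1, 3)
```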
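Algorithm 1 itself is not reproduced in this report. As a generic illustration only (not the authors' procedure), reference-set-guided case deletion can be sketched as a greedy loop that removes a case whenever doing so strictly lowers a loss evaluated on Tref:

```python
from typing import Callable, Sequence

def greedy_case_deletion(cb: Sequence, loss: Callable, t_ref) -> list:
    """Generic sketch: repeatedly drop the case whose removal most
    reduces the loss on the reference set; stop when no removal helps."""
    cb = list(cb)
    improved = True
    while improved and len(cb) > 1:
        improved = False
        best_i, best_loss = None, loss(cb, t_ref)
        for i in range(len(cb)):
            cand_loss = loss(cb[:i] + cb[i + 1:], t_ref)
            if cand_loss < best_loss:
                best_i, best_loss = i, cand_loss
        if best_i is not None:
            del cb[best_i]
            improved = True
    return cb

# Toy loss: count cases absent from the reference set.
print(greedy_case_deletion([1, 2, 3], lambda c, t: sum(x not in t for x in c), [1, 2]))
# [1, 2]
```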