Interpretable DNFs
Authors: Martin C. Cooper, Imane Bousdira, Clément Carbonnel
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Section 6, we present a practical algorithm for learning nested k-DNFs, and show empirically that classifiers constructed this way are competitive with decision trees on various datasets. ... 6 Experiments ... Table 2: Test accuracy (%) of depth-k decision trees and nested k-DNFs |
| Researcher Affiliation | Academia | 1IRIT, University of Toulouse, France 2IRIT, INP Toulouse, France 3LIRMM, CNRS, University of Montpellier, France |
| Pseudocode | Yes | Algorithm 1 Construct matrix. Input: k, dataset. Output: matrix L. 1: for i = 0 to k−1 do 2: for j = 0 to k−1 do 3: if i = 0 then 4: limit = 0 5: else 6: limit = min(k−j, ⌈2(n−j)/i⌉ − 1) 7: end if \\ Ec1(t): nb. of examples in class 1 that satisfy t \\ Ec0(t): nb. of examples in class 0 that satisfy t 8: Calculate G = Ec1(ℓi,0 ∧ … ∧ ℓi,j) − Ec0(ℓi,0 ∧ … ∧ ℓi,j) for each candidate literal not in Li,0:j ∪ L0:i,0:limit 9: Take as ℓi,j the literal that gives the greatest G 10: end for 11: end for 12: return matrix L |
| Open Source Code | Yes | 2The code is available in this GitHub repository |
| Open Datasets | Yes | A collection of datasets from the UCI repository and Kaggle are considered, which have been used to evaluate a wide range of learning algorithms. ... Table 1 shows, for each dataset, the number of data examples and the number of boolean features. |
| Dataset Splits | Yes | For a given dataset, 80% of the data was used for training and 20% for testing, except for the Monks datasets, where the test set is provided separately and comprises all 432 possible combinations of the feature values. The average performance across five split experiments is reported. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments. It only discusses the algorithms and their performance. |
| Software Dependencies | No | The paper mentions 'CART [Breiman et al., 1984]' as a comparison algorithm but does not specify any software dependencies with version numbers for their own implementation or the CART algorithm used. |
| Experiment Setup | No | The paper describes the heuristic algorithm and parameters like 'k' (maximum depth for decision trees), but lacks specific hyperparameters commonly found in machine learning experimental setups such as learning rates, batch sizes, optimizers, or number of epochs. |
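To make the greedy construction in Algorithm 1 concrete, here is a minimal Python sketch. It fills a k×k matrix of literals row by row, at each cell picking the literal that maximizes the gain G = Ec1(conjunction so far) − Ec0(conjunction so far), i.e., class-1 coverage minus class-0 coverage. The `limit`-based exclusion set L0:i,0:limit from the paper is simplified here to forbidding reuse of a feature within the same row; this is an illustrative assumption, not the authors' exact rule.

```python
import numpy as np

def construct_matrix(k, X, y):
    """Greedy sketch of Algorithm 1: fill a k x k matrix of literals.

    X: boolean array of shape (n_examples, n_features); y: 0/1 labels.
    A literal is a pair (feature_index, polarity): polarity True means
    the feature must equal 1, False that it must equal 0.
    NOTE: the paper's limit-based exclusion sets are approximated by
    simply never reusing a feature within a row (hypothetical rule).
    """
    n_features = X.shape[1]
    literals = [(f, pol) for f in range(n_features) for pol in (True, False)]
    L = [[None] * k for _ in range(k)]

    def restrict(mask, lit):
        # Examples in `mask` that additionally satisfy literal `lit`.
        f, pol = lit
        return mask & (X[:, f] == pol)

    for i in range(k):
        mask = np.ones(len(X), dtype=bool)  # satisfies row-i conjunction so far
        used_features = set()
        for j in range(k):
            best_lit, best_gain = None, -np.inf
            for lit in literals:
                if lit[0] in used_features:
                    continue
                m = restrict(mask, lit)
                # G = Ec1 - Ec0 for the extended conjunction.
                gain = int((y[m] == 1).sum()) - int((y[m] == 0).sum())
                if gain > best_gain:
                    best_lit, best_gain = lit, gain
            L[i][j] = best_lit
            used_features.add(best_lit[0])
            mask = restrict(mask, best_lit)
    return L
```

Each row of the returned matrix is a conjunction (a term of the k-DNF); the greedy gain criterion mirrors step 8 of the pseudocode, while tie-breaking and the precise candidate-exclusion sets would need the authors' GitHub code to reproduce exactly.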