Alibi Explain: Algorithms for Explaining Machine Learning Models
Authors: Janis Klaise, Arnaud Van Looveren, Giovanni Vacanti, Alexandru Coca
JMLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The library comes with extensive documentation of both usage and theoretical background of methods, and a suite of worked end-to-end use cases. Figure 1 shows outputs for a selection of supported explanation algorithms. |
| Researcher Affiliation | Collaboration | Janis Klaise EMAIL, Arnaud Van Looveren EMAIL, Giovanni Vacanti EMAIL (Seldon Technologies Limited); Alexandru Coca EMAIL (University of Cambridge) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. Code Snippet 1 demonstrates API usage but is not pseudocode for an algorithm. |
| Open Source Code | Yes | We introduce Alibi Explain, an open-source Python library for explaining predictions of machine learning models (https://github.com/SeldonIO/alibi). |
| Open Datasets | Yes | Figure 1: A selection of supported explanation algorithms. Top left: Anchor explanation on image classification explaining the prediction 'Persian cat'. Top right: Integrated Gradients attributions on a sentiment prediction task explaining the prediction 'positive'. Bottom left: Counterfactual explanations of (a) MNIST digit classification and (b) Income classification. Bottom right: ALE feature effects for a logistic regression model on the Iris dataset. |
| Dataset Splits | No | The paper mentions datasets like MNIST and Iris for demonstrating explanation algorithms but does not provide specific details on how these datasets were split (e.g., percentages, sample counts, or methodology for training/test/validation sets). |
| Hardware Specification | No | The paper discusses software platforms and distributed computing (Ray, Seldon Core, KFServing) but does not provide any specific hardware details such as GPU or CPU models, memory specifications, or cloud instance types used for experiments. |
| Software Dependencies | No | The paper mentions "pytest under various Python versions" and references frameworks such as TensorFlow (Abadi et al., 2016), PyTorch (Paszke et al., 2019), and Ray (Moritz et al., 2018). However, it does not provide specific version numbers for these dependencies, offering only general references and the vague phrase "various Python versions". |
| Experiment Setup | No | The paper focuses on introducing an explainability library and demonstrating its algorithms. It does not provide specific experimental setup details, hyperparameters, or training configurations for any underlying machine learning models whose predictions are explained. |
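The review notes that the paper contains no pseudocode and demonstrates ALE feature effects only via Figure 1 (bottom right, logistic regression on Iris). As a concrete illustration of that algorithm family, below is a minimal, dependency-free sketch of first-order Accumulated Local Effects for a black-box predictor. The function name `ale_1d`, the quantile-binning details, and the toy linear model are our assumptions for illustration, not Alibi Explain's actual API.

```python
def ale_1d(predict, X, feature, n_bins=5):
    """Sketch of first-order Accumulated Local Effects (ALE) for one feature.

    `predict` is a black-box scalar predictor, `X` a list of feature rows
    (lists), `feature` a column index. Returns (bin_edges, centered ALE
    value at the right edge of each bin). Illustrative only -- this is not
    the Alibi Explain API.
    """
    values = sorted(row[feature] for row in X)
    # Quantile-based bin edges over the observed feature values.
    edges = [values[round(k * (len(values) - 1) / n_bins)]
             for k in range(n_bins + 1)]

    def with_value(row, v):  # copy of `row` with the feature set to v
        out = list(row)
        out[feature] = v
        return out

    accumulated, ale, counts = 0.0, [], []
    for k in range(1, n_bins + 1):
        lo, hi = edges[k - 1], edges[k]
        in_bin = [row for row in X
                  if (lo <= row[feature] if k == 1 else lo < row[feature])
                  and row[feature] <= hi]
        counts.append(len(in_bin))
        if in_bin:
            # Local effect: mean prediction change as the feature
            # moves across the bin, holding the other features fixed.
            accumulated += sum(predict(with_value(r, hi)) -
                               predict(with_value(r, lo))
                               for r in in_bin) / len(in_bin)
        ale.append(accumulated)

    # Center so the count-weighted mean effect over the data is zero.
    mean_ale = sum(c * a for c, a in zip(counts, ale)) / len(X)
    return edges, [a - mean_ale for a in ale]
```

As a sanity check, for a linear model `f(x) = 2*x0 + x1` the centered ALE curve for feature 0 rises by twice the bin width per bin, matching the coefficient.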