The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus
Authors: Anna Hedström, Philine Lou Bommer, Kristoffer Knutsen Wickstrøm, Wojciech Samek, Sebastian Lapuschkin, Marina MC Höhne
TMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness of our framework through a series of experiments, targeting various open questions in XAI such as the selection and hyperparameter optimisation of quality estimators. |
| Researcher Affiliation | Academia | Anna Hedström1,6, Philine Bommer1,6, Kristoffer K. Wickstrøm3, Wojciech Samek1,2,4, Sebastian Lapuschkin4, Marina M.-C. Höhne2,3,5,6. 1 Department of Electrical Engineering and Computer Science, TU Berlin 2 BIFOLD Berlin Institute for the Foundations of Learning and Data 3 Department of Physics and Technology, UiT The Arctic University of Norway 4 Department of Artificial Intelligence, Fraunhofer Heinrich-Hertz-Institute 5 Department of Computer Science, University of Potsdam 6 UMI Lab, Leibniz Institute of Agricultural Engineering and Bioeconomy e.V. (ATB) |
| Pseudocode | No | The paper describes methods mathematically and in prose but does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our work is released under an open-source license1 to serve as a development tool for XAI and Machine Learning (ML) practitioners to verify and benchmark newly constructed quality estimators in a given explainability context. 1 Code is available at the GitHub repository: https://github.com/annahedstroem/MetaQuantus. |
| Open Datasets | Yes | We use four image classification datasets for our experiments: ILSVRC-15 (i.e., ImageNet) (Russakovsky et al., 2015), MNIST (LeCun et al., 2010), fMNIST (Xiao et al., 2017) and customised-MNIST (i.e., cMNIST) (Bykov et al., 2022) |
| Dataset Splits | No | The paper specifies random sampling of test samples (e.g., 'For MNIST and fMNIST, we randomly sample 1024 test samples') and states that models are trained, but does not provide explicit details on the training, validation, and test set splits (e.g., percentages or counts for all splits) needed to reproduce the data partitioning. |
| Hardware Specification | Yes | All experiments were computed on GPUs where we used NVIDIA A100-PCIE 40GB for the toy datasets and NVIDIA A100-PCIE 80GB and Tesla V100S-PCIE-32GB for the ImageNet dataset. |
| Software Dependencies | No | The paper mentions using the 'Quantus library' and 'PyTorch' but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | The training of all models is performed in a similar fashion; employing SGD optimisation with a standard cross-entropy loss, an initial learning rate of 0.001 and momentum of 0.9. All models are trained for 20 epochs each. |
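The quoted experiment setup fully determines the optimiser update rule. As a minimal sketch (this is not the authors' code; the function name and list-based parameter representation are illustrative), one step of SGD with momentum under the stated hyperparameters, lr = 0.001 and momentum = 0.9, looks like:

```python
def sgd_momentum_step(params, grads, velocity, lr=0.001, momentum=0.9):
    """One SGD-with-momentum update on flat lists of parameters.

    Mirrors the update used by SGD with momentum: the velocity
    accumulates a decayed running sum of gradients, and parameters move
    against the velocity scaled by the learning rate.
    """
    # v_t = momentum * v_{t-1} + g_t
    new_velocity = [momentum * v + g for v, g in zip(velocity, grads)]
    # p_t = p_{t-1} - lr * v_t
    new_params = [p - lr * v for p, v in zip(params, new_velocity)]
    return new_params, new_velocity


# Example: a single parameter starting at 1.0 with gradient 0.5 and zero
# initial velocity moves to 1.0 - 0.001 * 0.5 = 0.9995.
params, velocity = sgd_momentum_step([1.0], [0.5], [0.0])
```

Repeating such steps over mini-batches for 20 epochs, with gradients taken from a cross-entropy loss, reproduces the training recipe the paper describes.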