GEFA: A General Feature Attribution Framework Using Proxy Gradient Estimation

Authors: Yi Cai, Thibaud Ardoin, Gerhard Wunder

ICML 2025

Reproducibility Variable Result LLM Response
Research Type Experimental "Compared to traditional sampling-based Shapley value estimators, GEFA avoids the potential information waste incurred when computing marginal contributions, thereby improving explanation quality, as demonstrated in quantitative evaluations across various settings." (Section 5: Experiments)
Researcher Affiliation Academia "Department of Mathematics and Computer Science, Freie Universität Berlin, Berlin, Germany. Correspondence to: Yi Cai <EMAIL>."
Pseudocode Yes "Algorithm 1: GEFA Explanation Scheme; Algorithm 2: Smoothing-Enhanced Mask Sampling"
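The report only names the two algorithms; as a rough illustration of the general idea (a "proxy gradient" for a black-box model estimated from random mask queries), the sketch below implements a generic zeroth-order estimator. It is an assumption for illustration only, not the authors' Algorithm 1 or the smoothing-enhanced sampler of Algorithm 2; the function name `proxy_gradient` and the correlation-based scoring are choices made here.

```python
import numpy as np

def proxy_gradient(f, x, baseline, n_queries=500, p=0.5, rng=None):
    """Illustrative per-feature attribution for a black-box scalar model f.

    Each query masks a random feature subset of x toward the baseline;
    features are scored by correlating mask bits with model outputs,
    serving as a crude gradient proxy (not GEFA's actual estimator).
    """
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    masks = rng.random((n_queries, d)) < p                    # True = keep feature
    outputs = np.array([f(np.where(m, x, baseline)) for m in masks])
    # Centered cross-correlation between feature presence and output.
    return (masks - masks.mean(0)).T @ (outputs - outputs.mean()) / n_queries
```

For a linear model f(v) = w·v with a zero baseline, this estimate is proportional to w ⊙ x, so the sign and relative magnitude of each feature's attribution match its true contribution.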
Open Source Code Yes 2Code is available at: https://github.com/caiy0220/GEFA
Open Datasets Yes "Three datasets are adopted for text classification tasks: Amazon Review Polarity (McAuley & Leskovec, 2013), SST-2, and QNLI (Wang et al., 2019). The image classification task is set up with ImageNet (Russakovsky et al., 2015)."
Dataset Splits Yes Without losing generality, we adopted a lightweight model for image classification and downsampled the dataset into 2000/400/400 partitions for training, validation, and test sets to ensure feasibility and efficiency.
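The 2000/400/400 split sizes come directly from the quote above; how the downsampling was performed is not specified, so the helper below is a plain random-permutation sketch under that assumption (the paper may have used class-balanced or otherwise stratified sampling instead).

```python
import numpy as np

def downsample_splits(n_total, sizes=(2000, 400, 400), seed=0):
    """Partition a dataset of n_total items into disjoint
    train/val/test index arrays of the given sizes (illustrative only)."""
    n_train, n_val, n_test = sizes
    assert n_train + n_val + n_test <= n_total
    idx = np.random.default_rng(seed).permutation(n_total)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:n_train + n_val + n_test]
    return train, val, test
```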
Hardware Specification Yes "Processor: Intel i9-10980XE, 18 cores; Memory: 32GB DDR4; GPU: NVIDIA RTX A5500, 24GB"
Software Dependencies Yes "The primary packages were NumPy 1.26.4, PyTorch 2.5.0, and Torchvision 0.20.0. The CUDA version was 12.2 for GPU support."
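One way to reproduce these reported versions is a pinned environment file; the fragment below is an illustrative sketch matching the versions quoted above, not the repository's actual requirements file.

```
# requirements.txt (illustrative pins matching the reported versions)
numpy==1.26.4
torch==2.5.0
torchvision==0.20.0
```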
Experiment Setup Yes "For all test cases, the query budget for the black-box explainers is 500, given the relatively smaller feature space; the interpolation step for IG is set to 50. The query budget of the black-box approaches is increased to 5000 due to the considerably larger input feature spaces, which are 299×299 and 224×224 for InceptionV3 and ViT, respectively."
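The arithmetic behind the budget scaling above can be made explicit: the image feature spaces are orders of magnitude larger than the 500-query budget used for text, which motivates the 5000-query budget. The dictionary names below are illustrative, not identifiers from the released code.

```python
# Pixel counts for the two image models' input resolutions.
inception_pixels = 299 * 299   # InceptionV3 input resolution
vit_pixels = 224 * 224         # ViT input resolution

# Query budgets as reported: 500 for text, 5000 for images.
query_budget = {"text": 500, "inception_v3": 5000, "vit": 5000}
```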