Fast Explanations via Policy Gradient-Optimized Explainer
Authors: Deng Pan, Nuno Moniz, Nitesh V. Chawla
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our framework on image and text classification tasks and the experiments demonstrate that our method reduces inference time by over 97 percent and memory usage by 70 percent compared to traditional model-agnostic approaches while maintaining high-quality explanations and broad applicability. |
| Researcher Affiliation | Academia | Lucy Family Institute for Data & Society, University of Notre Dame Notre Dame, IN 46556 USA EMAIL |
| Pseudocode | Yes | Algorithm 1 PPO for Fast Explanations |
| Open Source Code | No | The paper does not contain any explicit statements about releasing the code, nor does it provide any links to a code repository. The text mentions implementations for baselines (e.g., FastSHAP) but not for the authors' own method. |
| Open Datasets | Yes | For image classification, we use the ViT model [Dosovitskiy et al., 2020] fine-tuned on the ImageNet dataset [Deng et al., 2009] as the prediction model. ... For text classification, we use the BERT model [Devlin et al., 2018] fine-tuned on the SST2 dataset [Socher et al., 2013] for sentiment analysis. The FEX explainer is finetuned on the Movie Reviews [Zaidan and Eisner, 2008] dataset for one epoch with batch size 256. ... we use an annotated image segmentation dataset [Guillaumin et al., 2014] comprising 4,276 images across 445 categories. |
| Dataset Splits | Yes | Positive AUC and Negative AUC are evaluated on the ImageNet dataset ... These evaluations are conducted on a randomly selected subset of 5,000 images from the ImageNet validation set. ... FEX explainer is finetuned on the full ImageNet dataset with 1.3M samples (FEX-1.3M) or a subset of 50,000 samples (FEX-50k) for one epoch. ... For text classification, ... The FEX explainer is finetuned on the Movie Reviews [Zaidan and Eisner, 2008] dataset for one epoch with batch size 256. |
| Hardware Specification | Yes | fine-tuning ViT on 1.3M ImageNet samples for a single epoch takes approximately 5 hours on a single A100 GPU ... All experiments are conducted on the same machine with 8 CPU cores and 1 Nvidia A100 GPU. |
| Software Dependencies | No | The paper mentions using models like ViT and BERT, and tools like GradCAM and AttLRP, but does not specify version numbers for any software dependencies, programming languages, or libraries used in their implementation. |
| Experiment Setup | Yes | Unless otherwise specified, in all experiments, the g(x) is set to the same architecture as the predictor f, with appended MLP prediction heads, and the hyperparameters are set to λen = 10⁻⁵, λv = 0.5 and λkl = 1. ... For text classification, ... The FEX explainer is finetuned on the Movie Reviews [Zaidan and Eisner, 2008] dataset for one epoch with batch size 256. ... In our experiments, Ks are set to 100 for all model-agnostic baselines. ... the explainer is implemented as a U-Net generating a 14×14 heatmap. |
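The paper's Algorithm 1 applies PPO, with the quoted setup listing weights λen, λv, and λkl for the entropy, value, and KL terms. The sketch below is a minimal, generic PPO combined-loss function using those reported coefficients; it is not the authors' implementation, and the function name, entropy proxy, and KL approximation are illustrative assumptions.

```python
import numpy as np

def ppo_loss(logp_new, logp_old, advantages, values, returns,
             clip_eps=0.2, lam_en=1e-5, lam_v=0.5, lam_kl=1.0):
    """Clipped PPO surrogate plus weighted entropy, value, and KL terms.

    Coefficient defaults follow the paper's reported setting
    (lambda_en = 1e-5, lambda_v = 0.5, lambda_kl = 1); everything
    else is a standard PPO formulation, not taken from the paper.
    """
    # Probability ratio between the updated and the sampling policy.
    ratio = np.exp(logp_new - logp_old)
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    # Clipped surrogate objective (negated: we minimize a loss).
    policy_loss = -np.mean(np.minimum(ratio * advantages,
                                      clipped * advantages))
    # Critic regression toward empirical returns.
    value_loss = np.mean((values - returns) ** 2)
    # Crude entropy proxy from the sampled-action log-probs.
    entropy = -np.mean(logp_new)
    # Sample-based approximation of KL(old || new).
    kl = np.mean(logp_old - logp_new)
    return policy_loss + lam_v * value_loss - lam_en * entropy + lam_kl * kl
```

With identical old/new log-probs and a perfect critic, the ratio is 1 and only the small entropy bonus remains, which makes the weighting of the three auxiliary terms easy to sanity-check in isolation.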