An Evolutionary Algorithm for Black-Box Adversarial Attack Against Explainable Methods

Authors: Phoenix Neale Williams, Jessica Schrouff, Lea Goetz

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on medical and natural image datasets demonstrate that our method outperforms state-of-the-art techniques, exposing critical vulnerabilities in current XAI systems and highlighting the need for more robust interpretability frameworks. ... Section 4 presents the evaluation metrics and experimental results, followed by a comprehensive analysis."
Researcher Affiliation | Industry | Phoenix Williams (EMAIL, GSK.ai); Jessica Schrouff (jessica.v.schrouff@gsk.com, GSK.ai); Lea Goetz (EMAIL, GSK.ai)
Pseudocode | No | The paper describes the proposed method in Section 3 and states, "The overall structure of our approach is summarized in Figure 6 within the appendix." Figure 6 is a diagram, not pseudocode or an algorithm block; the methodology is described in prose.
Open Source Code | No | "To encourage further research in this domain, we will publicly release our implementation, datasets, and evaluation scripts upon acceptance of the paper."
Open Datasets | Yes | "The HAM10000 dataset Tschandl et al. (2018) contains approximately 10,000 dermatology images... The Br35h dataset Hamada (2020) consists of 3,000 brain MRI scans... The COVID-QU-Ex dataset Tahir et al. (2021) provides 33,920 chest X-rays..."
Dataset Splits | Yes | "The datasets are divided into training, validation, and testing subsets with a ratio of 70%/10%/20%."
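The reported 70%/10%/20% split can be reproduced with a simple shuffled partition. This is an illustrative sketch only: the paper does not specify a random seed, stratification, or splitting library, so the `seed` parameter and the `split_dataset` helper below are assumptions.

```python
import random

def split_dataset(items, seed=0, ratios=(0.7, 0.1, 0.2)):
    """Shuffle items and partition them into train/val/test subsets
    with the paper's 70%/10%/20% ratio. Seeding and the absence of
    stratification are assumptions, not details from the paper."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]  # remainder goes to the test set
    return train, val, test
```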
Hardware Specification | Yes | "All experiments were executed on an NVIDIA RTX A6000 GPU system."
Software Dependencies | No | "Given the relatively small size of our medical datasets, we fine-tune models pre-trained on ImageNet using the PyTorch library Paszke et al. (2019)." No specific version number for PyTorch is provided, nor are other software dependencies mentioned with versions.
Experiment Setup | Yes | "Each model undergoes fine-tuning over 10 epochs, with a batch size of 32 and a learning rate of 1 × 10⁻⁴, utilizing the Adam optimizer Kingma & Ba (2015) and cross-entropy loss. ... our approach involves three adjustable parameters: ε, N, and MaxDiameter. The specific values for these parameters are listed in Table 4, with justification provided in Section 4.5."
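The reported fine-tuning setup (10 epochs, batch size 32, learning rate 1 × 10⁻⁴, Adam, cross-entropy loss) can be sketched as a standard PyTorch training loop. The `fine_tune` function below is a hypothetical reconstruction under those hyperparameters; the paper does not disclose its schedule, augmentation, or validation logic, and the batch size is fixed when constructing the `DataLoader` rather than here.

```python
import torch
from torch import nn

def fine_tune(model, train_loader, epochs=10, lr=1e-4, device="cpu"):
    """Minimal fine-tuning loop matching the reported hyperparameters:
    10 epochs, Adam with lr = 1e-4, cross-entropy loss. Everything beyond
    those reported values (device handling, no scheduler, no augmentation)
    is an assumption for illustration."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for inputs, labels in train_loader:
            inputs, labels = inputs.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(inputs), labels)
            loss.backward()
            optimizer.step()
    return model
```

In practice the `model` would be an ImageNet-pretrained network (e.g. from `torchvision.models`) with its classification head replaced to match the dataset's class count; the loop itself is the same.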