Diffusion Models for Attribution

Authors: Xiongren Chen, Jiuyong Li, Jixue Liu, Lin Liu, Stefan Peters, Thuc Duy Le, Wentao Gao, Xiaojing Du, Anthony Walsh

AAAI 2025

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In the experimental section, we perform a comprehensive comparison of our approach against several benchmark methods. First, we provide an intuitive comparison based on the generated attribution maps. Then, we evaluate the performance of each method across multiple quantitative metrics. |
| Researcher Affiliation | Collaboration | ¹University of South Australia ²Green Triangle Forest Industries Hub EMAIL, EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: Training αθ for optimizing z |
| Open Source Code | No | The paper does not provide an explicit statement about releasing its source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | We randomly selected 1,000 images from ImageNet (Deng et al. 2009) and conducted the aforementioned experiments. The statistical results are shown in Table 1. |
| Dataset Splits | No | The paper mentions using 1,000 images from ImageNet for experiments and evaluating performance metrics, but it does not specify explicit training, validation, or test dataset splits. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running its experiments, such as exact GPU/CPU models or processor types. |
| Software Dependencies | No | The paper mentions using Captum (Kokhlikyan et al. 2020) and PyTorch (Paszke et al. 2019) toolkits but does not specify their exact version numbers. |
| Experiment Setup | Yes | For Integrated Gradients, the n_steps parameter was set to 200. In the IBA and Input IBA methods, the β_feat parameter was set to 10. Additionally, for handling the second bottleneck parameter Λ in Input IBA, we set β_input to 20 and performed 60 iterations. For HSIC-Attribution, the grid size was set to 7. |
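The Integrated Gradients baseline above is computed via Captum in the paper; as a minimal sketch of what the n_steps=200 setting controls, the snippet below approximates the IG path integral with a Riemann sum. The quadratic toy model and its analytic gradient are illustrative assumptions, not the authors' network.

```python
# Sketch of Integrated Gradients: IG_i = (x_i - b_i) * ∫_0^1 ∂F/∂x_i(b + a(x-b)) da,
# approximated with an n_steps-point Riemann sum along the straight-line path.
# grad_fn and the toy quadratic model below are assumptions for illustration.

def integrated_gradients(grad_fn, x, baseline, n_steps=200):
    diff = [xi - bi for xi, bi in zip(x, baseline)]
    avg_grad = [0.0] * len(x)
    for k in range(1, n_steps + 1):
        alpha = k / n_steps  # right-endpoint Riemann sum over the path
        point = [bi + alpha * di for bi, di in zip(baseline, diff)]
        g = grad_fn(point)
        avg_grad = [a + gi / n_steps for a, gi in zip(avg_grad, g)]
    return [di * ai for di, ai in zip(diff, avg_grad)]

# Toy model F(x) = sum(x_i^2), so the gradient is simply 2 * x_i.
grad_quadratic = lambda p: [2.0 * v for v in p]

attr = integrated_gradients(grad_quadratic, x=[3.0, 4.0], baseline=[0.0, 0.0])
# Completeness check: attributions should sum (approximately) to
# F(x) - F(baseline) = 25, with the gap shrinking as n_steps grows.
print(attr)
```

Raising n_steps tightens the Riemann-sum approximation; the paper's choice of 200 is a common accuracy/cost trade-off for this estimator.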