Towards Attributions of Input Variables in a Coalition
Authors: Xinhao Zheng, Huiqi Deng, Quanshi Zhang
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on synthetic data, NLP, image classification, and the game of Go validate our approach, demonstrating consistency with human intuition and practical applicability. |
| Researcher Affiliation | Academia | Shanghai Jiao Tong University. Correspondence to: Quanshi Zhang <EMAIL>. |
| Pseudocode | No | The paper includes theorems and mathematical formulations, but no clearly labeled 'Pseudocode' or 'Algorithm' blocks, nor structured steps formatted like code. |
| Open Source Code | No | The paper mentions the KataGo model as an 'open-source Go engine' but does not state that the authors are releasing their own source code for the methodology described in this paper. |
| Open Datasets | Yes | We finetuned the pre-trained BERT-large (Devlin et al., 2018) and LLaMA (Touvron et al., 2023) models on the SST-2 dataset (Socher et al., 2013) for sentiment classification. ... We conducted experiments on VGG-11 (Simonyan & Zisserman, 2014) and ResNet-20 (He et al., 2016) on the MNIST (LeCun et al., 1998) and CIFAR-10 (Krizhevsky, 2012) datasets |
| Dataset Splits | No | The paper mentions using the SST-2, MNIST, and CIFAR-10 datasets, but it does not specify the training, validation, or test splits (e.g., percentages or exact counts) used for these datasets. It refers to Appendix I for experimental settings, but that appendix is not included in the extracted text. |
| Hardware Specification | No | The paper discusses training various DNNs (BERT-large, LLaMA, VGG-11, ResNet-20) and performing experiments, but it does not provide specific details about the hardware used, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions several models and frameworks, including BERT-large, LLaMA, VGG-11, ResNet-20, and KataGo, along with their respective citations. However, it does not specify version numbers for these software components or any other ancillary software dependencies required to reproduce the experiments. |
| Experiment Setup | No | The paper states it 'finetuned' models and 'trained' DNNs, but it does not explicitly provide experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings. It refers to Appendix I for experimental settings, but that appendix is not included in the extracted text. |