Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
A Learning Theoretic Perspective on Local Explainability
Authors: Jeffrey Li, Vaishnavh Nagarajan, Gregory Plumb, Ameet Talwalkar
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we validate our theoretical results empirically and show that they reflect what can be seen in practice. We verify empirically on UCI Regression datasets that our results non-trivially reflect the two types of generalization in practice. |
| Researcher Affiliation | Collaboration | Jeffrey Li (University of Washington); Vaishnavh Nagarajan, Gregory Plumb (Carnegie Mellon University); Ameet Talwalkar (Carnegie Mellon University & Determined AI) |
| Pseudocode | No | The paper describes algorithmic procedures but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | For both experiments, we use several regression datasets from the UCI collection (Dua & Graff, 2017) |
| Dataset Splits | Yes | Specifically, we split the original test data into two halves, using only the first half for explanation training and the second for explanation testing. |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware used for running the experiments. |
| Software Dependencies | No | The paper mentions using neural networks and linear models but does not provide specific software dependencies with version numbers. |
| Experiment Setup | No | The paper states that neural networks were trained 'with the same setup as in (Plumb et al., 2020)' and mentions using 'linear models' and 'empirical MNF minimizer', but does not provide specific hyperparameter values or detailed training configurations within the main text. |