Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Regional Tree Regularization for Interpretability in Deep Neural Networks

Authors: Mike Wu, Sonali Parbhoo, Michael Hughes, Ryan Kindle, Leo Celi, Maurizio Zazzi, Volker Roth, Finale Doshi-Velez | pp. 6413-6421

AAAI 2020 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Across many datasets, including two healthcare applications, we show our approach delivers simpler explanations than other regularization schemes without compromising accuracy. Specifically, our regional regularizer finds many more desirable optima compared to global analogues."
Researcher Affiliation | Collaboration | "Mike Wu (1), Sonali Parbhoo (2,3), Michael C. Hughes (4), Ryan Kindle (5), Leo Celi (6), Maurizio Zazzi (7), Volker Roth (2), Finale Doshi-Velez (3); (1) Stanford University, EMAIL; (2) University of Basel, EMAIL; (3) Harvard University SEAS, EMAIL; (4) Tufts University, EMAIL; (5) Massachusetts General Hospital, EMAIL; (6) Massachusetts Institute of Technology, EMAIL; (7) University of Siena, EMAIL"
Pseudocode | Yes | "Algorithm 1 APL (Wu et al., 2018)" (An illustrative APL computation follows the table.)
Open Source Code | Yes | "PyTorch implementation is available at https://github.com/mhw32/regional-tree-regularizer-public."
Open Datasets | Yes | "We now apply regional tree regularization to four datasets from the UC Irvine repository (Dheeru and Karra Taniskidou, 2017). [...] The critical care task, performed with the MIMIC dataset (Johnson et al., 2016), [...] The HIV task, performed with the EUResist dataset (Zazzi et al., 2011)"
Dataset Splits | Yes | "convergence is measured by APL and accuracy on a validation set that does not change for at least 10 epochs" (A convergence-check sketch follows the table.)
Hardware Specification | No | "Computations were supported by the FAS Research Computing Group at Harvard and sciCORE (http://scicore.unibas.ch/) scientific computing core facility at University of Basel." This describes the computing facilities, not specific hardware components such as CPU/GPU models or memory.
Software Dependencies | No | The paper mentions software such as Scikit-Learn and PyTorch but does not specify version numbers for these or any other software dependencies.
Experiment Setup | Yes | "We train each regularizer with an exhaustive set of strengths: λ = 0.0001, 0.0005, 0.001, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0." Three runs with different random seeds were used to avoid local optima. (A sweep sketch follows the table.)
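
The Pseudocode row above refers to the paper's Algorithm 1, which computes the average path length (APL) of a decision tree trained to mimic the network's predictions. Below is a minimal sketch of that metric using scikit-learn; the function name and defaults are illustrative, not the authors' released implementation, and it omits the surrogate network the paper uses to make APL differentiable during training.

```python
from sklearn.tree import DecisionTreeClassifier

def average_path_length(inputs, predictions, min_samples_leaf=10):
    """Fit a decision tree to mimic the model's hard predictions, then
    return the mean number of nodes on each sample's decision path."""
    tree = DecisionTreeClassifier(min_samples_leaf=min_samples_leaf)
    tree.fit(inputs, predictions)
    # decision_path gives a sparse (n_samples, n_nodes) indicator of the
    # nodes each sample visits; row sums are per-sample path lengths.
    node_indicator = tree.decision_path(inputs)
    return node_indicator.sum(axis=1).mean()

# Toy usage: `y_hat` would be the network's hard labels on held-out data.
X = [[0.1, 1.0], [0.9, 0.2], [0.4, 0.5], [0.8, 0.8]]
y_hat = [0, 1, 0, 1]
print(average_path_length(X, y_hat, min_samples_leaf=1))
```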
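The Dataset Splits row quotes the convergence criterion: validation APL and accuracy must be unchanged for at least 10 epochs. A hedged sketch of such a check is below; the tolerance and helper name are assumptions, since the paper does not specify how "does not change" was operationalized.

```python
import math

def has_converged(history, patience=10, tol=1e-4):
    """history: per-epoch (val_accuracy, val_apl) tuples.
    Returns True once neither metric has moved by more than `tol`
    over the last `patience` epochs."""
    if len(history) <= patience:
        return False
    # Compare the last `patience` epochs against the epoch just before them.
    ref_acc, ref_apl = history[-(patience + 1)]
    return all(
        math.isclose(acc, ref_acc, abs_tol=tol)
        and math.isclose(apl, ref_apl, abs_tol=tol)
        for acc, apl in history[-patience:]
    )
```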
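The Experiment Setup row lists the full grid of regularization strengths with three runs per strength. The loop below just makes that grid concrete; `train_model` is a hypothetical stand-in for the authors' training code, and the specific seed values are not reported in the paper.

```python
import itertools

LAMBDAS = [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.02, 0.05,
           0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0]
SEEDS = [0, 1, 2]  # three runs per strength; actual seeds are unreported

def train_model(regularization_strength, seed):
    """Hypothetical entry point standing in for the released code;
    returns a placeholder record instead of training a network."""
    return {"lambda": regularization_strength, "seed": seed}

results = {
    (lam, seed): train_model(regularization_strength=lam, seed=seed)
    for lam, seed in itertools.product(LAMBDAS, SEEDS)
}
```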