Explainable Neural Networks with Guarantee: A Sparse Estimation Approach

Authors: Antoine Ledent, Peng Liu

AAAI 2025

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "In this section, we present the experimental results on both synthetic and real data. We allocate 20% of the data to the test set in all experiments, with the hyperparameters tuned by cross-validation. Our evaluation of the proposed algorithm serves a dual purpose: first, to gauge its predictive power, and second, to assess its capability to retrieve the sparse set of true features accurately." |
| Researcher Affiliation | Academia | "Antoine Ledent and Peng Liu*, Singapore Management University, EMAIL, EMAIL" |
| Pseudocode | No | The paper describes the methodology in prose and mathematical equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper neither provides a link to a source code repository nor states that the code for the described methodology is released or available in supplementary materials. |
| Open Datasets | Yes | "Finally, we evaluate SparXnet on six real-life datasets, including adult income, breast cancer, credit risk, customer churn, heart disease, and recidivism." |
| Dataset Splits | Yes | "We allocate 20% of the data to the test set in all experiments, with the hyperparameters tuned by cross-validation." |
| Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not list software dependencies or version numbers (e.g., libraries, frameworks, or programming languages with their versions) used in the experiments. |
| Experiment Setup | Yes | "We use one pathway with six fully connected layers to learn the underlying data-generating process and identify the true feature. Each hidden layer consists of 128 nodes, followed by a dropout layer. We use Bayesian optimization to optimize three hyperparameters: dropout rate (between 0.1 and 0.5), learning rate (between 0.001 and 0.01), and temperature (between 0.1 and 100). The temperature is then slowly reduced to 1% of its initial value throughout a total training budget of 2000 iterations." |
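The temperature annealing described in the experiment setup (an initial value tuned between 0.1 and 100, slowly reduced to 1% of that value over a 2000-iteration budget) can be sketched as a decay schedule. This is a minimal illustration, not the authors' code: the function name and the geometric form of the decay are assumptions, since the report only states that the reduction is slow and ends at 1% of the initial temperature.

```python
def temperature_schedule(t0, step, total_steps=2000, final_frac=0.01):
    """Geometric decay from t0 down to final_frac * t0 over total_steps.

    Assumption: the paper says the temperature is "slowly reduced to 1%
    of its initial value" over 2000 iterations, but does not specify the
    schedule shape; a geometric (exponential) decay is one common choice.
    """
    return t0 * final_frac ** (step / total_steps)

# Example with a starting temperature of 10.0 (inside the tuned 0.1-100 range):
start = temperature_schedule(10.0, 0)      # 10.0 at iteration 0
end = temperature_schedule(10.0, 2000)     # 0.1 (1% of 10.0) at the last iteration
```

A linear schedule would satisfy the same endpoints; the geometric form simply decays by a constant factor per step, which matches "slowly reduced" in early iterations.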