Towards Robustness and Explainability of Automatic Algorithm Selection
Authors: Xingyu Wu, Jibin Wu, Yu Zhou, Liang Feng, Kay Chen Tan
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this study, we introduce a causal framework for algorithm selection, DAG-AS, which models the underlying mechanisms that determine algorithm suitability, addressing the limitations of existing methods. We demonstrate the superiority of DAG-AS in terms of accuracy, robustness, and explainability using the ASlib benchmark. Experimental results on the ASlib benchmark demonstrate that our model outperforms traditional techniques in both robustness and explainability. Our analysis of the causal graph underscored the importance of considering algorithm features and causal mechanisms. |
| Researcher Affiliation | Academia | 1Department of Data Science and Artificial Intelligence, The Hong Kong Polytechnic University, Hong Kong SAR, China 2Department of Computing, The Hong Kong Polytechnic University, Hong Kong SAR, China 3College of Computer Science, Chongqing University, Chongqing, China. Correspondence to: Jibin Wu <EMAIL>. |
| Pseudocode | No | The paper describes the model framework using equations (2)-(13) and provides a proof in Appendix B, but does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The implementation of DAG-AS is available at https://github.com/wuxingyu-ai/DAG-AS. |
| Open Datasets | Yes | This study employs the ASlib (Algorithm Selection Library) benchmark (Bischl et al., 2016), one of the most widely used benchmarks in the field of algorithm selection, to evaluate various algorithm selection methods; it provides a unified dataset with problem instances from diverse domains along with their corresponding algorithm performance data. |
| Dataset Splits | Yes | Each algorithm selection scenario was repeated 10 times. The training set consisted of 80% of the samples randomly selected from the dataset. The batch sizes for training and testing were set to 1000 and 100, respectively. |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | The overall loss function is the weighted average of the causal learning loss and algorithm selection loss: L = α·L_reconstruction + β·L_sparsity + γ·L_acyclicity + δ·L_selection (13), where α, β, γ, and δ are hyperparameters. In the first experiment, we analyzed the balance between the reconstruction loss and the two causal learning constraints by varying α and β, while fixing γ = 1. The second experiment focused on the balance between the causal learning loss and the algorithm selection loss by adjusting γ, with β = 1 and α = 0.0001 kept constant. The batch sizes for training and testing were set to 1000 and 100, respectively. |
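The quoted loss in Eq. (13) can be sketched as a simple weighted sum. This is a minimal illustration, not the authors' implementation: the component loss values below are placeholders, and the default δ is an assumption (the paper's excerpt fixes α = 0.0001, β = 1, and varies γ, but does not state δ for that setting).

```python
# Minimal sketch of Eq. (13): L = α·L_recon + β·L_sparsity + γ·L_acyclicity + δ·L_selection.
# Component losses here are illustrative scalars; in DAG-AS they would be
# computed from the reconstruction, sparsity, acyclicity, and selection objectives.

def total_loss(l_reconstruction, l_sparsity, l_acyclicity, l_selection,
               alpha=1e-4, beta=1.0, gamma=1.0, delta=1.0):
    """Combine the four loss terms with hyperparameters alpha, beta, gamma, delta.

    alpha and beta defaults follow the paper's second experiment; delta=1.0
    is a hypothetical placeholder.
    """
    return (alpha * l_reconstruction
            + beta * l_sparsity
            + gamma * l_acyclicity
            + delta * l_selection)

# Example with made-up loss values
loss = total_loss(l_reconstruction=250.0, l_sparsity=0.4,
                  l_acyclicity=0.02, l_selection=1.3)
print(round(loss, 4))  # 0.025 + 0.4 + 0.02 + 1.3 = 1.745
```

In practice these terms would be differentiable tensors and the weighted sum would be backpropagated jointly, so the hyperparameters trade off causal-structure learning against selection accuracy.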