Boosting Causal Additive Models
Authors: Maximilian Kertel, Nadja Klein
JMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our simulation study supports the theoretical findings in low-dimensional settings and demonstrates that our high-dimensional adaptation is competitive with state-of-the-art methods. In addition, it exhibits robustness with respect to the choice of hyperparameters, thereby simplifying the tuning process. Section 5 provides an empirical evaluation of our method and benchmarks its performance to various state-of-the-art algorithms. |
| Researcher Affiliation | Collaboration | Maximilian Kertel, Technology Development Battery Cell, BMW Group, Munich, Germany; Nadja Klein, Scientific Computing Center, Karlsruhe Institute of Technology, Karlsruhe, Germany |
| Pseudocode | Yes | The algorithm using the AIC is outlined in Algorithm 1 in Appendix D. |
| Open Source Code | Yes | The code and the data-generation procedure are publicly available at https://github.com/mkrtl/BoostingDAGs. |
| Open Datasets | Yes | The code and the data-generation procedure are publicly available at https://github.com/mkrtl/BoostingDAGs. |
| Dataset Splits | Yes | Hereby, we split the data into a train and a hold-out set of the same size and monitor the mean squared error (MSE) on the hold-out set. |
| Hardware Specification | No | No specific hardware details (like CPU/GPU models, memory, or cloud instances) are mentioned in the paper's experimental setup or simulation study sections. |
| Software Dependencies | No | The paper does not specify version numbers for any software dependencies used in the experiments, such as programming languages, libraries, or frameworks. |
| Experiment Setup | Yes | Throughout, we set the step size ν = 0.3 and the penalty parameter λ = 0.01. While it is known that boosting is commonly robust with respect to the step size (as long as it is small enough), we find in Section 5.3.2 that our method DAGBoost is also robust against the specific choice of λ. We therefore refrain from further tuning. |
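The reported setup combines a small fixed step size with early stopping monitored on an equal-size hold-out set. The sketch below illustrates that generic recipe with plain componentwise L2-boosting; it is not the paper's DAGBoost (no causal ordering, no penalized additive base learners), and all function and variable names here are illustrative assumptions, with only ν = 0.3 and the 50/50 train/hold-out split taken from the text.

```python
import numpy as np

# Minimal componentwise L2-boosting sketch with hold-out monitoring.
# Assumptions: synthetic data, linear base learners, patience-based
# stopping; only nu = 0.3 and the equal-size split follow the paper.
rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=n)

# Split the data into a train and a hold-out set of the same size.
X_tr, X_ho = X[: n // 2], X[n // 2 :]
y_tr, y_ho = y[: n // 2], y[n // 2 :]

nu = 0.3                       # step size from the reported setup
f_tr = np.zeros(n // 2)        # current fit on the training set
f_ho = np.zeros(n // 2)        # current fit on the hold-out set
best_mse, patience = np.inf, 0

for _ in range(500):
    resid = y_tr - f_tr
    # Componentwise selection: fit one univariate least-squares base
    # learner per coordinate and keep the best residual reducer.
    betas = X_tr.T @ resid / np.einsum("ij,ij->j", X_tr, X_tr)
    losses = [np.sum((resid - X_tr[:, j] * betas[j]) ** 2) for j in range(p)]
    j = int(np.argmin(losses))

    f_tr += nu * betas[j] * X_tr[:, j]
    f_ho += nu * betas[j] * X_ho[:, j]

    mse = np.mean((y_ho - f_ho) ** 2)  # monitor MSE on the hold-out set
    if mse < best_mse - 1e-8:
        best_mse, patience = mse, 0
    else:
        patience += 1
        if patience >= 10:             # stop once hold-out MSE plateaus
            break

print(round(best_mse, 3))
```

Because only the best single coordinate is updated per iteration, the shrunken step ν keeps each update conservative, and the hold-out MSE supplies the stopping rule that the paper's AIC-based variant (Algorithm 1, Appendix D) replaces with an information criterion.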