Identification of Average Outcome under Interventions in Confounded Additive Noise Models
Authors: Muhammad Qasim Elahi, Mahsa Ghasemi, Murat Kocaoglu
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Simulation results indicate that our method can accurately estimate all AOIs in finite-sample settings, and we further demonstrate its practical significance using semi-synthetic data. |
| Researcher Affiliation | Academia | Muhammad Qasim Elahi, School of Electrical and Computer Engineering, Purdue University; Mahsa Ghasemi, School of Electrical and Computer Engineering, Purdue University; Murat Kocaoglu, School of Electrical and Computer Engineering, Purdue University |
| Pseudocode | Yes | Algorithm 1: Learn the transitive closure of the graph, given access to CI testing and query access to samples from the causal model under any intervention, i.e., M_do(S_i). Algorithm 2: Learn the observable graph: accepts two parameters, α and the maximum graph degree d_max, and outputs the observable sub-graph and sufficient interventional datasets for inference. |
| Open Source Code | Yes | The results of our experiment can be found at https://github.com/QasimElahi/Code-for-TMLR-paper-Identification-in-Confounded-Additive-Noise-Models. |
| Open Datasets | Yes | In order to demonstrate the effectiveness of our inference scheme, we evaluate it through a semi-synthetic experiment using the HEALTHCARE Bayesian network from the bnlearn repository (Scutari, 2009). |
| Dataset Splits | No | The paper does not explicitly provide details about training/test/validation splits. It mentions using 'finite-sample settings' and 'sample size' for synthetic experiments, and the 'HEALTHCARE Bayesian network' for semi-synthetic data, but no specific split percentages, counts, or methodologies are described. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions the bnlearn library repository and thereby implicitly relies on an R package, but it does not specify version numbers for any of these software components. |
| Experiment Setup | No | The paper describes generating 'randomly generated DAGs' and using 'fixed number of treatments (n = 4)' and various 'sample sizes' for synthetic experiments. For semi-synthetic data, it mentions 'multivariate Gaussian noise is added' and 'Gaussian distribution with a specific variance and mean'. However, it lacks specific experimental setup details such as hyperparameter values (e.g., learning rates, batch sizes, epochs), optimizer settings, or other system-level training configurations. |
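Algorithm 1, as summarized in the table above, learns the transitive closure of the causal graph using query access to the model under interventions. The following is a minimal sketch of the underlying idea only, under our own assumptions: a known 3-node linear chain with additive Gaussian noise, and a simple mean-shift test standing in for the paper's actual CI-testing machinery. The function and variable names are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-node chain X0 -> X1 -> X2 with additive Gaussian noise.
# sample() draws from the model under a point intervention do(X_i = v)
# when do=(i, v), or observationally when do=None.
def sample(n, do=None):
    x = np.zeros((n, 3))
    x[:, 0] = rng.normal(size=n)
    if do is not None and do[0] == 0:
        x[:, 0] = do[1]
    x[:, 1] = 1.2 * x[:, 0] + rng.normal(size=n)
    if do is not None and do[0] == 1:
        x[:, 1] = do[1]
    x[:, 2] = 0.8 * x[:, 1] + rng.normal(size=n)
    if do is not None and do[0] == 2:
        x[:, 2] = do[1]
    return x

n = 100_000
closure = np.zeros((3, 3), dtype=bool)
for i in range(3):
    lo = sample(n, do=(i, -2.0))
    hi = sample(n, do=(i, +2.0))
    for j in range(3):
        if j == i:
            continue
        # j is a descendant of i iff intervening on i shifts j's mean.
        closure[i, j] = abs(hi[:, j].mean() - lo[:, j].mean()) > 0.1

print(closure)  # row i marks the descendants of X_i
```

Because reachability is exactly "an intervention upstream shifts the distribution downstream" in this toy model, the recovered matrix is the transitive closure of the chain: X0 reaches X1 and X2, X1 reaches X2, and no node reaches an ancestor.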
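The paper's target quantity is the average outcome under an intervention (AOI) in a confounded additive noise model. As an illustrative sketch of why this is nontrivial (not the paper's method), the toy linear model below includes a hidden confounder U acting on both the treatment T and the outcome Y; all names, coefficients, and the window-based conditional estimate are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000

b = 2.0  # causal coefficient of T on Y (illustrative value)

# Additive noise model with a latent confounder U influencing both T and Y,
# so the observational regression E[Y | T = t] is biased for E[Y | do(T = t)].
U = rng.normal(size=n)
T = U + rng.normal(size=n)          # T = U + N_T
Y = b * T + U + rng.normal(size=n)  # Y = b*T + U + N_Y

t0 = 1.0

# Naive observational estimate: average Y over samples with T near t0 (biased).
mask = np.abs(T - t0) < 0.05
naive = Y[mask].mean()

# Interventional estimate: simulate do(T = t0), severing the U -> T dependence.
Y_do = b * t0 + U + rng.normal(size=n)
aoi = Y_do.mean()  # true AOI is b * t0 = 2.0

print(naive, aoi)
```

Here E[Y | T = t0] picks up the term E[U | T = t0] = 0.5·t0, so the naive estimate sits near 2.5 while the interventional mean is 2.0; identifying the latter from data without the ability to simulate interventions is exactly the problem the paper addresses.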