Testing Causal Models with Hidden Variables in Polynomial Delay via Conditional Independencies

Authors: Hyunchai Jeong, Adiba Ejaz, Jin Tian, Elias Bareinboim

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type Experimental Experiments on real-world and synthetic data demonstrate the practicality of our algorithm. In this section, we first demonstrate the runtime of LISTCI on benchmark DAGs of up to 100 nodes from the bnlearn repository (Scutari 2010). Next, we apply LISTCI to model testing on a real-world protein signaling dataset with an expert-provided graph (Sachs et al. 2005). Third, we provide analysis of the total number of non-vacuous CIs invoked by C-LMP, using LISTCI for the analysis.
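The CIs that LISTCI enumerates are those implied by the graph via d-separation. As a minimal illustration of that notion only (not of the paper's algorithm), the classical moralization criterion for d-separation can be sketched as follows; the graph encoding and function names here are our own:

```python
from itertools import combinations

def ancestors(dag, nodes):
    """Return `nodes` plus all their ancestors in `dag` ({child: {parents}})."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        n = stack.pop()
        for p in dag.get(n, set()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(dag, x, y, z):
    """Check X _||_ Y | Z in a DAG via the ancestral-moral-graph criterion."""
    keep = ancestors(dag, {x, y} | set(z))
    # Moralize the ancestral subgraph: link each node to its parents,
    # and link co-parents of a common child.
    adj = {n: set() for n in keep}
    for n in keep:
        parents = dag.get(n, set()) & keep
        for p in parents:
            adj[n].add(p); adj[p].add(n)
        for a, b in combinations(parents, 2):
            adj[a].add(b); adj[b].add(a)
    # Delete the conditioning set, then test whether y is reachable from x.
    blocked = set(z)
    stack, seen = [x], {x}
    while stack:
        n = stack.pop()
        if n == y:
            return False
        for m in adj[n]:
            if m not in seen and m not in blocked:
                seen.add(m)
                stack.append(m)
    return True
```

For example, in the chain A -> B -> C, `d_separated(dag, "A", "C", ["B"])` holds, while in the collider A -> C <- B, conditioning on C opens the path.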
Researcher Affiliation Academia Hyunchai Jeong* (Purdue University), Adiba Ejaz* (Columbia University), Jin Tian (Mohamed bin Zayed University of Artificial Intelligence), Elias Bareinboim (Columbia University)
Pseudocode Yes Algorithm 1: LISTCI (G, V ) Algorithm 2: LISTCIX (GV X, X, V X, I, R) Algorithm 3: FINDAAC (GV X, X, V X, I, R)
Open Source Code Yes Code: https://github.com/CausalAILab/ListConditionalIndependencies
Open Datasets Yes Experiments with synthetic data and a real-world protein signaling dataset (Sachs et al. 2005)
Dataset Splits No For each U, we generated 10 random samples. The dataset (853 samples) comes with an expert-provided ground-truth DAG. Explanation: The paper mentions generating random samples and using a dataset with a given sample count but does not specify how these samples are split into training, validation, or test sets, nor does it provide details about any specific splitting methodology.
Hardware Specification No The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used for running experiments.
Software Dependencies No We use a kernel-based CI test from the causal-learn package (Zheng et al. 2024) with p-value p = 0.05 (for the null hypothesis of dependence). Explanation: While software packages such as 'causal-learn' and 'bnlearn' are mentioned, specific version numbers for these or any other ancillary software dependencies are not provided in the paper.
Experiment Setup Yes Experiment 2 (Application to model testing). A real-world protein signaling dataset (Sachs et al. 2005) has been used to benchmark causal discovery methods (Cundy, Grover, and Ermon 2021; Zantedeschi et al. 2023). The dataset (853 samples) comes with an expert-provided ground-truth DAG (11 nodes, 16 edges). Using LISTCI, we test to what extent this graph is compatible with the available data. We use a kernel-based CI test from the causal-learn package (Zheng et al. 2024) with p-value p = 0.05 (for the null hypothesis of dependence). For our chosen topological order, seven out of ten CIs invoked by C-LMP resulted in p > 0.05. This suggests the ground-truth DAG may need revision before use as a benchmark for structure learning. The exact local CIs that are violated may guide experts in this revision process.
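The authors test each enumerated CI against data with a kernel-based test from causal-learn. As a lighter-weight, linear-Gaussian stand-in for that step (not the kernel test the paper uses; the function name and data encoding are our own), a Fisher-z partial-correlation test over regression residuals can be sketched as:

```python
import math
import numpy as np

def fisher_z_ci_test(data, i, j, cond=(), alpha=0.05):
    """Two-sided Fisher-z test of X_i _||_ X_j | X_cond on rows of `data`.
    Returns (p_value, independent_flag). A linear-Gaussian stand-in for a
    kernel-based CI test; valid only under roughly Gaussian/linear data."""
    n = data.shape[0]
    x, y = data[:, i], data[:, j]
    if cond:
        # Regress out the conditioning variables (with an intercept).
        Z = np.column_stack([np.ones(n), data[:, list(cond)]])
        x = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
        y = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    r = float(np.corrcoef(x, y)[0, 1])
    r = max(min(r, 0.999999), -0.999999)  # guard against |r| = 1
    z = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n - len(cond) - 3)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return p, p > alpha
```

On data generated from the chain X -> Y -> Z, the marginal test on (X, Z) returns a near-zero p-value (dependence), while conditioning on Y yields a much larger one, mirroring how a graph-implied CI is checked against data.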