Bayesian Active Learning for Bivariate Causal Discovery

Authors: Yuxuan Wang, Mingzhou Liu, Xinwei Sun, Wei Wang, Yizhou Wang

ICML 2025

Reproducibility Variable — Result — LLM Response
Research Type — Experimental. "Experimental results on bivariate systems, tree-structured graphs, and an embodied AI environment demonstrate the effectiveness of our framework in direction determination and its extensibility to both multivariate settings and real-world applications."
Researcher Affiliation — Academia. (1) School of Computer Science, Peking University, Beijing, China; (2) School of Data Science, Fudan University, Shanghai, China; (3) State Key Laboratory of General Artificial Intelligence, BIGAI, Beijing, China; (4) School of Computer Science, Inst. for Artificial Intelligence, State Key Laboratory of General Artificial Intelligence, Peking University, Beijing, China.
Pseudocode — Yes. Algorithm 1 (Bayesian Active Intervention) and Algorithm 2 (Dynamic Programming for Optimal Intervention Design).
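The paper gives these algorithms only as pseudocode. As a rough illustration of what a Bayesian active-intervention loop of this kind typically looks like (the function names, likelihood model, and stopping rule below are our assumptions, not the authors' implementation), one might sketch:

```python
def bayes_active_intervention(prior, budget, k0, k1, do_experiment, likelihood):
    """Hypothetical sketch of a Bayesian active-intervention loop.

    Maintains posterior odds for the hypothesis X -> Y versus Y -> X and
    performs interventions until the evidence crosses a threshold
    (k0 for acceptance, k1 for rejection) or the budget is exhausted.
    `do_experiment` and `likelihood` are placeholder callables, not the
    paper's actual intervention design or likelihood model.
    """
    odds = prior / (1.0 - prior)  # posterior odds of X -> Y
    for _ in range(budget):
        if odds >= k0 or odds <= k1:  # evidence threshold reached
            break
        outcome = do_experiment()  # run one intervention
        # Bayesian update: multiply the odds by the likelihood ratio
        odds *= likelihood(outcome, "x->y") / likelihood(outcome, "y->x")
    return "x->y" if odds >= 1.0 else "y->x"
```

With the settings reported in the Experiment Setup row (k0 = 10, k1 = 1/10, B = 100), such a loop would stop as soon as the posterior odds leave the interval [1/10, 10].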
Open Source Code — No. The paper neither states that the code is open-sourced nor links to a code repository.
Open Datasets — No. For bivariate causal discovery, tree-structured causal graph learning, and causal reasoning in embodied AI, the authors explicitly state that they generated their own data and experimental environments; no access information for a public dataset is provided.
Dataset Splits — No. The paper describes data-generation processes and evaluations over multiple replications or simulations (e.g., '100 replications under H0 and H1', 'randomly sample 200 trees', 'repeat the generation process 100 times'), but gives no conventional training/validation/test split of a fixed dataset.
Hardware Specification — Yes. The switch-light reasoning task is implemented on the Tong Sim (Peng et al., 2024) engine running on a server with NVIDIA 2080 Ti GPUs.
Software Dependencies — No. The paper mentions the Adam optimizer and the Tong Sim engine, but gives no version numbers for any software dependency or library.
Experiment Setup — Yes. "We set the evidence levels in (1b) and (1c) to k0 = 10 and k1 = 1/10, respectively. For Alg. 1, we set the total budget to B = 100 and the number of observational samples to |Dobs| = 1000. For the optimization over continuous variables, we employ the Adam optimizer with a learning rate of 0.1 for a total of 4000 iterations. We decay the learning rate to 0.001 after the first 200 steps."
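The reported learning-rate schedule is a simple piecewise-constant decay. A minimal sketch (the function name and signature are ours, not from the paper):

```python
def learning_rate(step, base_lr=0.1, decayed_lr=0.001, decay_after=200):
    """Piecewise-constant schedule matching the reported setup:
    0.1 for the first 200 steps, then 0.001 for the remainder of the
    4000-step optimization."""
    return base_lr if step < decay_after else decayed_lr
```

In a framework such as PyTorch, the same schedule could be realized by passing an equivalent step function to a lambda-based learning-rate scheduler attached to the Adam optimizer.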