Extendable and Iterative Structure Learning Strategy for Bayesian Networks
Authors: Hamid Kalantari, Russell Greiner, Pouria Ramazi
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical evaluations demonstrate runtime reductions of up to 1300 without compromising accuracy. Building on this approach, we introduce a novel iterative paradigm for structure learning over X. This method achieves runtime advantages over common algorithms while maintaining similar accuracy. Our contributions provide a scalable solution for Bayesian network structure learning, enabling efficient model updates in real-time and high-dimensional settings. We compare the results of our Extendable PC algorithm against the PC and Initialized PC algorithms on thirteen benchmark data sets (ASIA, CANCER, SURVEY, EARTHQUAKE, ALARM, INSURANCE, CHILD, WATER, SACHS, MILDEW, WIN95PTS, HEPAR2, and ANDES; full citations under Open Datasets below). For each task, we first draw 10000 instances from the distributions for use in the structure learning algorithms. The number of CI tests for the PC, Initialized PC, and Extendable PC algorithms is shown in Table 2 and the runtimes in Table 3. In addition, using the structural Hamming distance, we recorded the number of incorrect edges (either missing or extra) relative to the true graph and divided it by the total number of edges in the true DAG (Table 4). |
| Researcher Affiliation | Academia | Hamid Kalantari & Russell Greiner Department of Computing Science University of Alberta Edmonton, Alberta T6G 2R3, Canada EMAIL Pouria Ramazi Department of Mathematics & Statistics Brock University St. Catharines, ON L2S 3A1, Canada EMAIL |
| Pseudocode | Yes | Algorithm 1: The Extendable PC Algorithm; Algorithm 2: The Extendable Score-based Algorithm; Algorithm 3: The Iterative P-map Learner Algorithm; Algorithm 4: The Extendable Constraint-based Algorithm; Algorithm 5: The Iterative PC Algorithm; Algorithm 6: The PC Algorithm; Algorithm 7: The Extendable Exhaustive Search Structure Learning Algorithm; Algorithm 8: T-function (search space trimming) |
| Open Source Code | No | The paper does not provide any explicit statement about releasing code, a link to a code repository, or mention of code in supplementary materials. |
| Open Datasets | Yes | We now compare the results of our Extendable PC algorithm with the PC, and Initialized PC algorithms on the data sets ASIA (Lauritzen & Spiegelhalter, 1988), CANCER (Korb & Nicholson, 2010), SURVEY (Scutari & Denis, 2021), EARTHQUAKE (Korb & Nicholson, 2010), ALARM (Beinlich et al., 1989), INSURANCE (Binder et al., 1997), CHILD (Spiegelhalter & Cowell, 1992), WATER (Jensen et al., 1989), SACHS (Jensen & Jensen, 2013), MILDEW (Jensen & Jensen, 2013), WIN95PTS (Jensen & Jensen, 2013), HEPAR2 (Onisko, 2003), and ANDES (Conati et al., 1997). |
| Dataset Splits | No | For each task, we first draw 10000 instances from distributions for use in structure learning algorithms. The paper mentions drawing 10000 instances for structure learning but does not specify any training, validation, or test splits for these instances. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks). |
| Experiment Setup | No | The paper describes the algorithms and presents numerical results based on running these algorithms on various datasets. However, it does not provide specific experimental setup details such as hyperparameter values (e.g., learning rates, batch sizes, specific configurations for the PC algorithm beyond its general description), model initialization, or training schedules that would be needed for reproduction. |
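The accuracy metric quoted above (structural Hamming distance divided by the number of edges in the true DAG) is straightforward to reproduce. The sketch below is an illustrative implementation, not code from the paper: it treats edges as directed tuples and ignores the CPDAG edge-orientation subtleties a full evaluation would need, and the ASIA edge names are the conventional ones from the benchmark, assumed here for the example.

```python
def normalized_shd(true_edges, learned_edges):
    """Structural Hamming distance (missing + extra edges),
    normalized by the number of edges in the true DAG.

    Edges are (parent, child) tuples. This simple sketch compares
    directed edge sets; a faithful evaluation of PC-style output
    would compare CPDAGs instead.
    """
    true_set, learned_set = set(true_edges), set(learned_edges)
    missing = true_set - learned_set   # edges in the true DAG but not recovered
    extra = learned_set - true_set     # spurious edges in the learned graph
    return (len(missing) + len(extra)) / len(true_set)

# Toy example on the 8-edge ASIA network (conventional node names):
true_dag = [("asia", "tub"), ("smoke", "lung"), ("smoke", "bronc"),
            ("tub", "either"), ("lung", "either"),
            ("either", "xray"), ("either", "dysp"), ("bronc", "dysp")]
learned = [("asia", "tub"), ("smoke", "lung"), ("smoke", "bronc"),
           ("tub", "either"), ("lung", "either"),
           ("either", "xray"), ("bronc", "dysp")]  # one edge missing
print(normalized_shd(true_dag, learned))  # 1 missing, 0 extra -> 0.125
```

A score of 0 means the learned graph matches the true DAG exactly; values above 1 are possible when the learned graph contains many spurious edges.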