Exact Algorithms for MRE Inference

Authors: Xiaoyuan Zhu, Changhe Yuan

JAIR 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our empirical evaluations show that the proposed BFBnB algorithms make exact MRE inference tractable in Bayesian networks that could not be solved previously. [...] 6. Experiments
Researcher Affiliation | Academia | Xiaoyuan Zhu EMAIL Changhe Yuan EMAIL, Queens College, City University of New York, 65-30 Kissena Blvd., Queens, NY 11367
Pseudocode | Yes | Algorithm 1 Compiling Minimal Target Blanket Decomposition [...] Algorithm 2 Merging Target Blankets [...] Algorithm 3 Splitting Target Blankets [...] Algorithm 4 Compile Belief Ratio Tables [...] Algorithm 5 BFBnB Algorithm Based on Target Blanket Upper Bounds
Open Source Code | No | The paper does not contain any explicit statement about making the source code available, nor does it provide a link to a code repository.
Open Datasets | Yes | The proposed algorithms are evaluated on six benchmark diagnostic Bayesian networks listed in Table 1, i.e., Alarm (Ala), Carpo (Car), Hepar (Hep), Insurance (Ins), Emdec6h (Emd), and CPCS179 (Cpc) (Beinlich, Suermondt, Chavez, & Cooper, 1989; Binder, Koller, Russell, & Kanazawa, 1997; Onisko, 2003; Pradhan, Provan, Middleton, & Henrion, 1994).
Dataset Splits | No | In the 12-target setting, we randomly generated five test settings of each network, each setting consisting of all leaf nodes as evidence, 12 of the remaining nodes as targets, and others as auxiliary nodes. Then for each setting, we randomly generated 20 configurations of evidence (test cases) by sampling from the prior distributions of the networks. In the difficult-target setting, we randomly generated five test settings of each network, each setting consisting of all leaf nodes as evidence, around 20 of the remaining nodes as targets, and others as auxiliary nodes. The number of targets is selected so that the test cases are too challenging for BFBF but are still solvable by MPBnd and SPBnd. Then for each setting, we randomly generated 20 configurations of evidence (test cases) by sampling from the prior distributions of the networks. This describes how test cases are generated and how target/evidence variables are selected, but not traditional training/validation/test dataset splits.
Hardware Specification | Yes | The experiments were performed on a 2.67GHz Intel Xeon CPU E7 with 512G RAM running a 3.7.10 Linux kernel.
Software Dependencies | No | The paper mentions "running a 3.7.10 Linux kernel" but does not specify any versioned programming languages, libraries, or solvers relevant to implementing the methodology.
Experiment Setup | Yes | In MPBnd and SPBnd, we set the maximum number of targets in a target blanket K to be 18. In SPBnd, we set the maximum number of enclosed-targets in a target blanket N to be 7. [...] In tabu search, we set the number of search steps since the last improvement L and the maximum number of search steps M according to different network settings. In the 12-target setting, we set L to be 20 and M to be {400, 800, 1600, 3200, 6400}. In the difficult-target setting, we set L to be 80 and M to be {12800, 25600, 51200}.
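The test-case generation quoted under Dataset Splits — drawing evidence configurations by forward (ancestral) sampling from the network's prior — can be sketched as follows. This is a minimal illustration only: the network, node names, and CPTs below are invented toy values, not the paper's benchmarks (Alarm, Carpo, etc.), and the paper's own sampler is not reproduced here.

```python
import random

# Hypothetical toy network with binary nodes (0/1), purely for illustration.
# Each entry: node -> (parents, CPT mapping parent-value tuple -> P(node = 1)).
NETWORK = {
    "A": ((), {(): 0.3}),
    "B": (("A",), {(0,): 0.2, (1,): 0.7}),
    "C": (("A", "B"), {(0, 0): 0.1, (0, 1): 0.5, (1, 0): 0.4, (1, 1): 0.9}),
}
TOPO_ORDER = ["A", "B", "C"]   # parents always precede children
LEAVES = ["C"]                 # leaf nodes serve as evidence, per the paper's setup


def forward_sample(rng):
    """Draw one full configuration from the prior by ancestral sampling."""
    sample = {}
    for node in TOPO_ORDER:
        parents, cpt = NETWORK[node]
        p_one = cpt[tuple(sample[p] for p in parents)]
        sample[node] = 1 if rng.random() < p_one else 0
    return sample


def generate_test_cases(n_cases, seed=0):
    """Sample n_cases evidence configurations (leaf values only) from the prior."""
    rng = random.Random(seed)
    return [{v: forward_sample(rng)[v] for v in LEAVES}
            for _ in range(n_cases)]


if __name__ == "__main__":
    # 20 evidence configurations per setting, as described in the paper.
    for case in generate_test_cases(20):
        print(case)
```

Fixing the seed makes each batch of test cases reproducible, which is the property the Dataset Splits entry is probing for; the paper itself does not state whether its random settings were seeded or released.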