AND/OR Search for Marginal MAP

Authors: Radu Marinescu, Junkyu Lee, Rina Dechter, Alexander Ihler

JAIR 2018

Reproducibility Variable / Result / LLM Response:

Research Type: Experimental
LLM Response: "Our empirical evaluations conclude that a class of solvers that interleaves depth-first with best-first schemes emerges as most competitive anytime scheme. [...] Through extensive empirical evaluations on a variety of benchmarks we demonstrate the effectiveness of these algorithms as exact schemes against previous unconstrained join-tree based methods, which we also extend to be sensitive to high induced-width models."

Researcher Affiliation: Collaboration
LLM Response: Radu Marinescu (IBM Research, Dublin, Ireland); Junkyu Lee, Rina Dechter, and Alexander Ihler (University of California, Irvine, Irvine, CA 92697, USA).

Pseudocode: Yes
LLM Response: "Algorithm 1: WMB-MM(i) for marginal MAP [...] Algorithm 10: AAOBF for marginal MAP"

Open Source Code: No
LLM Response: The paper states that "all competing algorithms were implemented in C++", but it does not provide any concrete access information for the methodology described, such as a repository link, an explicit code-release statement, or a mention of code in supplementary materials.

Open Datasets: Yes
LLM Response: "The benchmark problem sets are derived from the Pascal2 competition problems (Elidan et al., 2012). They include (1) the grid domain, a collection of random grid networks; (2) the pedigree domain, a collection of genetic linkage analysis problems; and (3) the promedas domain, a collection of Markov networks modeling an expert system for medical diagnosis tasks. [...] PASCAL 2011 probabilistic inference challenge. http://www.cs.huji.ac.il/project/PASCAL/."

Dataset Splits: Yes
LLM Response: "For each problem instance, we selected m variables and marked them as MAP variables, while the remaining ones were marked as summation variables. Specifically, we generated two types of marginal MAP instances with m MAP variables for each network: easy and hard. The easy instances were generated by selecting the first m variables from a breadth-first traversal of a pseudo tree obtained from a join-tree decomposition of the primal graph, and the hard instances were generated by selecting the MAP variables uniformly at random. [...] In our experiments, m is selected to be equal to 50% of the variables (see also Table 3 for more details)."

Hardware Specification: Yes
LLM Response: "All competing algorithms were implemented in C++, and the experiments were conducted on a cluster with 27 computing nodes. Each node has a dual-core 2.67 GHz Intel Xeon X5650 CPU with 24 GB of RAM and runs a 64-bit Linux operating system."

Software Dependencies: No
LLM Response: The paper mentions the implementation language (C++) and operating system (Linux) but does not give version numbers for compilers, libraries, or any other key software components used in the experiments.

Experiment Setup: Yes
LLM Response: "The hyperparameters of BRAOBB and RBFAOO are as follows: we set the rotation limit to 1,000 for BRAOBB and the overestimation parameter to 1.0 for RBFAOO. [...] The initial weight α0 used by the weighted best-first AND/OR search algorithms WAOBF, WAOBF-REP, and WRBFAOO is 64, while the weight update schedule is αi+1 = αi. The cutoff parameter of LAOBF is 1,000."
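The easy/hard instance-generation procedure quoted under "Dataset Splits" can be sketched as follows. This is a minimal illustration, not the authors' code: the dict-based pseudo-tree representation, the function name, and the toy tree are all assumptions introduced here.

```python
import random
from collections import deque

def select_map_variables(pseudo_tree, root, m, mode="easy", seed=0):
    """Select m MAP variables from a model's pseudo tree.

    pseudo_tree: dict mapping each variable to a list of its children.
    mode="easy": take the first m variables in breadth-first order from
                 the root, so MAP variables cluster near the top of the tree.
    mode="hard": pick m variables uniformly at random.
    The remaining variables would be treated as summation variables.
    """
    variables = list(pseudo_tree.keys())
    if mode == "hard":
        return set(random.Random(seed).sample(variables, m))
    # Easy instances: breadth-first traversal of the pseudo tree.
    order, queue = [], deque([root])
    while queue:
        v = queue.popleft()
        order.append(v)
        queue.extend(pseudo_tree.get(v, []))
    return set(order[:m])

# Toy pseudo tree: variable 0 is the root with children 1 and 2, etc.
tree = {0: [1, 2], 1: [3], 2: [4], 3: [], 4: []}
print(select_map_variables(tree, root=0, m=2, mode="easy"))  # selects {0, 1}
```

With m set to 50% of the variables, as in the experiments, the easy variant concentrates the MAP variables along the top of the pseudo tree, while the hard variant scatters them throughout the model.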