A Gibbs Sampler for Learning DAGs
Authors: Robert J. B. Goudie, Sach Mukherjee
JMLR 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically examine the performance of the sampler using several simulated and real data examples. The proposed method gives robust results in diverse settings, outperforming several existing Bayesian and frequentist methods. |
| Researcher Affiliation | Academia | Robert J. B. Goudie EMAIL Medical Research Council Biostatistics Unit Cambridge CB2 0SR, UK Sach Mukherjee EMAIL German Centre for Neurodegenerative Diseases (DZNE) Bonn 53175, Germany |
| Pseudocode | Yes | Algorithm 1 A Gibbs sampler for learning DAGs, with blocks; Algorithm 2 A Gibbs sampler for learning DAGs, with general blocks |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. It mentions using 'their implementation of the REV sampler' for comparison, but there is no statement or link regarding the authors' own code release. |
| Open Datasets | Yes | We applied the methods to the Zoo data set (Newman et al., 1998) that records p = 17 (discrete) characteristics of n = 101 animals. The ALARM network (Beinlich et al., 1989) consists of 37 discrete nodes and 46 edges and has been widely used in studying structure learning (e.g. Friedman and Koller, 2003; Grzegorczyk and Husmeier, 2008). We simulated data following a procedure described in Kalisch and Bühlmann (2007). The publicly available Behavioral Risk Factor Surveillance System Survey (BRFSS) (Centers for Disease Control and Prevention, 2008). We used single-cell molecular data from Bendall et al. (2011). |
| Dataset Splits | No | The paper mentions data sample sizes (e.g., n = 100, 500, 1000, 2500, 5000 for ALARM data) and bootstrapping for stability analysis, but does not provide specific training/test/validation dataset splits in the context of model evaluation or reproduction of experiments. The context is Bayesian structure learning, which often uses the entire dataset for posterior inference, and MCMC chain analysis for convergence. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions using 'multinomial-Dirichlet (Heckerman et al., 1995)' and 'Normal with a g-prior (with g = n) (Geiger and Heckerman, 1994; Zellner, 1986)' for conjugate formulations, and implementing 'the algorithm introduced by King and Sagert (2002)'. However, it does not specify any software names with version numbers, such as programming languages, libraries, or solvers used for the implementation. |
| Experiment Setup | Yes | To reduce the computational costs of structure learning it is common to set a maximum in-degree (e.g. Friedman and Koller, 2003; Grzegorczyk and Husmeier, 2008). We set a maximum in-degree κ = 3 in all empirical examples, except where stated otherwise (Section 5). This facilitates sampling from the conditional distribution in Equation 1 by reducing the computational cost of evaluating the normalising constant. We set the block size q = |W| = 3. ... For the constraint-based methods, the significance level was α = 0.05 by default, but we also show some results for α = 0.00005, 0.0001, 0.0005, 0.001, 0.005, 0.01, 0.1. The Gibbs sampler we use is a random-scan sampler, with q = 3 (i.e. the parent sets of three nodes are sampled jointly at each iteration). ... In total, we drew 10^6 iterations of REV (retaining only every 10th iteration to reduce storage requirements). ... and drew 10^6 iterations of the Gibbs sampler (again retaining every 10th iteration). ... we performed 10^7 iterations of MC3 (retaining every 100th). For each sampler, 10 independent runs starting from different initial graphs were performed. We discarded the first 1/4 of samples. |
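The experiment-setup row above describes a random-scan Gibbs sampler over parent sets with block size q = 3, a maximum in-degree κ = 3, thinning, and a 1/4 burn-in discard. As a rough illustration of that sampling loop (not the authors' implementation; the paper samples the block's parent sets jointly, whereas this sketch resamples nodes in the block one at a time, and the `score` function here is a hypothetical placeholder for the marginal-likelihood score):

```python
import itertools
import math
import random


def is_acyclic(parents, p):
    """Kahn's algorithm: check that the graph given by parent sets is a DAG."""
    indeg = {v: len(parents[v]) for v in range(p)}
    children = {v: [] for v in range(p)}
    for child, ps in parents.items():
        for u in ps:
            children[u].append(child)
    stack = [v for v in range(p) if indeg[v] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for c in children[u]:
            indeg[c] -= 1
            if indeg[c] == 0:
                stack.append(c)
    return seen == p


def gibbs_dags(p, score, n_iter=2000, q=3, kappa=3, thin=10,
               burn_frac=0.25, rng=None):
    """Random-scan blocked Gibbs sketch: resample parent sets of q nodes
    per iteration, subject to acyclicity and max in-degree kappa."""
    rng = rng or random.Random(0)
    parents = {v: frozenset() for v in range(p)}  # start from the empty DAG
    samples = []
    for it in range(n_iter):
        block = rng.sample(range(p), q)  # random scan: pick q nodes
        # Simplification: update nodes in the block sequentially rather
        # than drawing the q parent sets jointly as in the paper.
        for v in block:
            others = [u for u in range(p) if u != v]
            cands, weights = [], []
            for k in range(kappa + 1):  # enumerate parent sets up to size kappa
                for ps in itertools.combinations(others, k):
                    trial = dict(parents)
                    trial[v] = frozenset(ps)
                    if is_acyclic(trial, p):
                        cands.append(frozenset(ps))
                        weights.append(math.exp(score(v, ps)))
            parents[v] = rng.choices(cands, weights=weights, k=1)[0]
        if it % thin == 0:  # thinning to reduce storage
            samples.append(dict(parents))
    return samples[int(burn_frac * len(samples)):]  # discard first 1/4 as burn-in
```

For example, with a toy score that penalises large parent sets, `gibbs_dags(p=4, score=lambda v, ps: -len(ps), q=2, kappa=2)` returns a list of post-burn-in DAG samples (node → parent set), from which posterior edge frequencies can be tabulated. The full-enumeration step is what the in-degree cap κ makes tractable, mirroring the paper's note that κ reduces the cost of the normalising constant.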