Fast Bayesian Inference of Sparse Networks with Automatic Sparsity Determination

Authors: Hang Yu, Songwei Wu, Luyin Xin, Justin Dauwels

JMLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Numerical results show that BISN can achieve comparable or better performance than the state-of-the-art methods in terms of structure recovery, and yet its computational time is several orders of magnitude shorter, especially for large dimensions." "We validate the proposed approach through synthetic and real data in Section 6." "In this section, we benchmark the proposed BISN method with several state-of-the-art methods for automatic graphical model selection."
Researcher Affiliation | Academia | Hang Yu (EMAIL), School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Avenue, 639798 Singapore; Songwei Wu (EMAIL), School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Avenue, 639798 Singapore; Luyin Xin (EMAIL), School of Physical and Mathematical Sciences, Nanyang Technological University, 50 Nanyang Avenue, 639798 Singapore; Justin Dauwels (EMAIL), School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Avenue, 639798 Singapore
Pseudocode | No | The paper describes the proposed method, including derivations for the variational Bayes algorithm and update rules (equations 24-30, 56-58), but it does not present these steps in a dedicated, structured pseudocode block or algorithm box.
Open Source Code | Yes | "The MATLAB C++ MEX code of BISN is available at https://github.com/fhlyhv/BISN. The major part of the code is implemented using the Armadillo C++ template library (Sanderson and Curtin, 2016)."
Open Datasets | Yes | "In this section, we exploit BISN to learn gene regulatory networks from the Rosetta Inpharmatics Compendium of gene expression profiles (Hughes et al., 2000)." "Here we consider the fMRI-based mind-state classification problem described in Mitchell et al. (2004)."
Dataset Splits | Yes | "In order to check how the financial network changes during the financial crisis, we partition the data into three parts: pre-crisis (2006-2007), crisis (2008-2009), and post-crisis (2010-2011), according to the Federal Reserve Bank of St. Louis Financial Crisis Timeline." "We apply leave-one-out cross validation to test the performance of the classifier based on graph structure and show the resulting classification accuracy in the second row in Table 10."
Hardware Specification | No | The paper discusses computational time and complexity extensively, comparing how different methods scale with problem dimension. However, it does not specify any particular hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | Yes | "The MATLAB C++ MEX code of BISN is available at https://github.com/fhlyhv/BISN. The major part of the code is implemented using the Armadillo C++ template library (Sanderson and Curtin, 2016)."
Experiment Setup | Yes | "For BISN, we set the minibatch size s = p/(0.001(p - 1) + 1) and the decaying coefficient r = 0.5." "We further calculate the increased percentage of the inter-sector edges for each sector during the financial crisis by comparing the seventh row with the fourth row in Table 6, and then sort all sectors in the descending order of the increased percentage in Table 7." "Next, we employ BISN to learn a brain network for each of the 40 experiments and for each of the five selected regions individually, and use the network structure to train a random forest (RF) classifier in order to distinguish between the sentence and the picture stimulus."
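The minibatch-size rule quoted under Experiment Setup can be checked with a short sketch. Only the formula s = p/(0.001(p - 1) + 1) comes from the paper; the helper name and the generalized coefficient c are illustrative:

```python
def minibatch_size(p, c=0.001):
    """Minibatch-size rule quoted above: s = p / (c*(p - 1) + 1), with c = 0.001.

    For small dimension p the rule keeps s close to p (near full-batch);
    as p grows, s saturates toward 1/c samples per minibatch.
    """
    return p / (c * (p - 1) + 1)

print(minibatch_size(10))    # close to p itself for small p (~9.91)
print(minibatch_size(1000))  # roughly half of p here (~500.25)
```

This makes the design choice visible: the rule smoothly interpolates between full-batch updates at small p and a bounded minibatch size at large p.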
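The leave-one-out evaluation quoted under Dataset Splits can be sketched as follows. This is a minimal stand-in, not the paper's pipeline: the paper trains a random forest on learned graph-structure features, which are replaced here by a plain nearest-neighbour rule on generic feature vectors purely to illustrate the protocol:

```python
def loocv_accuracy(X, y, classify):
    """Leave-one-out cross-validation: hold out each sample once,
    fit on the remaining samples, and average per-sample accuracy."""
    correct = 0
    for i in range(len(X)):
        train_X = X[:i] + X[i + 1:]
        train_y = y[:i] + y[i + 1:]
        correct += classify(train_X, train_y, X[i]) == y[i]
    return correct / len(X)

def nearest_neighbor(train_X, train_y, x):
    # Stand-in classifier (the paper uses a random forest on graph features):
    # predict the label of the closest training point in squared distance.
    dists = [sum((a - b) ** 2 for a, b in zip(xi, x)) for xi in train_X]
    return train_y[dists.index(min(dists))]

# Toy example: two well-separated clusters, labels 0 and 1.
X = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0), (5.0, 6.0)]
y = [0, 0, 1, 1]
print(loocv_accuracy(X, y, nearest_neighbor))  # 1.0 on this toy data
```

Any classifier with the same `(train_X, train_y, x) -> label` signature can be swapped in, which is how a random-forest model would slot into this evaluation loop.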