Penalized Maximum Likelihood Estimation of Multi-layered Gaussian Graphical Models
Authors: Jiahe Lin, Sumanta Basu, Moulinath Banerjee, George Michailidis
JMLR 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The performance of the maximum likelihood estimator is illustrated on synthetic data. ... In Section 4, we show the performance of the proposed algorithm with simulation results under different simulation settings, and introduce several acceleration techniques which speed up the convergence of the algorithm and reduce the computing time in practical settings. |
| Researcher Affiliation | Academia | Jiahe Lin EMAIL Department of Statistics University of Michigan Ann Arbor, MI 48109, USA; Sumanta Basu EMAIL Department of Statistics University of California, Berkeley Berkeley, CA 94720, USA; Moulinath Banerjee EMAIL Department of Statistics University of Michigan Ann Arbor, MI 48109, USA; George Michailidis EMAIL Department of Statistics and Computer & Information Science & Engineering University of Florida Gainesville, FL 32611, USA |
| Pseudocode | Yes | Algorithm 1: Computational procedure for estimating B and Θϵ |
| Open Source Code | No | The paper does not explicitly state that the authors are releasing their own code for the methodology described. It mentions using 'the default choice of tuning parameters suggested in the implementation of the code provided in Javanmard and Montanari (2014)' and an 'R package version 1.2.7' for related work, but not their specific implementation. |
| Open Datasets | No | The performance of the maximum likelihood estimator is illustrated on synthetic data. ... For the 2-layer network... For a 3-layer network, we consider the following data generation mechanism: for all three models A, B and C, each entry in BXY is nonzero with probability 5/p1... |
| Dataset Splits | No | The paper uses synthetic data generated according to specified models (e.g., Model A, B, C) and does not describe or specify any training/test/validation splits for these generated datasets. |
| Hardware Specification | No | The paper mentions 'distribute the computation on 8 cores' for parallelization, but does not provide specific details about the CPU models, GPU models, or other hardware specifications used for the experiments. |
| Software Dependencies | No | The paper mentions using the 'de-biased Lasso procedure proposed by Javanmard and Montanari (2014)' and the 'Graphical Lasso (Friedman et al., 2008)', but does not provide specific version numbers for the software implementations of these methods. It also cites an R package 'huge: High-dimensional undirected graph estimation, 2015. URL http://CRAN.R-project.org/package=huge. R package version 1.2.7' but it's not clear if this is a direct dependency for their proposed method's implementation or a tool used for comparison/background. |
| Experiment Setup | Yes | In practice, for the de-biased Lasso procedure, we use the default choice of tuning parameters suggested in the implementation of the code provided in Javanmard and Montanari (2014); for FWER control, we suggest using α = 0.1 as the thresholding level; for tuning parameter selection, we suggest doing a grid search for (λn, ρn) on [0, 0.5√(log p1/n)] × [0, 0.5√(log p2/n)] with BIC. |
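The tuning strategy quoted above (grid search over a penalty range of the form [0, 0.5√(log p/n)], selecting by BIC) can be illustrated with a generic sketch. This is not the authors' implementation: it uses a plain coordinate-descent lasso for a single regression penalty, whereas the paper searches jointly over (λn, ρn) for a multi-layered model; the data, grid size, and BIC form `n·log(RSS/n) + df·log(n)` here are illustrative assumptions.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Generic coordinate-descent lasso for (1/2)||y - Xb||^2 + n*lam*||b||_1.
    A sketch only, not the paper's multi-layer estimator."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding coordinate j, then soft-threshold.
            r_j = y - X @ beta + X[:, j] * beta[j]
            z = X[:, j] @ r_j
            beta[j] = np.sign(z) * max(abs(z) - n * lam, 0.0) / col_sq[j]
    return beta

def bic(X, y, beta):
    """BIC with degrees of freedom = number of nonzero coefficients."""
    n = X.shape[0]
    rss = ((y - X @ beta) ** 2).sum()
    return n * np.log(rss / n) + np.count_nonzero(beta) * np.log(n)

# Illustrative synthetic data (not the paper's simulation models A/B/C).
rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.5 * rng.standard_normal(n)

# Grid on (0, 0.5*sqrt(log p / n)], mirroring the paper's suggested range,
# and pick the penalty minimizing BIC.
grid = np.linspace(1e-4, 0.5 * np.sqrt(np.log(p) / n), 10)
best_lam = min(grid, key=lambda lam: bic(X, y, lasso_cd(X, y, lam)))
```

In the paper's setting this search runs over a two-dimensional grid for (λn, ρn), one penalty per layer, but the BIC-minimization logic is the same.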