Bayesian Spiked Laplacian Graphs

Authors: Leo L Duan, George Michailidis, Mingzhou Ding

JMLR 2023

Reproducibility Variable — Result — LLM Response
Research Type — Experimental — "We illustrate the performance of the methodology on synthetic data sets, as well as a neuroscience study related to brain activity in working memory. Keywords: Isoperimetric Constant, Mixed-Effect Eigendecomposition, Normalized Graph Cut, Stiefel Manifold." "Section 5 evaluates the model performance based on synthetic data, while Section 6 illustrates the modeling approach in a data application aiming to characterize the heterogeneity in brain scans in a human working memory study."
Researcher Affiliation — Academia — Leo L Duan, Department of Statistics, University of Florida; George Michailidis, Department of Statistics, University of Florida; Mingzhou Ding, Department of Biomedical Engineering, University of Florida.
Pseudocode — Yes — Algorithm 1 (Sign-based κ-partitioning). Initialize: V_1^[1] = {1, ..., n}; re-order {q_k}_{k=1}^T according to non-descending order of λ_k, denoted {q_(k)}_{k=1}^T. For k = 1 to (κ − 1): 1. Compute the sign-based partitioning loss when dividing the l-th existing partition V_l^[k], for l = 1, ..., k: loss_l^[k] = Σ_{i,j ∈ V_l^[k]} 1[ q_(k)(i) · q_(k)(j) < 0 ].
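The quoted algorithm can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' implementation: the function names are ours, and we assume the partition chosen for division at each step is the one with the most sign-discordant pairs (the excerpt above does not state the selection rule).

```python
import numpy as np

def sign_split_loss(q, part):
    """Number of vertex pairs (i, j) in `part` with q(i) * q(j) < 0."""
    s = np.sign(q[part])
    return int(np.sum(s > 0) * np.sum(s < 0))

def sign_based_partition(Q, lam, kappa):
    """Greedy sign-based kappa-partitioning (illustrative sketch).

    Q     : (n, T) array whose columns are eigenvectors q_k
    lam   : (T,) array of eigenvalues lambda_k
    kappa : desired number of partitions
    """
    order = np.argsort(lam)              # non-descending order of lambda_k
    Q = Q[:, order]
    parts = [np.arange(Q.shape[0])]      # V_1^[1] = {1, ..., n}
    for k in range(kappa - 1):
        q = Q[:, k]
        losses = [sign_split_loss(q, p) for p in parts]
        l = int(np.argmax(losses))       # assumed rule: split the most discordant partition
        p = parts.pop(l)
        parts.append(p[q[p] >= 0])       # divide by the sign of q_(k)
        parts.append(p[q[p] < 0])
    return parts
```

For example, an eigenvector with signs (+, +, −, −) over four vertices splits them into {0, 1} and {2, 3} when κ = 2.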
Open Source Code — Yes — "The software implementation can be found on https://github.com/leoduan/BayesSpikedLaplacian."
Open Datasets — No — Section 6, "Data Application: Characterizing Heterogeneity in a Human Working Memory Study": "We employ the proposed spiked graph Laplacian model on data obtained from a neuroscience study on working memory, focusing on human brain functional connectivity (Hu et al., 2019)."
Dataset Splits — No — The paper describes the generation of synthetic data (e.g., "We generate a weighted graph comprising of 60 vertices and three communities of size 10, 20 and 30 vertices") but does not specify how these, or the real-world neuroscience data, were split into training, validation, or test sets for experimental evaluation.
Hardware Specification — No — The paper does not provide any specific details about the hardware used to run the experiments, such as GPU or CPU models, memory, or other computational resources.
Software Dependencies — No — The paper points to a software implementation on GitHub (https://github.com/leoduan/BayesSpikedLaplacian) but does not list specific software dependencies with version numbers (e.g., a language version or versions of the libraries used).
Experiment Setup — Yes — "To simplify computations, we approximate the Dirichlet process mixture model with a truncated version, setting the number of mixture components to g and using Dir(α0/g, ..., α0/g) (in this paper, we use g = 30). The results obtained are based on an MCMC run of 30,000 steps, with the first 10,000 used as the burn-in period."
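The truncated approximation quoted above replaces the infinite Dirichlet process with a finite symmetric Dirichlet prior on g mixture weights. A minimal sketch of drawing such weights (the function name and the α0 value are illustrative; the paper fixes only g = 30):

```python
import numpy as np

def truncated_dp_weights(alpha0, g, rng):
    """Finite approximation to Dirichlet process mixture weights:
    a single draw from the symmetric prior Dir(alpha0/g, ..., alpha0/g)."""
    return rng.dirichlet(np.full(g, alpha0 / g))

rng = np.random.default_rng(0)
w = truncated_dp_weights(alpha0=1.0, g=30, rng=rng)  # g = 30 as in the paper
```

With a small concentration α0, most of the g = 30 weights are near zero, so the truncation behaves like a Dirichlet process in concentrating mass on a few active components.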