Structured Optimal Variational Inference for Dynamic Latent Space Models
Authors: Peng Zhao, Anirban Bhattacharya, Debdeep Pati, Bani K. Mallick
JMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Simulations and real data analysis demonstrate the efficacy of our methodology and the efficiency of our algorithm. |
| Researcher Affiliation | Academia | Peng Zhao, Department of Applied Economics and Statistics, University of Delaware, Newark, DE 19716, USA; Anirban Bhattacharya, Debdeep Pati, and Bani K. Mallick, Department of Statistics, Texas A&M University, College Station, TX 77843, USA |
| Pseudocode | No | The paper describes the computational steps for SMF in prose and mathematical equations in Section 3.2, but does not present them in a clearly labeled pseudocode or algorithm block. |
| Open Source Code | Yes | Reproducible implementations and experiments are publicly available at https://github.com/pengzhaostat/SMF-structured-variational-inference. |
| Open Datasets | Yes | Using the Enron email data set (Klimt and Yang, 2004), we compare our model with the latent space model with the same likelihood but with an inverse Gamma prior on the transition variance. McFarland's streaming classroom data set provides interactions of conversation turns from streaming observations of a class observed by Daniel McFarland in 1996 (McFarland, 2001). The data set is available in the R package networkDynamic (Butts et al., 2020). |
| Dataset Splits | Yes | With the dynamic networks, we consider each edge to be missing independently with probability p = 0.01, 0.02, ..., 0.1, train the two latent space models without the missing entries, and then make predictions for the held-out edges. For t = 3, 4, ..., 8, the first t − 1 networks are used as the training data, while the t-th network is used as the test data. |
| Hardware Specification | No | The paper does not explicitly mention any specific hardware (e.g., GPU/CPU models, processors) used for running the experiments. |
| Software Dependencies | Yes | The R package Bessel (Maechler, 2019) is mentioned for calculating the modified Bessel function. The R package networkDynamic (Butts et al., 2020) is mentioned for the McFarland classroom data set. The R package ndtv (Network Dynamic Temporal Visualizations; Bender-de Moll and Morris, 2021; version 0.13.1, https://CRAN.R-project.org/package=ndtv) is also cited. |
| Experiment Setup | Yes | Throughout all simulation and real data analyses, we fix the fractional power α = 0.95. We also fix the hyperparameters aσ0 = 1/2, bσ0 = 1/2 and cτ = 1, dτ = 1/2 whenever the prior (3) is used. We set the transition smoothness τ = 0.01, 0.05, 0.1, the sample size n = 10, 20, 50, the number of time points T = 100, and the correlation ρ = 0.5. The stopping criterion is taken to be the difference between training AUCs (area under the curve) in two consecutive cycles not exceeding 0.01. We ran MCMC with 100, 200 and 5000 iterations using a Gibbs Sampler algorithm, where each coefficient was sampled from its full conditional distribution. For the MCMC chain, we discarded the first half of iterations as burn-in and used the sample means from the last half of iterations to calculate the estimator. |
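The two data splits quoted in the table (independent edge masking with probability p, and the forward-in-time split where the first t − 1 networks train the model and the t-th is held out) can be sketched as follows. This is a minimal illustration only: the function name `mask_edges`, the toy random networks, and the use of NumPy are assumptions, not taken from the paper's released R code.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_edges(networks, p):
    """Hold out each entry of each T x n x n adjacency tensor
    independently with probability p. Returns the training tensor
    (held-out entries set to NaN) and the boolean hold-out mask."""
    A = np.asarray(networks, dtype=float)
    mask = rng.random(A.shape) < p
    train = A.copy()
    train[mask] = np.nan
    return train, mask

# Toy dynamic network: T = 5 binary snapshots on n = 10 nodes.
nets = rng.integers(0, 2, size=(5, 10, 10))
train, mask = mask_edges(nets, p=0.05)

# Forward-prediction split from the table: for each t, the first
# t - 1 networks are training data and the t-th is the test network.
for t in range(3, nets.shape[0] + 1):
    train_nets, test_net = nets[: t - 1], nets[t - 1]
```

In the paper the held-out entries are then predicted by the fitted latent space model; here the NaN convention simply marks which entries a fitting routine should exclude from its likelihood.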