A VAE-based Framework for Learning Multi-Level Neural Granger-Causal Connectivity

Authors: Jiahe Lin, Huitian Lei, George Michailidis

TMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The performance of the proposed framework is evaluated on several synthetic data settings and benchmarked against existing approaches designed for individual system learning. The method is further illustrated on a real dataset involving time series data from a neurophysiological experiment and produces interpretable results.
Researcher Affiliation | Collaboration | Jiahe Lin (Machine Learning Research, Morgan Stanley); Huitian Lei (Lyft, Inc.); George Michailidis (Department of Statistics and Data Science, University of California, Los Angeles)
Pseudocode | Yes | Exhibit 1: Outline of steps for training under the two-layer VAE-based framework
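The exhibit itself is not reproduced in this summary. Purely as an illustration of the kind of VAE training step such an outline describes, here is a minimal numpy sketch of one iteration (encode, reparameterize, decode, negative-ELBO loss); all names, dimensions, and the linear encoder/decoder are hypothetical placeholders, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Hypothetical linear encoder producing the mean and log-variance
    # of the latent representation.
    return x @ W_mu, x @ W_logvar

def decode(z, W_dec):
    # Hypothetical linear decoder mapping latents back to trajectory space.
    return z @ W_dec

def neg_elbo(x, x_hat, mu, logvar):
    # Reconstruction error plus KL divergence to a standard normal prior.
    recon = np.mean((x - x_hat) ** 2)
    kl = -0.5 * np.mean(1.0 + logvar - mu ** 2 - np.exp(logvar))
    return recon + kl

# Toy dimensions: d observed features, k latent dimensions.
d, k = 8, 3
x = rng.normal(size=(16, d))
W_mu, W_logvar, W_dec = (rng.normal(scale=0.1, size=s)
                         for s in [(d, k), (d, k), (k, d)])

mu, logvar = encode(x, W_mu, W_logvar)
eps = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * logvar) * eps     # reparameterization trick
loss = neg_elbo(x, decode(z, W_dec), mu, logvar)
```

In a real implementation the gradient of `loss` with respect to the weights would drive the update; the sketch only shows the forward pass.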
Open Source Code | Yes | The code repository is available at https://github.com/georgemichailidis/vae-multi-level-neural-GC-official.
Open Datasets | Yes | The multi-subject EEG dataset is available at https://dataverse.tdl.org/dataverse/rsed2017, as provided by Trujillo et al. (2017).
Dataset Splits | Yes | For both datasets, we restrict the analysis to entities that have at least 40000 observations (total number of time points), and the whole trajectory is further partitioned into training/validation data, with the latter having 2000 time points.
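The quoted split rule (keep entities with at least 40000 time points; hold out the last 2000 time points for validation) can be sketched as follows; the function name and the use of a trailing slice are assumptions, since the excerpt does not say exactly how the partition is implemented:

```python
import numpy as np

MIN_OBS = 40000   # keep only entities with at least this many time points
N_VALID = 2000    # validation set: trailing 2000 time points

def split_trajectory(traj):
    """Partition one entity's trajectory into training/validation segments."""
    if len(traj) < MIN_OBS:
        return None                       # entity excluded from the analysis
    return traj[:-N_VALID], traj[-N_VALID:]

traj = np.arange(45000, dtype=float)      # toy trajectory with 45000 points
train, valid = split_trajectory(traj)
```

This keeps the temporal ordering intact, so the validation block always follows the training block in time.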
Hardware Specification | No | The paper does not provide specific details about the hardware used for its experiments, such as GPU/CPU models or memory amounts.
Software Dependencies | No | The paper references neural network components (MLPs, LSTMs, GNNs) and a GitHub code repository, which implies certain dependencies such as PyTorch. However, it does not state version numbers for any key software components or libraries.
Experiment Setup | Yes | Input: observed trajectories {x[1], ..., x[M]}, hyperparameters. ... all the MLP blocks used in the Trajectory2Graph operations are kept simple with only a single sub-block; the hidden dimension is set at 128 or 256, depending on the exact experiment. ... we use the same set of hyperparameters as the ones in earlier experiments with much larger sample sizes (1e4).
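The quoted setup describes MLP blocks with a single sub-block and a hidden dimension of 128 or 256. A minimal numpy sketch of such a one-sub-block MLP (linear, ReLU, linear) is shown below; the class name, initialization scale, and ReLU activation are assumptions, since the excerpt does not specify them:

```python
import numpy as np

class SimpleMLP:
    """One-sub-block MLP: linear -> ReLU -> linear, hidden dim 128 or 256."""

    def __init__(self, d_in, d_out, hidden=128, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(d_in, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(scale=0.1, size=(hidden, d_out))
        self.b2 = np.zeros(d_out)

    def __call__(self, x):
        h = np.maximum(x @ self.W1 + self.b1, 0.0)   # ReLU activation
        return h @ self.W2 + self.b2

mlp = SimpleMLP(d_in=16, d_out=4, hidden=128)
out = mlp(np.ones((2, 16)))
```

Keeping the block to a single hidden layer matches the "simple, one sub-block" description; only the hidden width (128 vs 256) varies across experiments.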