Covariate-dependent Graphical Model Estimation via Neural Networks with Statistical Guarantees

Authors: Jiahe Lin, Yikai Zhang, George Michailidis

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The performance of the proposed method is evaluated on several synthetic data settings and benchmarked against existing approaches. The method is further illustrated on real datasets involving data from neuroscience and finance, respectively, and produces interpretable results. [...] 4 Synthetic Data Experiments [...] 5 Real Data Experiments
Researcher Affiliation | Collaboration | Yikai Zhang, Machine Learning Research, Morgan Stanley; George Michailidis, Department of Statistics and Data Science, University of California, Los Angeles. E-mail: EMAIL.
Pseudocode | Yes | Exhibit 1: DNN-based Covariate-dependent Graphical Model (DNN-CGM) Learning Pipeline
Open Source Code | Yes | The code repository containing all the implementation is available at https://github.com/GeorgeMichailidis/covariate-dependent-graphical-model.
Open Datasets | Yes | We consider a dataset from the Human Connectome Project analyzed in Lee et al. (2023), comprising resting-state fMRI scans for 549 subjects. [...] The S&P 100 Index constituent dataset can be collected from Yahoo! Finance, with the list of tickers corresponding to the constituents available through Wikipedia.
Dataset Splits | Yes | For all settings, we train the model with β(·) parameterized with an MLP on 10,000 samples, and evaluate it on a test set of size 1,000; [...] Data is further split into train/val/test periods that respectively span 2001-2017, 2018-2019, and 2020 onwards.
Hardware Specification | Yes | All experiments are done on an NVIDIA RTX A5000 GPU.
Software Dependencies | No | The paper mentions relying on the R package huge and PyTorch, but does not provide specific version numbers for these software components; for instance, it does not state 'PyTorch 1.9' or 'huge 1.2.3'.
Experiment Setup | Yes | Table 7 lists hyper-parameters for the MLPs and model training, with columns: hidden layer size / dropout, learning rate, scheduler type, scheduler stepsize (milestones) / decay, epochs. Row G1: [128, 64], [128] / 0.3 | 0.0005 | StepLR | 20 / 0.25 | 50.
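The chronological train/val/test split quoted in the Dataset Splits row (2001-2017 / 2018-2019 / 2020 onwards) amounts to a simple year-based assignment. A minimal sketch follows; the function name `assign_split` is illustrative and not taken from the paper's code.

```python
def assign_split(year: int) -> str:
    """Assign an observation to a split by calendar year, following the
    periods quoted above: train 2001-2017, val 2018-2019, test 2020 on."""
    if 2001 <= year <= 2017:
        return "train"
    if 2018 <= year <= 2019:
        return "val"
    if year >= 2020:
        return "test"
    raise ValueError(f"year {year} predates the sample period")

# e.g. assign_split(2010) -> "train", assign_split(2019) -> "val"
```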
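For the G1 row of Table 7 (learning rate 0.0005, StepLR with step size 20 and decay 0.25, 50 epochs), the implied learning-rate schedule can be sketched without any deep-learning dependency. The helper `steplr` below is illustrative; the paper's pipeline would use PyTorch's `torch.optim.lr_scheduler.StepLR`, which applies the same rule.

```python
def steplr(base_lr: float, step_size: int, gamma: float, epoch: int) -> float:
    """Learning rate at a given epoch under a StepLR schedule:
    the base rate is multiplied by `gamma` once every `step_size` epochs."""
    return base_lr * gamma ** (epoch // step_size)

# Schedule for Table 7 row G1: lr 0.0005, step size 20, decay 0.25, 50 epochs.
schedule = [steplr(0.0005, 20, 0.25, e) for e in range(50)]
# epochs 0-19 -> 0.0005, epochs 20-39 -> 0.000125, epochs 40-49 -> 3.125e-05
```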