Rate-Agnostic (Causal) Structure Learning
Authors: Sergey Plis, David Danks, Cynthia Freeman, Vince Calhoun
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We apply these algorithms to data from simulations to gain insight into the challenge of undersampling. We finish in Section 4 by exploring their performance on synthetic data. |
| Researcher Affiliation | Academia | Sergey Plis, The Mind Research Network, Albuquerque, NM, EMAIL; David Danks, Carnegie-Mellon University, Pittsburgh, PA, EMAIL; Cynthia Freeman, The Mind Research Network, CS Dept., University of New Mexico, Albuquerque, NM, EMAIL; Vince Calhoun, The Mind Research Network, ECE Dept., University of New Mexico, Albuquerque, NM, EMAIL |
| Pseudocode | Yes | a: RASLre algorithm |
| Open Source Code | No | The paper does not contain an explicit statement about releasing code, nor does it provide a link to a code repository. |
| Open Datasets | No | The paper uses 'synthetic data' and 'simulated graphs' generated for the experiments (e.g., 'generated 100 random G1'), but it does not refer to a publicly available dataset, nor does it provide access information for one. |
| Dataset Splits | No | The paper discusses the use of synthetic and simulated data but does not specify explicit training, validation, or test dataset splits. |
| Hardware Specification | No | The paper does not specify any hardware details such as CPU/GPU models, memory, or specific computing environments used for running the experiments. |
| Software Dependencies | No | The paper mentions SVAR and VAR models, citing a book, but it does not name any software packages or version numbers used for implementation. |
| Experiment Setup | No | The paper describes aspects of data generation (e.g., 'generated a random transition matrix A by sampling weights... and controlling system stability'), but it does not provide specific experimental setup details such as hyperparameters for the SVAR optimization or other model-specific training settings. |
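The quoted data-generation step (sampling a random transition matrix A and 'controlling system stability') can be sketched as below. This is a minimal illustration, not the paper's actual procedure: the sparsity level, weight range, and the stability criterion (spectral radius below 1, enforced by rescaling) are assumptions I am supplying, since the paper does not report these settings.

```python
import numpy as np

def random_stable_transition_matrix(n_nodes, density=0.3, radius=0.9, rng=None):
    """Sample a sparse random VAR(1) transition matrix and rescale it so its
    spectral radius is at most `radius` < 1, a standard stability condition.
    The density and target radius are illustrative choices, not values
    taken from the paper."""
    rng = np.random.default_rng(rng)
    # Random sparse support for the causal edges.
    mask = rng.random((n_nodes, n_nodes)) < density
    A = rng.uniform(-1.0, 1.0, size=(n_nodes, n_nodes)) * mask
    # Rescale so the largest eigenvalue magnitude is `radius` (stability).
    rho = np.max(np.abs(np.linalg.eigvals(A)))
    if rho > 0:
        A *= radius / rho
    return A

def simulate_var(A, n_steps, noise_std=1.0, rng=None):
    """Simulate x_t = A x_{t-1} + Gaussian noise for a VAR(1) system."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    X = np.zeros((n_steps, n))
    for t in range(1, n_steps):
        X[t] = A @ X[t - 1] + rng.normal(scale=noise_std, size=n)
    return X

A = random_stable_transition_matrix(8, rng=0)
X = simulate_var(A, n_steps=1000, rng=1)
```

Undersampled data, as studied in the paper, would then correspond to keeping only every u-th row of `X`.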