DyCAST: Learning Dynamic Causal Structure from Time Series

Authors: Yue Cheng, Bochen Lyu, Weiwei Xing, Zhanxing Zhu

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the efficacy of DyCAST through extensive experiments on both synthetic and real-world datasets. For synthetic data, we perform a series of simulation experiments with known ground truth. To demonstrate the broad applicability of our method, we apply it to two real-world datasets: NetSim (Smith et al., 2011) and CausalTime (Cheng et al., 2024b). Additional experiments on variable counts, sequence lengths, and noise robustness, along with ablation studies, are provided in Appendix C.
Researcher Affiliation | Collaboration | 1Beijing Jiaotong University, 2DataCanvas, 3University of Southampton
Pseudocode | No | The paper describes the methodology using mathematical equations and prose, but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about releasing code or a link to a source code repository for the described methodology.
Open Datasets | Yes | We evaluate the efficacy of DyCAST through extensive experiments on both synthetic and real-world datasets. ... we apply it to two real-world datasets: NetSim (Smith et al., 2011) and CausalTime (Cheng et al., 2024b). ... We also demonstrate DyCAST's ability to detect dynamic nonlinear interactions using the Human3.6M dataset, detailed in Appendix E. ... In this section, we apply DyCAST to the DREAM-3 network inference challenge... (Prill et al., 2010).
Dataset Splits | No | The paper describes the generation process for synthetic data, specifying parameters like sample size (N=500), time steps (T=7), and variables (d=5). For real-world datasets like NetSim and CausalTime, it mentions their characteristics (e.g., '50 independent time series recordings for d = {5, 10, 15} nodes over 200 time steps' for NetSim) but does not provide specific train/test/validation splits for any of the datasets used.
Hardware Specification | Yes | We run all experiments on an NVIDIA GeForce RTX 4090 GPU.
Software Dependencies | No | The paper mentions using the Adam algorithm (Kingma & Ba, 2015) but does not provide specific version numbers for any software libraries, programming languages, or other dependencies.
Experiment Setup | Yes | We show the detailed settings of hyper-parameters including learning rate, hidden dimensions, and stable matrix scale factor. ... Table 4: Detailed hyper-parameter settings of all networks on all datasets. ... For the sparsity terms λ1 and λ2, we adopt the values reported in DYNOTEARS (Pamfil et al., 2020) to ensure a fair comparison. For the remaining parameters, r and γ, which are specific to DyCAST, we conducted experiments on the simulated dataset, using the F1 score as the evaluation metric.
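The setup row above describes tuning the DyCAST-specific parameters r and γ on simulated data by edge-recovery F1 score. A minimal sketch of that selection loop is below; the `fit_fn` interface, the grid values, and the 0.3 binarization threshold are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def edge_f1(W_true, W_est, thresh=0.3):
    """F1 score for causal edge recovery: binarize both weighted
    adjacency matrices at |w| > thresh and compare edge sets."""
    true_edges = np.abs(W_true) > thresh
    est_edges = np.abs(W_est) > thresh
    tp = np.sum(true_edges & est_edges)
    fp = np.sum(~true_edges & est_edges)
    fn = np.sum(true_edges & ~est_edges)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def select_r_gamma(fit_fn, W_true, r_grid, gamma_grid):
    """Hypothetical grid search: fit_fn(r=..., gamma=...) is assumed to
    return an estimated adjacency matrix; pick the (r, gamma) pair whose
    estimate scores the highest F1 against the known ground truth."""
    return max(
        ((r, g) for r in r_grid for g in gamma_grid),
        key=lambda p: edge_f1(W_true, fit_fn(r=p[0], gamma=p[1])),
    )
```

On synthetic data the ground-truth graph is known, so this metric directly measures structure recovery rather than predictive fit.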