Causal Discovery from Conditionally Stationary Time Series

Authors: Carles Balsells-Rodas, Xavier Sumba, Tanmayee Narendra, Ruibo Tu, Gabriele Schweikert, Hedvig Kjellström, Yingzhen Li

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical experiments on nonlinear particle interaction data and gene regulatory networks demonstrate SDCI's superior performance over baseline causal discovery methods. Improved results over non-causal RNNs on modeling NBA player movements demonstrate the potential of our method and motivate the use of causality-driven methods for forecasting.
Researcher Affiliation | Academia | Imperial College London; University of Dundee; KTH Royal Institute of Technology. Correspondence to: Carles Balsells-Rodas <EMAIL>.
Pseudocode | No | The paper describes the methods and algorithms in text and mathematical formulations but does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code for our experiments is available at https://github.com/charlio23/SDCI.
Open Datasets | Yes | We evaluate SDCI on spring data adapted from Kipf et al. (2018) and Löwe et al. (2022)... We simulate gene expression data using DynGen (Cannoodt et al., 2021)... We consider NBA player movements (Linou, 2016), a real-world multi-agent trajectory dataset... Data extracted from the following code repository: https://github.com/linouk23/NBA-Player-Movements, last accessed 2022-09-28.
Dataset Splits | Yes | For spring data, we generate 10,000 samples of each setting for training the models. For testing, we compute all the metrics using 100 samples... In our experiments, we consider sequences of up to 200 time-steps (T = 100 for reconstruction and the remaining 100 steps for prediction), which gives a total training dataset of 150K samples and a test set of 6,380 samples.
Hardware Specification | Yes | Our method has been implemented with PyTorch (Paszke et al., 2019), and the experiments have been performed on NVIDIA RTX 2080 Ti GPUs.
Software Dependencies | No | The paper mentions PyTorch (Paszke et al., 2019) but does not give a version number for PyTorch or for any other software dependency; the citation refers to the paper describing PyTorch, not to a version of the software itself.
Experiment Setup | Yes | All SDCI and ACD (Löwe et al., 2022) models have been trained using the following training scheme. Following Kipf et al. (2018), the models are trained using the Adam optimizer (Kingma & Ba, 2015). In all datasets, the learning rate of the edge-label encoder is 5×10^-4 for variable graphs and 5×10^-3 for fixed graphs. The learning rate of the decoder is 5×10^-4. Learning-rate decay is applied with a factor of 0.5 every 200 epochs. The variance of the decoder's Gaussian distribution is σ² = 5×10^-5. The temperature term of the edge-label encoder, τ, is set to 0.5. The state-encoder temperature γ follows a schedule similar to Ansari et al. (2021), which prevents state collapse (i.e., the model ignoring states): we first set γ = 5 and decrease the temperature every epoch by a factor of 0.8 until γ = 0.5.
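The hyperparameters and schedules quoted above can be collected into a short sketch. This is an illustrative reconstruction, not code from the official SDCI repository: all names (`gamma_at_epoch`, `lr_at_epoch`, the constants) are hypothetical, and only the numeric values come from the paper's stated setup.

```python
# Hypothetical sketch of the training schedule described in the paper.
# Values are taken from the quoted experiment setup; names are illustrative.

GAMMA_INIT = 5.0     # initial state-encoder temperature
GAMMA_MIN = 0.5      # annealing floor (prevents state collapse)
GAMMA_DECAY = 0.8    # per-epoch multiplicative decay
TAU = 0.5            # edge-label encoder temperature (fixed)

LR_ENCODER_VARIABLE = 5e-4  # edge-label encoder, variable graphs
LR_ENCODER_FIXED = 5e-3     # edge-label encoder, fixed graphs
LR_DECODER = 5e-4
LR_DECAY_FACTOR = 0.5       # halve the learning rate...
LR_DECAY_EVERY = 200        # ...every 200 epochs
DECODER_VARIANCE = 5e-5     # sigma^2 of the decoder Gaussian


def gamma_at_epoch(epoch: int) -> float:
    """State-encoder temperature after `epoch` epochs of 0.8x decay,
    clipped below at 0.5."""
    return max(GAMMA_MIN, GAMMA_INIT * GAMMA_DECAY ** epoch)


def lr_at_epoch(base_lr: float, epoch: int) -> float:
    """Step learning-rate decay: multiply by 0.5 every 200 epochs."""
    return base_lr * LR_DECAY_FACTOR ** (epoch // LR_DECAY_EVERY)
```

Under this schedule γ starts at 5.0, drops to 4.0 after one epoch, and reaches its 0.5 floor after roughly a dozen epochs, while each learning rate is halved at epochs 200, 400, and so on.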