On the Identification of Temporal Causal Representation with Instantaneous Dependence

Authors: Zijian Li, Yifan Shen, Kaitao Zheng, Ruichu Cai, Xiangchen Song, Mingming Gong, Guangyi Chen, Kun Zhang

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 'Experimental results on simulation datasets illustrate that our method can identify the latent causal process. Furthermore, evaluations on human motion forecasting benchmarks indicate the effectiveness in real-world settings. Source code is available at https://github.com/DMIRLAB-Group/IDOL.' See also Sections 5.1 (Experiments on Simulation Data) and 5.2 (Experiments on Real-World Data): 'Quantitative Results: Experiment results of the simulation datasets are shown in Table 2.'
Researcher Affiliation | Academia | Carnegie Mellon University, Pittsburgh, PA, USA; Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE; Guangdong University of Technology, Guangzhou, China; The University of Melbourne
Pseudocode | No | The paper describes the model architecture and components (e.g., encoder, decoder, prior networks, sparsity regularization), includes Figure 3 as a 'framework' diagram, and provides architecture details in Table A4. However, it does not feature any explicitly labeled 'Pseudocode' or 'Algorithm' blocks with structured, step-by-step procedures.
Open Source Code | Yes | 'Source code is available at https://github.com/DMIRLAB-Group/IDOL.'
Open Datasets | Yes | 'To evaluate the effectiveness of our IDOL method in real-world scenarios, we conduct experiments on two human motion datasets: Human3.6M (Ionescu et al., 2014) and HumanEva-I (Sigal et al., 2010).'
Dataset Splits | Yes | 'Data Generation. We generate the simulated time series data with the fixed latent causal process as introduced in Equations (1)-(2) and Figure 1 (c). [...] The total size of the dataset is 100,000, with 1,024 samples designated as the validation set. The remaining samples are the training set.' (Appendix F.1.1) [...] 'For each dataset, we select several motions and partition them into training, validation, and test sets.' (Section 5.2.1)
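The reported simulation split (100,000 samples total, 1,024 held out for validation, the remainder for training) can be sketched as follows. The shuffling step and the seed value are illustrative assumptions; the paper only states the sizes, not the selection procedure:

```python
import random

# Sizes stated in Appendix F.1.1: 100,000 samples total,
# 1,024 designated as the validation set, the rest as training.
TOTAL, VAL = 100_000, 1_024

rng = random.Random(0)            # seed choice is illustrative, not from the paper
indices = list(range(TOTAL))
rng.shuffle(indices)              # assumed random assignment of samples to splits
val_idx, train_idx = indices[:VAL], indices[VAL:]

print(len(train_idx), len(val_idx))  # 98976 1024
```

This yields a 98,976 / 1,024 train/validation partition with no overlap between the two index sets.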
Hardware Specification | No | The paper states: 'Specifically, we use a consistent hardware setup including the same GPU, CPU, and memory configurations for each model to ensure comparability.' (Appendix G.1). However, it does not provide specifics such as the GPU or CPU models used or the amount of memory.
Software Dependencies | No | The paper mentions utilizing 'publicly released code for TDRL and iCRITIS' and implementing G-CaRL 'based on the paper' (Appendix E.3.1). It also references 'MLPs' and 'Leaky ReLU' (Appendix F.1.1). However, no specific software names with version numbers (e.g., PyTorch 1.9, Python 3.8) are provided to replicate the experimental environment.
Experiment Setup | Yes | 'Finally, the total loss of the IDOL model can be formalized as: L_total = L_r + αL_K + βL_S (Eq. 12), where α, β denote the hyper-parameters.' (Section 4.3) [...] 'For all datasets, we set sequence length as 5 and transition lag as 1.' (Appendix F.1.1) [...] 'We repeat each experiment over 3 random seeds and publish the average performance.' (Appendix F.2.1) [...] 'Architecture details of the proposed method are shown in Table A4.' (Appendix E.3)
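The quoted loss combination from Eq. (12) is a weighted sum of three scalar terms. The sketch below shows the arithmetic only; the term values and the hyper-parameter settings are placeholders, not numbers reported in the paper:

```python
def total_loss(l_r: float, l_k: float, l_s: float,
               alpha: float, beta: float) -> float:
    """Combine the loss terms as in Eq. (12): L_total = L_r + alpha*L_K + beta*L_S.

    l_r: reconstruction loss, l_k: KL term, l_s: sparsity term.
    All arguments here are illustrative scalars.
    """
    return l_r + alpha * l_k + beta * l_s

# Illustrative call with made-up term values and weights:
print(total_loss(l_r=1.0, l_k=0.5, l_s=0.2, alpha=0.1, beta=0.01))  # 1.052
```

In practice the weights α and β would be tuned as hyper-parameters, as the paper states in Section 4.3.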