CSG-ODE: ControlSynth Graph ODE For Modeling Complex Evolution of Dynamic Graphs
Authors: Zhiqiang Wang, Xiaoyi Wang, Jianqing Liang
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our model on five diverse datasets: two synthetic datasets, Springs (Kipf et al., 2018) and Charged (Kipf et al., 2018), and three real-world datasets CMU motion capture data (walk capture from subject 35 and jump capture from subject 118) (CMU, 2003), and the PEMS08 traffic flow dataset (Song et al., 2020). Further details are provided in Appendix D. ... Table 1 presents the MSE for interpolation tasks across different datasets and methods. ... Table 2 shows the MSE for extrapolation tasks... Furthermore, we analyzed the potential sources of error in CSG-ODE. ... To further analyze the components of the model, we conduct an ablation study... To investigate the impact of the hyperparameter α... We performed a comparison experiment of SCSG-ODE with walk capture data from subject 35 with baselines. |
| Researcher Affiliation | Academia | 1Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, School of Computer and Information Technology, Shanxi University, Taiyuan 030006, Shanxi, China. Correspondence to: Jianqing Liang <EMAIL>. |
| Pseudocode | Yes | We summarize the learning algorithm of our CSG-ODE in Algorithm 1. Algorithm 1 Training Algorithm of CSG-ODE |
| Open Source Code | No | The paper does not provide an explicit statement about the release of source code for the methodology, nor does it include a link to a code repository. |
| Open Datasets | Yes | We evaluate our model on five diverse datasets: two synthetic datasets, Springs (Kipf et al., 2018) and Charged (Kipf et al., 2018), and three real-world datasets CMU motion capture data (walk capture from subject 35 and jump capture from subject 118) (CMU, 2003), and the PEMS08 traffic flow dataset (Song et al., 2020). Further details are provided in Appendix D. |
| Dataset Splits | Yes | The subsampling ratio is set to 40%, 60%, and 80%, and the observations are independent across nodes. ... We split the time range into two segments: (t0, tn1) and (tn1, tn). ... We divide the walk experiments into non-overlapping training (16 trials) and test sets (7 trials), and similarly, the jump experiments into non-overlapping training (21 trials) and test sets (9 trials). ... for the training data, we divide the first 11,940 time steps into groups of 60 time steps, resulting in 199 non-overlapping subsequences. Observations for each node are randomly sampled from a uniform distribution U(40, 52), with n observations selected from each group of 60 time steps. For the test set, data from time steps [11,941,17,820] is selected, and every 120 time steps are divided into a group, generating 49 non-overlapping subsequences. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper names the ODE solver (Euler) and optimizer (Adam), and refers to architectures such as GNN, VAE, ODE, and MLP, but does not provide specific version numbers for any software libraries, frameworks, or solvers used in the implementation. |
| Experiment Setup | Yes | In Table 8, we report the hyperparameters used for all datasets in the experiments. Per-dataset values (Spring / Charged / Motion-walk / Motion-jump / PEMS08): batch 256 / 256 / 8 / 32 / 4; k 64 / 64 / 64 / 64 / 16; q 32 / 32 / 32 / 32 / 8; augment dim 64 / 64 / 64 / 64 / 0; subnetworks width 128 / 128 / 128 / 128 / 64. Shared values: learning rate 0.0005; dropout 0.2; alpha 0.5; M 2; clip 10; epochs 50; h 16; L2 0.001; subnetworks depth 1; ODE solver Euler; optimizer Adam. |
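The PEMS08 splitting scheme quoted in the Dataset Splits row can be sketched as follows. This is a minimal illustration, not the authors' code; the 0-indexed step convention and the exact per-node subsampling routine are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

TRAIN_STEPS = 11_940            # first 11,940 time steps form the training range
GROUP_TRAIN = 60                # training subsequence length
GROUP_TEST = 120                # test subsequence length
TEST_RANGE = (11_940, 17_820)   # paper's test steps [11,941, 17,820], here 0-indexed

# Training: non-overlapping groups of 60 steps -> 199 subsequences.
train_groups = [np.arange(s, s + GROUP_TRAIN)
                for s in range(0, TRAIN_STEPS, GROUP_TRAIN)]
assert len(train_groups) == 199

# Per node, draw the number of observations n ~ U(40, 52) and keep n of the
# 60 steps in each group (uniform subsampling here is an assumption).
def subsample(group, rng):
    n = rng.integers(40, 53)    # integers() excludes the upper bound, so 53
    return np.sort(rng.choice(group, size=n, replace=False))

# Test: groups of 120 steps over the remaining range -> 49 subsequences.
test_groups = [np.arange(s, s + GROUP_TEST)
               for s in range(TEST_RANGE[0], TEST_RANGE[1], GROUP_TEST)]
assert len(test_groups) == 49
```

The group counts follow directly from the quoted numbers: 11,940 / 60 = 199 and (17,820 − 11,940) / 120 = 49.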
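For reimplementation, the hyperparameters reported in the Experiment Setup row can be collected into per-dataset configurations. The values are transcribed from the paper's Table 8; the dictionary structure and key names are assumptions for illustration:

```python
# Settings shared across all five datasets (Table 8).
shared = dict(lr=5e-4, dropout=0.2, alpha=0.5, M=2, clip=10,
              epochs=50, h=16, l2=1e-3, subnet_depth=1,
              ode_solver="euler", optimizer="adam")

# Dataset-specific settings: batch size, k, q, augment dim, subnetwork width.
per_dataset = {
    "Springs":     dict(batch=256, k=64, q=32, augment_dim=64, subnet_width=128),
    "Charged":     dict(batch=256, k=64, q=32, augment_dim=64, subnet_width=128),
    "Motion-walk": dict(batch=8,   k=64, q=32, augment_dim=64, subnet_width=128),
    "Motion-jump": dict(batch=32,  k=64, q=32, augment_dim=64, subnet_width=128),
    "PEMS08":      dict(batch=4,   k=16, q=8,  augment_dim=0,  subnet_width=64),
}

# Merge shared and per-dataset settings into one config per dataset.
configs = {name: {**shared, **ds} for name, ds in per_dataset.items()}
```

Note that PEMS08 uses smaller latent sizes (k=16, q=8), no ODE dimension augmentation, and a narrower subnetwork than the other datasets.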