SynEVO: A neuro-inspired spatiotemporal evolutional framework for cross-domain adaptation

Authors: Jiayue Liu, Zhongchao Yi, Zhengyang Zhou, Qihe Huang, Kuo Yang, Xu Wang, Yang Wang

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show that SynEVO improves the generalization capacity by up to 42% under cross-domain scenarios, and SynEVO provides a paradigm of NeuroAI for knowledge transfer and adaptation. Code available at https://github.com/Rodger-Lau/SynEVO. Extensive experiments show the collective intelligence increases the model generalization capacity under both source and temporal shifts by 0.5% to 42%, including few-shot and zero-shot transfer, and empirically validate the convergence of progressive curriculum learning.
Researcher Affiliation | Academia | 1) University of Science and Technology of China (USTC), Hefei, China; 2) Suzhou Institute for Advanced Research, USTC, Suzhou, China; 3) State Key Laboratory of Resources and Environmental Information System, Beijing, China. Correspondence to: Zhengyang Zhou <EMAIL>, Yang Wang <EMAIL>.
Pseudocode | No | The paper describes its methods in prose and mathematical equations but does not include explicit pseudocode or algorithm blocks.
Open Source Code | Yes | Code available at https://github.com/Rodger-Lau/SynEVO.
Open Datasets | Yes | We collect and process four datasets for our experiments: 1) NYC (New York City, 2016)... 2) CHI (Chicago, 2023)... 3) SIP... 4) SD (Liu et al., 2023)... References: Chicago. CHI dataset. Website, 2023. https://data.cityofchicago.org/browse. New York City. NYC dataset. Website, 2016. https://www.nyc.gov/site/tlc/about/tlc-trip-record-data.page. Liu, X., Xia, Y., Liang, Y., Hu, J., Wang, Y., Bai, L., Huang, C., Liu, Z., Hooi, B., and Zimmermann, R. LargeST: A benchmark dataset for large-scale traffic forecasting. In Advances in Neural Information Processing Systems, 2023.
Dataset Splits | Yes | We split the datasets into training, validation, and testing sets with a ratio of 7:1:2.
Hardware Specification | Yes | We run STGODE, STTN, and CMuST on SD on an NVIDIA A100-PCIE-40GB, and all other experiments on a Tesla V100-PCIE-16GB, adapting the model scale to the GPU version.
Software Dependencies | No | For curriculum-guided task reordering, the Adam optimizer (Kingma, 2014) is applied with an initial learning rate of 0.01 and weight decay of 0.001 for the initial learnable model Mc. For the complementary dual learners, we use mean squared error (MSE) as the criterion D of the personality extractor. The paper mentions optimizers and loss functions but does not specify software library versions (e.g., PyTorch, TensorFlow, or Python version) for the implementation.
Experiment Setup | Yes | For curriculum-guided task reordering, the Adam optimizer (Kingma, 2014) is applied with an initial learning rate of 0.01 and weight decay of 0.001 for the initial learnable model Mc. For the complementary dual learners, we use mean squared error (MSE) as the criterion D of the personality extractor. For the Elastic Common Container, the loss criterion is the widely used masked MAE loss. Hyperparameter sensitivity analysis: ...optimal settings are κ = 1×10³ on all datasets; p0 = 0.5, λ0 = 0.05 on NYC and SIP; p0 = 1, λ0 = 0.1 on CHI; and p0 = 0.7, λ0 = 0.07 on SD.
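The masked MAE loss named in the experiment setup averages absolute errors only over positions with valid ground truth. A minimal framework-free sketch, assuming the common traffic-dataset convention that a zero label marks a missing reading (the report does not state the paper's masking convention):

```python
def masked_mae(preds, labels, null_val=0.0):
    """Mean absolute error over positions whose label is valid.

    Positions where the label equals `null_val` (a common encoding for
    missing sensor readings in traffic datasets) are excluded from the
    average, so absent measurements do not distort the loss.
    """
    errors = [abs(p - y) for p, y in zip(preds, labels) if y != null_val]
    if not errors:
        return 0.0  # no valid positions: define the loss as zero
    return sum(errors) / len(errors)
```

For example, `masked_mae([2.0, 3.0, 5.0], [1.0, 0.0, 4.0])` averages only the first and third positions, since the second label is masked out.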
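The 7:1:2 train/validation/test split reported above can be sketched as index ranges over the time axis. Chronological (non-shuffled) splitting is an assumption here, albeit the usual choice for spatiotemporal forecasting; the report only states the ratio:

```python
def split_indices(n, ratios=(0.7, 0.1, 0.2)):
    """Split n consecutive time steps into train/val/test index ranges.

    Uses a chronological split: the earliest 70% of steps for training,
    the next 10% for validation, and the final 20% for testing.
    """
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = range(0, n_train)
    val = range(n_train, n_train + n_val)
    test = range(n_train + n_val, n)
    return train, val, test
```

For 100 time steps, this yields 70 training, 10 validation, and 20 test indices, matching the 7:1:2 ratio.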