TimeDP: Learning to Generate Multi-Domain Time Series with Domain Prompts
Authors: Yu-Hao Huang, Chang Xu, Yueying Wu, Wu-Jun Li, Jiang Bian
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper's experiments demonstrate that the method outperforms baselines, providing state-of-the-art in-domain generation quality and strong unseen-domain generation capability. The 'Experiments' section details empirical evaluations, including performance tables (Table 1, Table 2, Table 3) comparing TimeDP with baselines using metrics such as Maximum Mean Discrepancy (MMD) and Kullback-Leibler (K-L) divergence across various real-world datasets. |
| Researcher Affiliation | Collaboration | 1) National Key Laboratory for Novel Software Technology, School of Computer Science, Nanjing University; 2) Microsoft Research Asia; 3) Peking University. The authors have affiliations with academic institutions (Nanjing University, Peking University) and an industry research lab (Microsoft Research Asia), indicating a collaborative effort. |
| Pseudocode | Yes | Algorithm 1: Training algorithm and Algorithm 2: Sampling with domain prompts. The paper explicitly includes two algorithm blocks labeled 'Algorithm 1' and 'Algorithm 2', detailing the training and sampling procedures respectively. |
| Open Source Code | No | The text is ambiguous or lacks a clear, affirmative statement of release. No specific repository link or explicit code release statement for the methodology described in this paper is provided. The link 'https://arxiv.org/abs/2501.05403' points to an extended version of the paper, not source code. |
| Open Datasets | Yes | The experiments are conducted on 12 datasets across four time series domains: Electricity, Solar, and Wind from the energy domain; Traffic, Taxi, and Pedestrian from the transport domain; Air Quality, Temperature, and Rain from the nature domain; and NN5, Fred-MD, and Exchange from the economic domain. All datasets are obtained via the GluonTS package and the Monash Time Series Forecasting Repository. |
| Dataset Splits | No | The paper mentions pre-processing datasets into 'non-overlapping uni-variate sequence slices with length in {24, 96, 168, 336}' and describes selecting 'few-shot samples' for unseen domain evaluation, but it does not provide specific percentages, sample counts, or detailed methodologies for training, validation, and test splits for all experiments. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper mentions utilizing 'a U-Net architecture for our denoising model' but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, or specific libraries) needed to replicate the experiment. |
| Experiment Setup | Yes | Models for each sequence length are trained for 50,000 steps using a batch size of 128 and a learning rate of 1e-4 with 1,000 warm-up steps. |
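The table above cites Maximum Mean Discrepancy as one of the paper's generation-quality metrics. The following is a minimal sketch of a biased MMD estimator with an RBF kernel, not the paper's exact implementation; the bandwidth choice `gamma = 1/d` is an assumption made here for illustration.

```python
import numpy as np

def mmd_rbf(x, y, gamma=None):
    """Biased (V-statistic) squared MMD between sample sets x and y,
    using an RBF kernel k(a, b) = exp(-gamma * ||a - b||^2).
    gamma defaults to 1/d, a common heuristic (an assumption here)."""
    if gamma is None:
        gamma = 1.0 / x.shape[1]

    def kernel(a, b):
        # Pairwise squared Euclidean distances between rows of a and b.
        sq = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
        return np.exp(-gamma * sq)

    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

rng = np.random.default_rng(0)
# Matched distributions should score lower than mismatched ones.
same = mmd_rbf(rng.normal(size=(200, 24)), rng.normal(size=(200, 24)))
diff = mmd_rbf(rng.normal(size=(200, 24)), rng.normal(3.0, 1.0, size=(200, 24)))
```

A lower MMD between generated and real sequence slices indicates the generator better matches the target distribution, which is how Tables 1-3 rank methods.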
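The reported training setup (50,000 steps, batch size 128, learning rate 1e-4, 1,000 warm-up steps) can be sketched as a linear warm-up schedule. The behavior after warm-up (held constant here) is an assumption, since the paper does not specify a decay rule.

```python
def lr_at_step(step, peak_lr=1e-4, warmup_steps=1_000):
    """Learning rate at a given training step: ramp linearly from
    peak_lr/warmup_steps up to peak_lr, then hold constant (assumed)."""
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    return peak_lr

# Ramp during the first 1,000 steps, then flat for the rest of training.
start_lr = lr_at_step(0)        # small initial rate
end_warmup = lr_at_step(999)    # reaches the peak at the warm-up boundary
mid_train = lr_at_step(25_000)  # constant afterwards
```

This kind of schedule is typically wired into the optimizer via a per-step callback (e.g., a lambda-based scheduler), applied over the full 50,000-step run.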