AdaWaveNet: Adaptive Wavelet Network for Time Series Analysis

Authors: Han Yu, Peikun Guo, Akane Sano

TMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on 10 datasets across 3 different tasks, including forecasting, imputation, and a newly established super-resolution task. The evaluations demonstrate the effectiveness of AdaWaveNet over existing methods in all three tasks, which illustrates its potential in various real-world applications.
Researcher Affiliation | Academia | Han Yu (EMAIL), Department of Electrical and Computer Engineering, Rice University; Peikun Guo (EMAIL), Department of Computer Science, Rice University; Akane Sano (EMAIL), Department of Electrical and Computer Engineering, Rice University
Pseudocode | No | The paper describes the lifting scheme procedures as "Split", "Update", and "Predict" stages (Section 3.2) and further details the Adaptive Wavelet Block steps (Section 4.2) using equations and descriptive text, but it does not include a clearly labeled pseudocode or algorithm block.
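The "Split", "Update", and "Predict" stages referenced above follow the classic lifting scheme. A minimal Haar-style sketch in Python (a generic illustration with fixed predict/update operators; the paper's adaptive transform learns these operators, which is not shown here):

```python
def lifting_forward(x):
    """One level of a Haar-style lifting transform on an even-length sequence."""
    # Split: lazy wavelet transform into even- and odd-indexed samples
    even, odd = x[0::2], x[1::2]
    # Predict: estimate each odd sample from its even neighbour; the residual is the detail
    detail = [o - e for e, o in zip(even, odd)]
    # Update: adjust the even samples with the detail to preserve the running average
    approx = [e + d / 2 for e, d in zip(even, detail)]
    return approx, detail

def lifting_inverse(approx, detail):
    """Invert the lifting steps in reverse order for perfect reconstruction."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    # Merge: interleave even and odd samples back into one sequence
    return [v for pair in zip(even, odd) for v in pair]
```

Each level halves the sequence length, and stacking levels on the approximation coefficients yields a multi-resolution decomposition; perfect reconstruction follows because every step is trivially invertible.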
Open Source Code | Yes | The code implemented for AdaWaveNet is available at https://github.com/comp-well-org/AdaWaveNet.
Open Datasets | Yes | There are 10 datasets used in this study, and the overall summary of the datasets information can be seen in Table 5. Also, the descriptions of each set are introduced in this section. ETT (Electricity Transformer Temperature): The ETT dataset (Zhou et al., 2021)... Electricity Load Diagrams: The Electricity Load Diagrams dataset, sourced from the UCI Machine Learning Repository (Asuncion & Newman, 2007)... Solar Energy Prediction: The Solar Energy Prediction dataset from the UCI Machine Learning Repository (Asuncion & Newman, 2007)... PTB-XL: The PTB-XL dataset (Wagner et al., 2020)... Sleep-EDFE: The Sleep-EDF (expanded) dataset (Kemp et al., 2000)... CLAS: The CLAS dataset (Markova et al., 2019)...
Dataset Splits | Yes | Table 5: Details of datasets used in the experiments. Data Split means the number of samples split into the train, validation, and test sets (e.g., ETTm1: (34465, 11521, 11521) for train, validation, and test, respectively). For PTB-XL, we follow the recommended splits of training and test sets, which results in a training/testing ratio of 8/1.
Hardware Specification | Yes | The environment used in this evaluation is AWS g5 instances with NVIDIA A10 GPUs.
Software Dependencies | Yes | All the modules are implemented in PyTorch 1.11 with Python 3.10.
Experiment Setup | Yes | For the design of AdaWave blocks, we adjust the depth of transformations in a range of 1 to 5, whereas the kernel size of the utilized convolutional kernels is adjusted based on the datasets. The number of clusters used in the grouped linear model varies from 1 to 9 depending on the channels of the signals. Also, we adjust the learning rate for each set of experiments to achieve better convergence. The detailed hyperparameters for each dataset are listed in Table 7. The batch size is fixed at 16.
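The quoted ranges can be collected into a hypothetical search-space sketch (key names are illustrative assumptions; the exact per-dataset values are given in the paper's Table 7, which is not reproduced here):

```python
# Hypothetical hyperparameter search space assembled from the ranges quoted above.
# Key names are assumptions; the paper fixes the actual per-dataset values in Table 7.
search_space = {
    "transform_depth": list(range(1, 6)),    # depth of AdaWave transformations: 1 to 5
    "num_clusters": list(range(1, 10)),      # grouped linear model clusters: 1 to 9, per channel count
    "kernel_size": "per-dataset",            # convolutional kernel size, chosen per dataset
    "learning_rate": "tuned per experiment", # adjusted for convergence (values not quoted)
    "batch_size": 16,                        # fixed across all experiments
}
```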