GluonTS: Probabilistic and Neural Time Series Modeling in Python

Authors: Alexander Alexandrov, Konstantinos Benidis, Michael Bohlke-Schneider, Valentin Flunkert, Jan Gasthaus, Tim Januschowski, Danielle C. Maddix, Syama Rangapuram, David Salinas, Jasper Schulz, Lorenzo Stella, Ali Caner Türkmen, Yuyang Wang

JMLR 2020

Reproducibility Variable | Result | LLM Response
Research Type: Experimental — The paper reports empirical results: Table 1 displays the mean quantile loss of various pre-built models in GluonTS on 10 open-source datasets (the full dataset list is quoted under Open Datasets below).
Researcher Affiliation: Industry — Alexander Alexandrov EMAIL ... Yuyang Wang EMAIL; Amazon Research, Charlottenstrasse 4, 10969 Berlin, Germany; 1900 University Ave., East Palo Alto, CA 94303, US.
Pseudocode: No — The paper describes various models and components, such as DeepAR, MQ-RNN, and DeepState, and illustrates their structure in Figure 1b, but it does not present any structured pseudocode or algorithm blocks.
Open Source Code: Yes — "We fill this gap with GluonTS [1], a deep learning based library based on the Gluon API [2] of the MXNet deep learning framework." [1] https://github.com/awslabs/gluon-ts
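GluonTS consumes datasets as iterables of dictionaries, each carrying at least a "start" timestamp and a "target" array of observations (the real library wraps such entries in its own dataset classes). A minimal plain-Python sketch of that entry format, with no GluonTS dependency; the helper name `make_entry` is an illustration, not a library function:

```python
def make_entry(start, values, item_id=None):
    """Build one time-series entry in the dict format GluonTS consumes:
    a "start" timestamp plus a "target" list of observed values."""
    entry = {"start": start, "target": list(values)}
    if item_id is not None:
        entry["item_id"] = item_id  # optional identifier for the series
    return entry

# A toy "hourly electricity" dataset with two customers.
dataset = [
    make_entry("2014-01-01 00:00:00", [3.5, 4.1, 3.9, 4.4], item_id="customer_0"),
    make_entry("2014-01-01 00:00:00", [1.2, 1.0, 1.3, 1.1], item_id="customer_1"),
]
```

Keeping each series as a self-describing dict is what lets the library apply the same models to datasets with very different frequencies and lengths.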
Open Datasets: Yes — Table 1 displays the mean quantile loss of various pre-built models in GluonTS on 10 open-source datasets: hourly electricity consumption of 370 customers (Dheeru and Karra Taniskidou, 2017); daily exchange rates between 8 currencies used in Lai et al. (2017); 6 datasets from the M4 competition (Makridakis et al., 2018); hourly photo-voltaic production of 137 stations in Alabama used in Lai et al. (2017); and hourly occupancy rates (between 0 and 1) of 963 car lanes of San Francisco bay area freeways (Dheeru and Karra Taniskidou, 2017).
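The "mean quantile loss" metric behind Table 1 is the standard pinball loss. A minimal sketch of one common definition; the exact aggregation and normalization used in the paper's tables may differ, and the normalized variant below is an assumption labeled as such:

```python
def quantile_loss(y_true, y_pred, q):
    """Pinball (quantile) loss at level q, summed over a forecast horizon.

    Each term is q * (y - yhat) when the forecast undershoots (y >= yhat)
    and (1 - q) * (yhat - y) when it overshoots.
    """
    total = 0.0
    for y, yhat in zip(y_true, y_pred):
        diff = y - yhat
        total += q * diff if diff >= 0 else (q - 1) * diff
    return total

def mean_weighted_quantile_loss(y_true, y_pred, q):
    """Quantile loss normalized by the sum of absolute target values --
    one common way a "mean quantile loss" is reported (an assumption here)."""
    denom = sum(abs(y) for y in y_true)
    return quantile_loss(y_true, y_pred, q) / denom
```

A perfect forecast gives zero loss at every quantile level; asymmetric penalties at q != 0.5 are what make the metric probe the predicted distribution rather than only its median.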
Dataset Splits: No — The paper mentions that "The backtest package splits all time series in the dataset at a certain point in time, and uses the first part for training the model and the second part for evaluating the accuracy." However, it does not provide specific details on the split percentages, sample counts, or the exact "certain point in time" for the experiments reported in Table 1.
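The backtest split the quote describes, cutting every series at the same point in time, can be sketched in plain Python over the {"start", "target"} entry format. The cutoff index is illustrative; as noted above, the paper does not state the actual split points:

```python
def split_series(entry, cutoff):
    """Split one {"start", "target"} entry at index `cutoff`:
    observations before the cutoff form the training series, while the
    full series is kept for evaluation of the held-out tail."""
    train = dict(entry, target=entry["target"][:cutoff])
    test = dict(entry)  # unchanged; the part past `cutoff` is evaluated
    return train, test

def backtest_split(dataset, cutoff):
    """Apply the same time cutoff to every series in the dataset."""
    pairs = [split_series(e, cutoff) for e in dataset]
    return [t for t, _ in pairs], [s for _, s in pairs]
```

Splitting by time rather than by series keeps evaluation honest for forecasting: the model never sees any observation from the evaluation window of any series.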
Hardware Specification: No — The paper does not provide specific hardware details such as GPU models, CPU types, or other compute infrastructure used for running the experiments. It only mentions that GluonTS "can be run directly on a local machine" or scaled up through "Amazon SageMaker".
Software Dependencies: No — The paper mentions several software components, including "Python", the "Gluon API", and the "MXNet deep learning framework". It also refers to "scikit-learn" and "matplotlib". However, it does not provide specific version numbers for any of these software dependencies, which are necessary for reproducible descriptions.
Experiment Setup: Yes — The paper states that "the hyperparameters of each method are fixed across all datasets and the training is limited to 5000 gradient updates." This provides a specific training configuration regarding the number of gradient updates.
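Capping training at a fixed number of gradient updates (5000 in the paper) rather than a number of epochs can be sketched with a toy SGD loop; everything below, including the one-parameter squared-error objective, is illustrative and not GluonTS code:

```python
def train_capped(observations, max_updates=5000, lr=0.1):
    """Fit a scalar parameter to minimize squared error by SGD, cycling
    over the (non-empty) observations and stopping after at most
    `max_updates` gradient updates -- not after a number of epochs."""
    theta = 0.0
    updates = 0
    while updates < max_updates:
        for y in observations:
            if updates >= max_updates:
                return theta, updates
            grad = 2.0 * (theta - y)  # d/dtheta of (theta - y)**2
            theta -= lr * grad
            updates += 1
    return theta, updates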