Second-Order Non-Stationary Online Learning for Regression

Authors: Edward Moroshko, Nina Vaits, Koby Crammer

JMLR 2015

Reproducibility assessment (each item lists the variable, the result, and the LLM's supporting response):
Research Type: Experimental. In Section 7 we report results of simulations designed to highlight the properties of both algorithms, as well as the commonalities and differences between them. We evaluated our algorithms on four data sets, one synthetic and three real-world. The results are summarized in Figure 1. AROWR performs the worst on all data sets, as it converges very fast and thus is not able to track the changes in the data.
Researcher Affiliation: Academia. Edward Moroshko (EMAIL), Nina Vaits (EMAIL), Koby Crammer (EMAIL), Department of Electrical Engineering, The Technion, Israel Institute of Technology, Haifa 32000, Israel.
Pseudocode: Yes. Table 1: "Algorithms for stationary setting and their extension to non-stationary case"; Table 2: "ARCOR, LASER and CR-RLS algorithms".
Open Source Code: No. The paper does not explicitly state that source code for the described methodology is publicly available, nor does it provide any links to a code repository or mention code in supplementary materials. It discusses previous publications of the algorithms, but not a code release.
Open Datasets: Yes. The last real-world data set was taken from the Kaggle competition "Global Energy Forecasting Competition 2012 - Load Forecasting", available at http://www.kaggle.com/c/global-energy-forecasting-competition-2012-load-forecasting. This data set includes hourly demand for four and a half years from 20 different geographic regions, and similar hourly temperature readings from 11 zones, which we used as features x_t in R^11.
Dataset Splits: Yes. For the speech signal, the algorithms' parameters were tuned on 10% of the signal; the best parameter choice for each algorithm was then used to evaluate performance on the remaining signal. Similarly, for the load data set, the algorithms' parameters were tuned on 20% of the signal.
Hardware Specification: No. The paper mentions running simulations and evaluating algorithms, but does not specify any hardware details such as CPU models, GPU models, or memory used for these experiments.
Software Dependencies: No. The paper does not name specific software with version numbers (e.g., programming languages, libraries, or frameworks) used to implement or run the experiments.
Experiment Setup: No. The paper states: "For the synthetic data set the algorithms parameters were tuned using a single random sequence. For the speech signal the algorithms parameters were tuned on 10% of the signal, then the best parameter choices for each algorithm were used to evaluate the performance on the remaining signal. Similarly, for the load data set the algorithms parameters were tuned on 20% of the signal." While it describes how parameters were tuned, it does not state the concrete values of those parameters (e.g., learning rates, regularization constants) or other specific hyperparameters for ARCOR or LASER.
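The tuning protocol quoted above (pick parameters on a prefix of the signal, evaluate with the best choice on the remainder) can be sketched as follows. This is a generic illustration, not the authors' code: `run_algo` and `param_grid` are hypothetical placeholders for an online learner and its candidate parameter values.

```python
import numpy as np

def tune_then_evaluate(x, y, param_grid, run_algo, tune_frac=0.1):
    """Tune on the first `tune_frac` of a signal, evaluate on the rest.

    run_algo(x, y, param) -> per-round squared errors (hypothetical interface).
    Returns the best parameter and the mean error on the held-out remainder.
    """
    n_tune = int(len(y) * tune_frac)
    # Pick the parameter with the lowest mean error on the tuning prefix.
    best_param = min(
        param_grid,
        key=lambda p: run_algo(x[:n_tune], y[:n_tune], p).mean(),
    )
    # Evaluate that single choice on the remaining signal.
    test_err = run_algo(x[n_tune:], y[n_tune:], best_param).mean()
    return best_param, test_err
```

With `tune_frac=0.1` this mirrors the speech-signal protocol, and `tune_frac=0.2` the load data set.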
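Of the baselines named in the pseudocode item, CR-RLS (covariance-reset recursive least squares) is the most compact to illustrate: standard RLS whose covariance-like matrix is periodically reset so the learner stays responsive to drift. The sketch below uses the textbook RLS recursion with a hypothetical `reset_every` schedule and `eps` prior scale; it is an assumption-laden illustration, not the paper's exact pseudocode.

```python
import numpy as np

def crrls(X, y, reset_every=100, eps=1.0):
    """Recursive least squares with periodic covariance reset (CR-RLS sketch).

    X: (T, d) feature rows; y: (T,) targets.
    Returns the per-round predictions made before each update.
    """
    T, d = X.shape
    w = np.zeros(d)
    P = eps * np.eye(d)              # covariance-like matrix of the estimate
    preds = np.empty(T)
    for t in range(T):
        x = X[t]
        preds[t] = w @ x             # predict before seeing y[t]
        # Standard RLS / Sherman-Morrison update of w and P.
        Px = P @ x
        k = Px / (1.0 + x @ Px)      # gain vector
        w = w + k * (y[t] - w @ x)
        P = P - np.outer(k, Px)
        # Periodic reset: forget the accumulated confidence so the
        # learner can track a drifting target.
        if (t + 1) % reset_every == 0:
            P = eps * np.eye(d)
    return preds
```

Without the reset, P shrinks monotonically and the learner converges and stops adapting, which matches the report's observation about AROWR converging too fast to track changes.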