Nyström Regularization for Time Series Forecasting

Authors: Zirui Sun, Mingwei Dai, Yao Wang, Shao-Bo Lin

JMLR 2022

Reproducibility Variable Result LLM Response
Research Type Experimental A series of numerical experiments are carried out to verify our theoretical results, showing the excellent learning performance of Nyström regularization with sequential sub-sampling in learning massive time series data. In this section, we conduct both toy simulations and three real-world time series forecasting experiments to verify our theoretical statements and show the excellent learning performance of Nyström regularization with sequential sub-sampling.
Researcher Affiliation Academia Zirui Sun (EMAIL), Center for Intelligent Decision-Making and Machine Learning, School of Management, Xi'an Jiaotong University, Xi'an, China; Mingwei Dai (EMAIL), Center of Statistical Research and School of Statistics, Southwestern University of Finance and Economics, Chengdu, China; Yao Wang (EMAIL), Center for Intelligent Decision-Making and Machine Learning, School of Management, Xi'an Jiaotong University, Xi'an, China; Shao-Bo Lin (EMAIL), Center for Intelligent Decision-Making and Machine Learning, School of Management, Xi'an Jiaotong University, Xi'an, China
Pseudocode Yes Algorithm 1: Nyström regularization with sequential sub-sampling
Open Source Code Yes The code is available at https://github.com/zirsun/Nystrom.git.
Open Datasets Yes WTI data: The data (https://datahub.io/core/oil-prices) is recorded daily from January 2, 1986 to August 31, 2020. Bitcoin (BTC) data: The BTC data, collected via https://www.kaggle.com/prasoonkottarathil/btcinusd, records the price of BTC at one-minute intervals from September 17, 2014 to April 9, 2020. Western Australia Weather data: The Western Australia Weather data (https://www.kaggle.com/datasets/sveneschlbeck/westaustralia-weather-1944-2016) records Western Australia's daily average temperatures from June 3, 1944 to December 31, 2016.
Dataset Splits Yes In all simulations, we consider two time series: the nonlinear model (25) with independent noise ε_t ~ U(−0.7, 0.7) (here U(a, b) denotes the uniform distribution on (a, b)), and Markov chains with Bernoulli distribution (26), where {ε_t}_{t=1}^T are i.i.d. drawn from the Bernoulli distribution B(1/2) and are independent of x_0. It should be mentioned that the time series generated by (25) is an α-mixing sequence (Alquier et al., 2013), while that generated by (26) is a τ-mixing sequence but not an α-mixing sequence (Dedecker and Prieur, 2005).
Hardware Specification Yes Our numerical experiments were carried out in Matlab R2018b on an Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz, Windows 10.
Software Dependencies Yes Our numerical experiments were carried out in Matlab R2018b on an Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz, Windows 10.
Experiment Setup Yes Simulation 1: In this simulation, we aim at studying the relation between the learning performance of Nyström regularization and the sub-sampling ratio... The regularization parameters λ are selected from [5×10⁻⁴ : 5×10⁻⁴ : 0.01] (the first value is the lower bound of the range, the second value is the step size, and the third one is the upper bound of the range) and [5×10⁻⁴ : 5×10⁻⁵ : 0.001] via grid search for M1 and M2, respectively.
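The setup above combines two ingredients: Nyström regularization where the sub-sample is the first m points of the series taken in temporal order (sequential sub-sampling, per Algorithm 1), and a grid search over λ in a [lower : step : upper] range. The paper's released code is Matlab; the following Python sketch illustrates only the general shape of both steps under assumed details (Gaussian kernel, jitter term, and all function names here are illustrative, not from the authors' repository):

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel between the rows of X and the rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def nystrom_sequential(X, y, m, lam, sigma=1.0):
    """Nystrom-regularized kernel least squares where the m centers are
    the FIRST m samples (a contiguous block), preserving temporal order,
    rather than a random sub-sample."""
    Xm = X[:m]                              # sequential sub-sample
    K_nm = gaussian_kernel(X, Xm, sigma)    # T x m cross-kernel
    K_mm = gaussian_kernel(Xm, Xm, sigma)   # m x m sub-kernel
    T = X.shape[0]
    # Solve (K_nm' K_nm + lam * T * K_mm) a = K_nm' y for the coefficients.
    A = K_nm.T @ K_nm + lam * T * K_mm
    a = np.linalg.solve(A + 1e-10 * np.eye(m), K_nm.T @ y)  # small jitter
    return lambda Xnew: gaussian_kernel(Xnew, Xm, sigma) @ a

def grid_search_lambda(X_tr, y_tr, X_val, y_val, m, lams, sigma=1.0):
    # Pick lambda from the grid by validation mean-squared error.
    best = None
    for lam in lams:
        f = nystrom_sequential(X_tr, y_tr, m, lam, sigma)
        mse = float(np.mean((f(X_val) - y_val) ** 2))
        if best is None or mse < best[1]:
            best = (lam, mse)
    return best[0]

# A grid like the paper's [5e-4 : 5e-4 : 0.01] (lower : step : upper):
lams = np.arange(5e-4, 0.01 + 2.5e-4, 5e-4)
```

Note that `np.arange(lower, upper + step/2, step)` reproduces Matlab's inclusive `lower:step:upper` range convention used in the quoted setup.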