Random Feature-based Online Multi-kernel Learning in Environments with Unknown Dynamics
Authors: Yanning Shen, Tianyi Chen, Georgios B. Giannakis
JMLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Tests with synthetic and real datasets are carried out to showcase the effectiveness of the novel algorithms. |
| Researcher Affiliation | Academia | Yanning Shen, Tianyi Chen, Georgios B. Giannakis; Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA |
| Pseudocode | Yes | Algorithm 1: Raker for online MKL in static environments; Algorithm 2: AdaRaker for online MKL in dynamic environments |
| Open Source Code | No | The paper does not provide an open-source code link, nor does it state whether an implementation of the described methodology is available. |
| Open Datasets | Yes | Performance is tested on benchmark datasets from the UCI Machine Learning Repository (Lichman, 2013): the Twitter, Tom's Hardware, Energy, Air Quality, Movement, Electronic Device, and Human Activity datasets. |
| Dataset Splits | No | The paper describes online learning, where data arrives sequentially, and evaluates performance using cumulative metrics such as MSE(t) := (1/t) Σ_{τ=1}^{t} (y_τ − ŷ_τ)². It does not provide explicit training/validation/test dataset splits. |
| Hardware Specification | No | The paper mentions 'CPU time' in its results (Table 2, Table 7, Table 9), but does not provide specific details about the hardware (e.g., CPU or GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependency details, such as programming languages or library names with version numbers, used for implementing the algorithms. |
| Experiment Setup | Yes | For all MKL approaches, the stepsize for updating kernel combination weights in (21) is chosen as 0.5 uniformly... The regularization parameter is set to λ = 0.01 for all approaches... For OMKL-B, the B = 20 and 50 most recent data samples were kept in the budget; for the RF-based Raker and AdaRaker approaches, D = 20 and 50 orthogonal random features were used by default. The default stepsize is chosen as 1/√T for RBF, POLY, LINEAR, Avg MKL, OMKL, OMKL-B, and Raker. [...] For AdaRaker, multiple instances are initialized on intervals of length \|I\| := 2^0, 2^1, 2^2, …, with the learning rate on interval I set to η(I) := min{1/2, 10/√\|I\|}. [...] Classification task: logistic loss as the learning objective, with regularization parameter λ = 0.005. |
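The cumulative-MSE metric quoted in the Dataset Splits row is straightforward to compute; the sketch below is a minimal illustration of that formula, not the authors' evaluation code:

```python
def cumulative_mse(y_true, y_pred):
    """MSE(t) = (1/t) * sum_{tau=1..t} (y_tau - yhat_tau)^2 for each t."""
    curve, sse = [], 0.0
    for t, (y, yhat) in enumerate(zip(y_true, y_pred), start=1):
        sse += (y - yhat) ** 2  # running sum of squared errors
        curve.append(sse / t)   # average over the first t samples
    return curve
```

In an online-learning evaluation like this one, the full curve MSE(1), MSE(2), … is reported rather than a single held-out score, which is why no train/test split appears.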
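Since the setup approximates each kernel with D random features, a plain random Fourier feature map for an RBF kernel (Rahimi–Recht style) gives the flavor; note the paper uses an *orthogonal* variant, and the bandwidth `sigma` and `seed` here are assumed placeholders, not the paper's values:

```python
import numpy as np

def rff_map(X, D=20, sigma=1.0, seed=0):
    """Map X (n x d) to D random Fourier features approximating the RBF
    kernel k(x, x') = exp(-||x - x'||^2 / (2 * sigma^2)).

    Plain (non-orthogonal) RFF sketch; D=20 matches the default above,
    while sigma and seed are illustrative choices.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, D))  # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)       # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)
```

The online learner then fits a linear model on z = rff_map(x), so each per-kernel update costs O(D) rather than growing with the number of samples seen, which is what makes the RF-based approaches scalable.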
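The AdaRaker instance schedule in the last row (interval lengths 2^0, 2^1, 2^2, … with η(I) = min{1/2, 10/√|I|}) is easy to enumerate; a small sketch using exactly those stated values:

```python
import math

def adaraker_schedule(num_intervals):
    """Return (|I|, eta(I)) pairs for interval lengths 2^0, 2^1, ...,
    with eta(I) = min(1/2, 10 / sqrt(|I|)) as stated in the setup."""
    return [(2 ** k, min(0.5, 10.0 / math.sqrt(2 ** k)))
            for k in range(num_intervals)]
```

Note that the cap of 1/2 binds while 10/√|I| ≥ 1/2, i.e., for |I| ≤ 400; only from the |I| = 2^9 = 512 interval onward does the rate decay as 10/√|I|.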