Learning Data-adaptive Non-parametric Kernels
Authors: Fanghui Liu, Xiaolin Huang, Chen Gong, Jie Yang, Li Li
JMLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on various classification and regression benchmark data sets demonstrate that our non-parametric kernel learning framework achieves good performance when compared with other representative kernel learning based algorithms. |
| Researcher Affiliation | Academia | Fanghui Liu EMAIL Department of Electrical Engineering, ESAT-STADIUS, KU Leuven, B-3001, Belgium Xiaolin Huang EMAIL Institute of Image Processing and Pattern Recognition Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China Chen Gong EMAIL PCA Lab, Key Laboratory of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education School of Computer Science and Engineering, Nanjing University of Science and Technology, 210094, China Department of Computing, Hong Kong Polytechnic University, Hong Kong SAR, China. Jie Yang EMAIL Institute of Image Processing and Pattern Recognition Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China Li Li EMAIL Department of Automation, BNRist, Tsinghua University, 100084, China |
| Pseudocode | Yes | Algorithm 1: Projected gradient method with Nesterov's acceleration for problem (4). Algorithm 2: Projected gradient method with Nesterov's acceleration for problem (22). |
| Open Source Code | Yes | The source code of our DANK model in Algorithm 1 can be found in http://www.lfhsgre.org. |
| Open Datasets | Yes | We conduct experiments on the UCI Machine Learning Repository with small scale data sets (https://archive.ics.uci.edu/ml/datasets.html), and three large data sets including EEG, ijcnn1 and covtype (all available at https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/). Besides, we also compare these methods on the CIFAR-10 database for image classification (https://www.cs.toronto.edu/~kriz/cifar.html). |
| Dataset Splits | Yes | After normalizing the data to [0, 1]^d by a min-max scaler, we randomly pick half of the data for training and the rest for test, except for monks1, monks2, and monks3; in these three data sets, both training and test data have been provided. The CIFAR-10 data set contains 60,000 color images of size 32×32×3 in 10 categories, of which 50,000 images are used for training and the rest are for testing. |
| Hardware Specification | Yes | All the experiments implemented in MATLAB are conducted on a Workstation with an Intel Xeon E5-2695 CPU (2.30 GHz) and 64GB RAM. |
| Software Dependencies | No | All the experiments implemented in MATLAB are conducted on a Workstation with an Intel Xeon E5-2695 CPU (2.30 GHz) and 64GB RAM. The paper mentions MATLAB but does not specify its version or any other software dependencies with version numbers. |
| Experiment Setup | Yes | The kernel width σ and the balance parameter C are tuned by 5-fold cross validation on a grid of points, i.e., σ ∈ {2^-5, 2^-4, ..., 2^5} and C ∈ {2^-5, 2^-4, ..., 2^5}. To avoid additional cross validation, we manually set the penalty parameter τ to 0.01. The regularization parameter η is fixed to ∥α∥₂² obtained by SVM. The learning rate starts from 0.1 and then is divided by 10 at the 120-th, 160-th, and 200-th epoch. After that, for each image, a 4096-dimensional feature vector is obtained according to the output of the first fully-connected layer in this fine-tuned neural network. The stopping criteria are t_max = 2000 and ϵ = 10^-4. |