On lp-Support Vector Machines and Multidimensional Kernels
Authors: Víctor Blanco, Justo Puerto, Antonio M. Rodríguez-Chía
JMLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We adapt known solution algorithms to efficiently solve the resulting primal and dual problems, and some computational experiments on real-world datasets are presented showing rather good behavior in terms of the accuracy of ℓp-SVM with p > 1. On the application side, we show that the use of ℓp-SVM outperforms the classification results with respect to standard SVM applied to four well-known datasets: cleveland, housing, german credit and colon (see Section 5). |
| Researcher Affiliation | Academia | Víctor Blanco EMAIL IEMath-GR, Universidad de Granada, SPAIN Justo Puerto EMAIL IMUS, Universidad de Sevilla, SPAIN Antonio M. Rodríguez-Chía EMAIL Dpt. Statistics & OR, Universidad de Cádiz, SPAIN |
| Pseudocode | No | The paper describes mathematical formulations and solution strategies using mathematical optimization concepts and transformations, but it does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions that the problems were "coded in Python 3.6, and solved using Gurobi 7.51" and "classical RBF (ℓ2)SVM (see Table 4) using the scikit-learn library of Python". However, it does not state that the authors' implementation code for the methodology described in the paper is open-source or available. |
| Open Datasets | Yes | The models were tested in five classical data sets, widely used in the literature of SVM, that are listed in Table 1. They were obtained from the UCI Repository (Radhimeenakshi, 2016), LIBSVM Datasets (Chang and Lin, 2011) and Keel Datasets (Alcalá et al., 2011). |
| Dataset Splits | Yes | In order to obtain stable and meaningful results, we use a 10-fold cross validation scheme to train the model and to test its performance. |
| Hardware Specification | Yes | The resulting primal Second Order Cone Programming (SOCP) problems were coded in Python 3.6, and solved using Gurobi 7.51 in a Mac OSX El Capitan with an Intel Core i7 processor at 3.3 GHz and 16GB of RAM. |
| Software Dependencies | Yes | The resulting primal Second Order Cone Programming (SOCP) problems were coded in Python 3.6, and solved using Gurobi 7.51 in a Mac OSX El Capitan with an Intel Core i7 processor at 3.3 GHz and 16GB of RAM. ... We have run the classical RBF (ℓ2) SVM (see Table 4) using the scikit-learn library of Python |
| Experiment Setup | Yes | We construct, in our experiments, ℓp-SVM separators for p ∈ {4/2, 2, 3} by using η-order approximations with η ranging in {1, 2, 3, 4}. ... For each dataset, we consider a part of the training sample and run the models by moving C and σ over the grid {2^k : k ∈ {-7, -6, ..., 6, 7}}. |
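The tuning protocol quoted above (10-fold cross validation with C and σ moved over the grid {2^k : k ∈ {-7, ..., 7}}, using scikit-learn's standard RBF SVM as the baseline) can be sketched as follows. This is a minimal illustration, not the authors' code: the synthetic dataset is a stand-in, and the translation from the paper's kernel width σ to scikit-learn's `gamma` parameter assumes the common convention gamma = 1 / (2σ²).

```python
# Sketch of the baseline tuning protocol: 10-fold CV grid search over
# C and the RBF width sigma, both on the grid {2^k : k = -7, ..., 7}.
# The dataset is synthetic; the real experiments used UCI/LIBSVM/Keel data.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

grid = [2.0 ** k for k in range(-7, 8)]  # {2^-7, ..., 2^7}
# scikit-learn parameterizes the RBF kernel by gamma; assuming the usual
# convention gamma = 1 / (2 * sigma^2), the sigma grid is translated here.
param_grid = {"C": grid, "gamma": [1.0 / (2.0 * s ** 2) for s in grid]}

search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=10)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

Note that `GridSearchCV(..., cv=10)` performs the 10-fold cross validation mentioned in the Dataset Splits row, refitting the best (C, gamma) pair on the full training portion afterwards.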