Supervised Learning via Euler's Elastica Models
Authors: Tong Lin, Hanlin Xue, Ling Wang, Bo Huang, Hongbin Zha
JMLR 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments have demonstrated the effectiveness of the proposed model for binary classification, multi-class classification, and regression tasks. |
| Researcher Affiliation | Academia | Key Laboratory of Machine Perception (Ministry of Education) School of Electronics Engineering and Computer Science Peking University, Beijing, 100871, China |
| Pseudocode | No | The paper describes its numerical algorithms in prose within Section 4 ('Numerical Algorithms'), but does not present them as structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions using 'LIBSVM implementation (Chang and Lin, 2011)' and 'Matlab neural network toolbox', which are third-party tools. It does not provide explicit statements or links for the authors' own implementation code. |
| Open Datasets | Yes | We collected real data sets from the libsvm website (Chang and Lin, 2011) and the UCI machine learning repository (Asuncion and Newman, 2013). |
| Dataset Splits | Yes | The optimal parameters for each algorithm are selected by grid search using 5-fold cross-validation. ... For each data set, we randomly run the 5-fold cross validation ten times to reduce the influence of data partitions. |
| Hardware Specification | Yes | The experiments are conducted on a PC server with two Intel Xeon 5620 cores and 8GB RAM. |
| Software Dependencies | No | The paper mentions 'LIBSVM implementation (Chang and Lin, 2011)' and 'Back-Propagation Neural Networks (BPNN) in the Matlab neural network toolbox' but does not specify version numbers for these software components. |
| Experiment Setup | Yes | The optimal parameters for each algorithm are selected by grid search using 5-fold cross-validation. ... only two common parameters are searched for all methods except BPNN: (C, g) for SVM, while (c, λ) for LR, TV, and EE. Empirically, the parameter η is set as 1 for LR, and the parameter b is fixed as 0.01 for EE. ... the two common parameters are searched from 10 : 10 in logarithm with step 2. The maximum number of iterations in GD and LAG is empirically set as 40. All data sets are scaled into [0,1] before training and testing. |
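The evaluation protocol quoted above — scale features into [0, 1], grid-search two parameters on a logarithmic grid with 5-fold cross-validation, and repeat the random 5-fold partitioning ten times — can be sketched as follows. This is a minimal illustration in Python using scikit-learn with an RBF-kernel SVM as a stand-in classifier; the paper itself used LIBSVM and Matlab, and the exact grid bounds here are assumptions, not the authors' settings.

```python
# Sketch of the paper's evaluation protocol (not the authors' code):
# 1. scale all features into [0, 1],
# 2. grid-search two parameters with 5-fold cross-validation,
# 3. repeat the random 5-fold partitioning ten times.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X = MinMaxScaler(feature_range=(0, 1)).fit_transform(X)  # scale into [0, 1]

# Two common parameters searched on a logarithmic grid with step 2
# (the exact exponent range below is illustrative).
param_grid = {
    "C": 10.0 ** np.arange(-5, 6, 2),
    "gamma": 10.0 ** np.arange(-5, 6, 2),
}

scores = []
for run in range(10):  # ten random 5-fold partitions
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=run)
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=cv)
    search.fit(X, y)
    scores.append(search.best_score_)

print(f"mean best CV accuracy over 10 runs: {np.mean(scores):.3f}")
```

Repeating the cross-validation over ten different random partitions, as the paper does, reduces the variance introduced by any single fold assignment.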