Iterative Regularization for Learning with Convex Loss Functions
Authors: Junhong Lin, Lorenzo Rosasco, Ding-Xuan Zhou
JMLR 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Fig. 1 we consider simulated data, i.e., a simple binary classification problem where the input space is two dimensional. The training and test errors as a function of the number of iterations are reported for different stepsize values. In Fig. 2 we consider a real benchmark data-set and again report the training and test errors for different stepsize values. The same qualitative behavior can be observed in simulated and real data. |
| Researcher Affiliation | Academia | All primary affiliations listed are academic institutions: "Department of Mathematics, City University of Hong Kong", "DIBRIS, Università di Genova", and "Laboratory for Computational and Statistical Learning, Istituto Italiano di Tecnologia and Massachusetts Institute of Technology". The email domains also correspond to academic institutions, with the exception of one hotmail.com address, but the primary affiliations are clearly academic. |
| Pseudocode | No | The proposed learning algorithm is presented as a mathematical iteration (1): f_{t+1} = f_t − (η_t/m) ∑_{j=1}^{m} V′(y_j, f_t(x_j)) K_{x_j}. While the paper describes the numerical realization of the algorithm, it does not provide a clearly labeled pseudocode or algorithm block. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code, nor does it provide links to a code repository or mention code in supplementary materials. |
| Open Datasets | Yes | In Fig. 2 we consider a real benchmark data-set and again report the training and test error for different stepsize values. The caption for Figure 2 further specifies: 'Misclassification errors of Algorithm (1) for the last iterates applied to Adult dataset...' |
| Dataset Splits | No | For simulated data, Figure 1 states 'In each trial, both of the training data and the test data are of size 100.' However, for the 'Adult dataset' used in Figure 2, only 'm = 1500' is mentioned, without specifying how the data was split into training, validation, or test sets. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments, such as CPU or GPU models, or memory specifications. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers, such as programming languages, libraries, or frameworks used for implementation. |
| Experiment Setup | Yes | Figure 1 caption states: 'setting η_1 = 1, V(y, f) = max{1 − yf, 0} and H_K = ℝ².' Figure 2 caption states: 'setting V(y, f) = max{1 − yf, 0}, K(x, x′) = exp{−‖x − x′‖² / (2σ²)} and m = 1500. Here, σ is chosen as the median of the vector that consists of all Euclidean distances between training input vectors with different labels (Jaakkola et al., 1999). For each θ, η_1 is tuned using a holdout method.' |
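Since the paper supplies no pseudocode or released implementation, a minimal sketch of the iteration it describes may be useful. The code below implements kernel subgradient descent for the hinge loss V(y, f) = max{1 − yf, 0} with the Gaussian kernel from the Figure 2 caption, representing f_t by coefficients on the training points. The polynomially decaying stepsize η_t = η_1 t^{−θ}, the default σ, and all function names here are assumptions for illustration, not the authors' code.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma):
    """K(x, x') = exp(-||x - x'||^2 / (2 sigma^2)), computed pairwise."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kernel_subgradient_descent(X, y, sigma=1.0, eta1=1.0, theta=0.5, T=100):
    """Iteration (1): f_{t+1} = f_t - (eta_t/m) sum_j V'(y_j, f_t(x_j)) K_{x_j}.

    f_t is represented as f_t(.) = sum_i alpha_i K(x_i, .), so each update
    only modifies the coefficient vector alpha. The decaying stepsize
    eta_t = eta1 * t**(-theta) is an assumed form for illustration.
    """
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    alpha = np.zeros(m)
    for t in range(1, T + 1):
        eta_t = eta1 * t ** (-theta)
        f_vals = K @ alpha                 # f_t evaluated at training points
        # hinge-loss subgradient: V'(y, f) = -y where 1 - y*f > 0, else 0
        grad = np.where(y * f_vals < 1.0, -y, 0.0)
        alpha -= (eta_t / m) * grad
    return alpha

# Usage on synthetic 2-D data, loosely in the spirit of Figure 1
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.sign(X[:, 0] + X[:, 1])
alpha = kernel_subgradient_descent(X, y)
```

Early stopping acts as the regularizer here: the number of iterations T, not an explicit penalty term, controls the complexity of the learned function, which is why the reviewed figures track training and test error against the iteration count.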