Learning Using Privileged Information: Similarity Control and Knowledge Transfer

Authors: Vladimir Vapnik, Rauf Izmailov

JMLR 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Figure 1 illustrates performance (defined as an average of error rates) of three algorithms, each trained on 50 randomly selected subsets of sizes 64, 96, 128, 160, and 192: SVM in space X, SVM in space X*, and SVM in space X with transferred knowledge. Figure 1 shows that the larger the training size, the stronger the effect of knowledge transfer.
Researcher Affiliation | Collaboration | Vladimir Vapnik (Columbia University, New York, NY 10027, USA; Facebook AI Research, New York, NY 10017, USA) and Rauf Izmailov (Applied Communication Sciences, Basking Ridge, NJ 07920-2021, USA)
Pseudocode | No | The paper provides mathematical formulations and descriptions of algorithms (e.g., SVM+, SVM), but does not include any clearly labeled pseudocode or algorithm blocks with structured, code-like steps.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the methodology described, nor does it provide a link to a code repository.
Open Datasets | No | Section 5.5 describes an example using "pre-processed video snapshots of a terrain" and "pictures with specific targets". While the type of data is described, no access information (link, DOI, repository, or citation to a public dataset) is provided.
Dataset Splits | No | Section 5.5 mentions training on "50 randomly selected subsets of sizes 64, 96, 128, 160, and 192" and states that "Parameters for SVMs with RBF kernel were selected using standard grid search with 6-fold cross validation." However, it does not specify the training/test/validation splits used for the overall evaluation of the algorithms: only the sizes of the training subsets used for a figure and the method for parameter selection.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments.
Software Dependencies | No | The paper mentions SVMs and RBF kernels but does not specify any software with version numbers (e.g., Python, PyTorch, scikit-learn versions) that would be needed to replicate the experiments.
Experiment Setup | No | Section 5.5 states "Parameters for SVMs with RBF kernel were selected using standard grid search with 6-fold cross validation." However, it does not provide the actual hyperparameter values chosen (e.g., C, gamma) or other specific training configurations such as learning rates or batch sizes.
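The protocol that can be recovered from Section 5.5 (random training subsets of sizes 64-192, RBF-kernel SVMs, grid search with 6-fold cross validation) can be sketched as follows. This is a minimal reconstruction, not the authors' code: the dataset is a synthetic stand-in (the terrain-snapshot data is not public), the hyperparameter grid is an assumption (the paper does not report the values searched), and only a plain SVM is shown, since SVM+/knowledge transfer is not available in scikit-learn.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Hypothetical stand-in data; the paper's terrain-snapshot dataset is not public.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_pool, y_pool = X[:1000], y[:1000]   # pool to draw training subsets from
X_test, y_test = X[1000:], y[1000:]

# Illustrative hyperparameter grid; the paper does not report the values chosen.
grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]}

rng = np.random.default_rng(0)
mean_error = {}
for n in [64, 96, 128, 160, 192]:      # subset sizes from Section 5.5
    errors = []
    for _ in range(10):                # the paper uses 50 subsets; fewer here for speed
        idx = rng.choice(len(X_pool), size=n, replace=False)
        # Grid search with 6-fold cross validation, as described in the paper
        search = GridSearchCV(SVC(kernel="rbf"), grid, cv=6)
        search.fit(X_pool[idx], y_pool[idx])
        errors.append(1.0 - search.score(X_test, y_test))
    mean_error[n] = float(np.mean(errors))  # average error rate over subsets
```

Averaging the test error over many random subsets per size, as in Figure 1 of the paper, separates the effect of training-set size from the luck of any single draw.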