Multiclass Boosting: Margins, Codewords, Losses, and Algorithms

Authors: Mohammad Saberian, Nuno Vasconcelos

JMLR 2019

Reproducibility assessment (variable: result, followed by the supporting LLM response):
Research Type: Experimental. "Experimental results confirm the superiority of MCBoost, showing that the two proposed MCBoost algorithms outperform comparable prior methods on a number of datasets."
Researcher Affiliation: Academia. "Mohammad Saberian (EMAIL), Statistical Visual Computing Laboratory, University of California, San Diego, La Jolla, CA 92039, USA"
Pseudocode: Yes. "Algorithm 1: CD-MCBoost and GD-MCBoost. Input: number of classes M, dimension d, codeword set Y = {y_1, ..., y_M} ⊂ R^d, boosting iterations N, and dataset D = {(x_i, c_i)}_{i=1}^n of examples x_i and class labels c_i ∈ {1, ..., M}. Initialization: set t = 0, and f^t = 0 ∈ R^d."
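To make the quoted pseudocode concrete, here is a minimal Python sketch of a gradient-descent multiclass boosting loop in the spirit of GD-MCBoost. It is a simplified stand-in, not the paper's algorithm: the codeword set, the exponential loss sum_i exp(-<y_{c_i}, f(x_i)>), the stump fitter, and all function names are our own illustrative choices.

```python
import numpy as np

def simplex_codewords(M):
    """A simple stand-in for the codeword set Y: one-hot vectors recentred
    so the M codewords sum to zero and are pairwise equidistant."""
    return np.eye(M) - 1.0 / M

def fit_stump(x, r):
    """Least-squares decision stump: threshold one feature and predict the
    mean target on each side. Returns a predict(z) function."""
    best = None
    for j in range(x.shape[1]):
        for t in np.unique(x[:, j]):
            left = x[:, j] <= t
            if left.all():          # degenerate split, skip
                continue
            a, b = r[left].mean(), r[~left].mean()
            err = ((np.where(left, a, b) - r) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, t, a, b)
    _, j, t, a, b = best
    return lambda z, j=j, t=t, a=a, b=b: np.where(z[:, j] <= t, a, b)

def gd_mcboost(x, c, M, iters=20, step=0.5):
    """Simplified gradient-descent multiclass boosting: minimize
    sum_i exp(-<y_{c_i}, f(x_i)>) by fitting one stump per output
    coordinate to the negative functional gradient each round."""
    Y = simplex_codewords(M)
    F = np.zeros((x.shape[0], M))
    rounds = []
    for _ in range(iters):
        w = np.exp(-(Y[c] * F).sum(axis=1))   # per-example weights
        target = w[:, None] * Y[c]            # negative gradient of the loss
        stumps = [fit_stump(x, target[:, k]) for k in range(M)]
        rounds.append(stumps)
        F += step * np.column_stack([g(x) for g in stumps])
    def predict(z):
        Fz = sum(step * np.column_stack([g(z) for g in stumps_r])
                 for stumps_r in rounds)
        return np.argmax(Fz @ Y.T, axis=1)    # decode: largest codeword projection
    return predict
```

On a small separable toy set, e.g. six 1-D points in three classes, twenty rounds of this loop are enough to fit the training labels; the decoding step (projecting the predictor onto each codeword and taking the argmax) is the part that mirrors the codeword view of the paper.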
Open Source Code: Yes. "A Matlab implementation of CD-MCBoost and GD-MCBoost is available from http://www.svcl.ucsd.edu/publications/conference/2014/icml/ICML 2014 guess averse code data.zip. A C++ implementation of GD-MCBoost for deep convolutional neural networks (Moghimi et al., 2016), integrated with the CAFFE library (Jia et al., 2014), is available from https://github.com/mmoghimi/Boost CNN."
Open Datasets: Yes. "The remaining experiments were based on the twelve UCI datasets of Table 6."
Dataset Splits: Yes. "For these, we used the training/test set split provided by the dataset, whenever possible. If a split was unavailable, 20% of the examples were randomly selected for testing."
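The fallback split quoted here (hold out a random 20% for testing) can be sketched as below; the function name and the fixed seed are illustrative choices, not from the paper.

```python
import numpy as np

def split_train_test(n_examples, test_frac=0.2, seed=0):
    """Randomly hold out test_frac of the example indices for testing,
    mirroring the random 20% hold-out described above (name and seed
    are illustrative, not from the paper)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_examples)
    n_test = int(round(test_frac * n_examples))
    return np.sort(perm[n_test:]), np.sort(perm[:n_test])
```

Sorting the index arrays is purely cosmetic; the essential property is that the two index sets are disjoint and together cover all examples.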
Hardware Specification: No. The paper does not describe the hardware (e.g., GPU/CPU models, memory) used for its experiments. Although it mentions a C++ implementation for deep convolutional neural networks integrated with the CAFFE library, it neither states what hardware the reported experiments ran on nor gives any hardware specifications.
Software Dependencies: No. The paper mentions Matlab and C++ implementations and integration with the CAFFE library (Jia et al., 2014), but it does not give version numbers for any of these software components, as this criterion requires.
Experiment Setup: Yes. "All classifiers were learned with 200 iterations of MCBoost. We considered both CD-MCBoost and GD-MCBoost, using decision stumps and trees of depth 2 as weak learners, respectively."
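For quick reference, the quoted setup condenses to the following configuration sketch; the dictionary and its key names are ours, and only the values come from the quoted text.

```python
# Illustrative summary of the quoted experiment setup; key names are
# ours, values are taken from the paper's description.
MCBOOST_SETUP = {
    "boosting_iterations": 200,
    "CD-MCBoost": {"weak_learner": "decision stump"},
    "GD-MCBoost": {"weak_learner": "decision tree, depth 2"},
}
```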