Transfer Learning Decision Forests for Gesture Recognition

Authors: Norberto A. Goussies, Sebastián Ubalde, Marta Mejail

JMLR 2014

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Our experiments demonstrate improvements over traditional decision forests on the ChaLearn Gesture Challenge and the MNIST data set. They also compare favorably against other state-of-the-art classifiers. |
| Researcher Affiliation | Academia | Norberto A. Goussies (EMAIL), Sebastián Ubalde (EMAIL), Marta Mejail (EMAIL); Departamento de Computación, Pabellón I, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Ciudad Autónoma de Buenos Aires, C1428EGA, Argentina |
| Pseudocode | No | The paper describes its algorithms and methods in prose and mathematical equations (e.g., Section 3.1 "Training", Section 3.1.1 "Mixed Information Gain") but does not include a dedicated pseudocode or algorithm block. |
| Open Source Code | No | The paper provides no concrete access to source code: no repository link, no explicit statement of code release, and no mention of code in supplementary materials. |
| Open Datasets | Yes | Our experiments demonstrate improvements over traditional decision forests on the ChaLearn Gesture Challenge (Guyon et al., 2012) and the MNIST data set (LeCun et al., 1998). |
| Dataset Splits | Yes | For each digit 0 through 9, a binary task is defined in which label +1 means the example belongs to the digit associated with that task and label −1 means the opposite. 100 training samples are randomly chosen for each task and evaluated on the 10,000 testing samples; the experiments are repeated ten times, with results summarized in Table 4. The ChaLearn data set is organized into batches, with only one training example of each gesture per batch. |
| Hardware Specification | No | The paper notes that decision forests can be parallelized, "which makes them ideal for GPU (Sharp, 2008) and multi-core implementations", but it does not specify the hardware used for the experiments described in the paper. |
| Software Dependencies | No | The paper does not name the ancillary software (e.g., libraries or solvers with version numbers) needed to replicate the experiments. |
| Experiment Setup | Yes | For the MHI computation, the temporal extent is τ = 8, the threshold is ξ = 25, and the spatial resolution of each frame is reduced to ω₁ × ω₂ = 16 × 12 pixels. The TLDFs are trained with T = 50 trees, maximum depth D = 8, mixing coefficient γ = 25%, and search-space size \|T\| = 50. For MNIST, the TLDFs are trained with D = 6, T = 40, γ = 50%, and no preprocessing is applied to the sample images. |
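The Motion History Image (MHI) preprocessing quoted under Experiment Setup (τ = 8, ξ = 25, frames downscaled to 16 × 12) follows the standard MHI recurrence: pixels whose inter-frame difference exceeds ξ are set to τ, and all other pixels decay by one per frame. The paper's code is not released, so the following is a minimal sketch of that standard recurrence with the paper's parameter values; the function name and the assumption that frames are already grayscale and downscaled are ours.

```python
import numpy as np

def motion_history_image(frames, tau=8, xi=25):
    """Sketch of a Motion History Image over a list of grayscale frames.

    Pixels where the absolute inter-frame difference exceeds the
    threshold xi are set to the temporal extent tau; all other pixels
    decay by one per frame, floored at zero. Frames are assumed to be
    uint8 arrays already reduced to the target spatial resolution.
    """
    mhi = np.zeros_like(frames[0], dtype=np.float64)
    for prev, curr in zip(frames[:-1], frames[1:]):
        # Cast to a signed type so the difference does not wrap around.
        diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
        mhi = np.where(diff > xi, float(tau), np.maximum(mhi - 1.0, 0.0))
    return mhi
```

With τ = 8, a pixel that moved in the most recent frame pair holds the value 8 and fades to 0 over the next eight frames, so the MHI encodes both where and how recently motion occurred before it is fed to the forests.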
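The MNIST protocol quoted under Dataset Splits (one binary task per digit, label +1 for the task's digit and −1 otherwise, 100 randomly chosen training samples per task) can be sketched as below. The function name, return values, and sampling details are illustrative assumptions, since the paper releases no code.

```python
import numpy as np

def make_binary_task(labels, digit, n_train=100, seed=None):
    """Sketch of one MNIST binary task from the paper's split protocol.

    Relabels the data set as +1 for the chosen digit and -1 otherwise,
    then draws n_train training indices uniformly without replacement.
    Returns (train_idx, y); the evaluation would use the full test set.
    """
    rng = np.random.default_rng(seed)
    y = np.where(labels == digit, 1, -1)
    train_idx = rng.choice(len(labels), size=n_train, replace=False)
    return train_idx, y
```

Repeating this draw ten times per digit (as the paper does before averaging into its Table 4) only requires varying the seed.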