Challenges in multimodal gesture recognition

Authors: Sergio Escalera, Vassilis Athitsos, Isabelle Guyon

JMLR 2016

Reproducibility: Variable | Result | LLM Response
Research Type: Experimental. This paper surveys the state of the art on multimodal gesture recognition and introduces the JMLR special topic on gesture recognition 2011-2015. ... Notably, we organized a series of challenges and made available several datasets we recorded for that purpose, including tens of thousands of videos, which are available to conduct further research. We also overview recent state of the art works on gesture recognition based on a proposed taxonomy for gesture recognition, discussing challenges and future lines of research.
Researcher Affiliation: Academia. Sergio Escalera (EMAIL), Computer Vision Center, UAB, and University of Barcelona; Vassilis Athitsos (EMAIL), University of Texas; Isabelle Guyon (EMAIL), ChaLearn, Berkeley, California.
Pseudocode: No. The paper describes methodologies conceptually and references existing algorithms (e.g., HMM, SVM, deep learning architectures) but does not include any explicit pseudocode or algorithm blocks within its text.
Open Source Code: Yes. We provided code to browse through the data, a library of computer vision and machine learning techniques written in Matlab featuring examples drawn from the challenge datasets, and an end-to-end baseline system capable of processing challenge data and producing a sample submission. ... For a long-lasting impact, the challenge platform, the data, and the software repositories have been made available for further research.
Open Datasets: Yes. Notably, we organized a series of challenges and made available several datasets we recorded for that purpose, including tens of thousands of videos, which are available to conduct further research. ... The full dataset is available from http://gesture.chalearn.org/data. ... This dataset, available at http://sunai.uoc.edu/chalearn, presents various features of interest as listed in Table 5.
Dataset Splits: Yes. More specifically, each batch was split into a training set (of one example for each gesture) and a test set of short sequences of one to five gestures. ... The data also included 20 validation batches and 20 final evaluation batches as transfer domain data. ... The dataset contains the following numbers of sequences: development, 393 (7,754 gestures); validation, 287 (3,362 gestures); and test, 276 (2,742 gestures).
Hardware Specification: No. The paper mentions that a method was implemented "close to real time on a regular laptop" and that software "processes all the batches of the final test set on a regular laptop in a few hours." However, "regular laptop" is a vague term and does not provide specific hardware details (e.g., CPU or GPU models, memory amounts).
Software Dependencies: No. The winner of both rounds (Alfonso Nieto-Castañón of Spain, a.k.a. alfnie) used a novel technique ... he implemented it in Matlab ... Both teams provided Matlab software ... Python's scikit-learn was used to train two models. The paper mentions software such as Matlab and scikit-learn but does not provide specific version numbers for these tools.
Experiment Setup: No. The paper primarily surveys and summarizes existing work and challenge results, often describing general approaches (e.g., "an ensemble of randomized decision trees (Extra Trees Classifier, 100 trees, 40% of features) and a K-Nearest Neighbor model (7 neighbors, L1 distance)"). However, it does not provide a comprehensive experimental setup with specific hyperparameters, learning rates, batch sizes, or detailed training configurations for any single experiment, which would be needed to reproduce the reported results directly.
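The ensemble described in the quote above can be sketched in scikit-learn (the library the report says was used). This is a minimal illustration only: the paper does not specify how the two models' outputs were combined, so the probability-averaging step and the synthetic stand-in features below are assumptions, not the entrant's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for gesture feature vectors: 200 samples,
# 50 dimensions, 5 gesture classes (placeholder data, not the
# challenge dataset).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = rng.integers(0, 5, size=200)

# Extra Trees: 100 randomized trees, each split considering 40% of features,
# matching the hyperparameters quoted in the report.
extra_trees = ExtraTreesClassifier(n_estimators=100, max_features=0.4,
                                   random_state=0)
extra_trees.fit(X, y)

# K-Nearest Neighbors: 7 neighbors under the L1 (Manhattan) distance.
knn = KNeighborsClassifier(n_neighbors=7, metric="manhattan")
knn.fit(X, y)

# Assumed combination scheme: average the two models' class-probability
# estimates and take the argmax.
proba = (extra_trees.predict_proba(X) + knn.predict_proba(X)) / 2
pred = proba.argmax(axis=1)
```

Averaging predicted probabilities is one common way to blend a tree ensemble with a distance-based model; the original submission may have weighted or stacked the two models differently.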