Robust Online Gesture Recognition with Crowdsourced Annotations

Authors: Long-Van Nguyen-Dinh, Alberto Calatroni, Gerhard Tröster

JMLR 2014

Reproducibility Variable Result LLM Response
Research Type Experimental We compare the noise robustness of our methods against baselines which use dynamic time warping (DTW) and support vector machines (SVM). The experiments are performed on data sets with various gesture classes (10-17 classes) recorded from accelerometers on arms, with both real and synthetic crowdsourced annotations.
Researcher Affiliation Academia Long-Van Nguyen-Dinh EMAIL Alberto Calatroni EMAIL Gerhard Tröster EMAIL Wearable Computing Lab, ETH Zürich, ETZ H 95, Gloriastrasse 35, Zürich 8092, Switzerland
Pseudocode No The paper describes the algorithms (Segmented LCSS and Warping LCSS) using mathematical equations and textual descriptions, but it does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code No The paper mentions using the LIBSVM library for SVM training, but it does not provide any explicit statement or link for the source code of the authors' own Segmented LCSS or Warping LCSS implementations.
Open Datasets Yes Skoda and Opportunity data sets can be downloaded from http://www.wearable.ethz.ch/resources/Dataset.
Dataset Splits Yes For each data set, we perform a 5-fold cross-validation. ... For SVM, the signals are passed through a sliding window, with 50% overlap.
Hardware Specification No The paper describes using accelerometers on arms as sensors and discusses wearable devices like smart watches, but it does not specify any hardware (CPU, GPU, memory, etc.) used for running the computational experiments or training models.
Software Dependencies No The paper mentions using "LIBSVM library (Chang and Lin, 2011) for training SVM." However, it does not provide a specific version number for the LIBSVM library or any other software dependencies used in their experiments.
Experiment Setup Yes The choice of k is performed through cross-validation or empirically. For the gesture data sets used in this paper, k = 20 provided a good tradeoff between complexity (k-means complexity scales linearly with k) and performance. ... We calculate the rejection threshold to be below µ(c) by some standard deviations: ε_c = µ(c) − h·σ(c), with h = 0, 1, 2, ... In our experiments, h = 1 provided a good performance. ... For SVM, the signals are passed through a sliding window, with 50% overlap.
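The report notes that the paper describes its Segmented LCSS and Warping LCSS methods only through equations and text, with no pseudocode or released source. For orientation, below is a minimal sketch of the standard LCSS similarity for 1-D signals that such methods build on; the function name, the eps matching tolerance, and the dynamic-programming formulation are generic illustrations, not the authors' implementation.

```python
import numpy as np

def lcss(a, b, eps):
    """Longest Common Subsequence similarity for two 1-D signals.

    Two samples count as a match when their absolute difference is
    at most eps (a generic tolerance, not a value from the paper).
    Classic O(len(a) * len(b)) dynamic program.
    """
    n, m = len(a), len(b)
    dp = np.zeros((n + 1, m + 1), dtype=int)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if abs(a[i - 1] - b[j - 1]) <= eps:
                dp[i, j] = dp[i - 1, j - 1] + 1  # extend the common subsequence
            else:
                dp[i, j] = max(dp[i - 1, j], dp[i, j - 1])  # skip one sample
    return int(dp[n, m])
```

A higher LCSS count means the two signals share a longer order-preserving subsequence of approximately matching samples, which is what makes LCSS more robust to noisy or spurious samples than point-to-point distances.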
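The dataset-splits row quotes the paper's SVM preprocessing: signals are passed through a sliding window with 50% overlap. A minimal sketch of that segmentation step (window length and return type are illustrative assumptions; the paper does not give this code):

```python
def sliding_windows(signal, win_len):
    """Segment a 1-D signal into fixed-length windows with 50% overlap.

    With 50% overlap the hop size is half the window length, so each
    sample (away from the edges) appears in exactly two windows.
    """
    step = win_len // 2  # 50% overlap -> advance by half a window
    return [signal[i:i + win_len]
            for i in range(0, len(signal) - win_len + 1, step)]
```

Each window would then be turned into a feature vector before being fed to the SVM (trained, per the paper, with the LIBSVM library).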
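The experiment-setup row quotes the paper's rejection rule ε_c = µ(c) − h·σ(c): a candidate match for class c is rejected when its similarity score falls more than h standard deviations below the class mean, with h = 1 reported to work well. A sketch of that computation (the function name and sample scores are illustrative):

```python
import statistics

def rejection_threshold(scores, h=1):
    """Compute eps_c = mu(c) - h * sigma(c) from a class's similarity scores.

    Scores below the returned threshold are rejected as non-matches;
    h = 1 is the value the paper reports as performing well.
    """
    mu = statistics.mean(scores)
    sigma = statistics.stdev(scores)  # sample standard deviation
    return mu - h * sigma
```

Larger h makes the recognizer more permissive (fewer rejections), while h = 0 rejects everything below the class-mean score.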