Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Bayesian Co-Boosting for Multi-modal Gesture Recognition
Authors: Jiaxiang Wu, Jian Cheng
JMLR 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform extensive experiments using the ChaLearn MMGR and ChAirGest data sets, in which our approach achieves 97.63% and 96.53% accuracy respectively on each publicly available data set. |
| Researcher Affiliation | Academia | Jiaxiang Wu (EMAIL), Jian Cheng (EMAIL), National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China |
| Pseudocode | Yes | Algorithm 1: Bayesian Co-Boosting Training Framework; Algorithm 2: Weak Classifier Training; Algorithm 3: Instance's Weight Updating |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code, nor does it provide any links to a code repository or indicate that code is in supplementary materials. |
| Open Datasets | Yes | We perform extensive experiments using the ChaLearn MMGR and ChAirGest data sets, in which our approach achieves 97.63% and 96.53% accuracy respectively on each publicly available data set. Detailed information about this data set can be found in Escalera et al. (2013). In Ruffieux et al. (2013), a multi-modal data set was collected to provide a benchmark for the development and evaluation of gesture recognition methods. |
| Dataset Splits | Yes | ChaLearn MMGR data set: The data set has been divided into three subsets already, namely Development, Validation, and Evaluation. In our experiment, the Development and Validation subsets are used respectively for model training and testing. ChAirGest data set: Since no division into training and testing subsets is specified in this data set, we perform leave-one-out cross validation. In each round, gesture instances of one subject are used for model evaluation, and the other instances are used to train the model. |
| Hardware Specification | No | The paper mentions data acquisition equipment like the "Kinect™ sensor" and "Xsens inertial motion units," but it does not specify any hardware used for running the experiments or training the models (e.g., CPU, GPU models, or memory). |
| Software Dependencies | No | The paper describes algorithms such as the Baum-Welch algorithm and Viterbi algorithm, and references features like MFCC, but it does not list any specific software libraries, frameworks, or operating systems with their version numbers that were used for the implementation or experiments. |
| Experiment Setup | Yes | ChaLearn MMGR experiment: parameters in Algorithm 1 are chosen as follows: T = 20, V = 2, M1 = 5, and M2 = 10. For the MFCC feature, the size of the feature subset is set to 50% of all feature dimensions. The skeleton feature subset consists of 15% of the dimensions from the original feature space. Therefore, the number of feature dimensions used to train weak classifiers is respectively 20 for audio and 21 for skeleton. The number of iterations to estimate the parameters of the hidden Markov models for weak classifiers is set to 20. ChAirGest experiment: in Algorithm 1, parameters are T = 20, V = 2, and M1 = M2 = 10. The feature selection ratios of Xsens and skeleton are respectively 20% and 15%. |
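The setup row above reports per-modality selection ratios and the resulting subset sizes (20 audio and 21 skeleton dimensions). A minimal sketch of that random feature-subset step, assuming hypothetical full dimensionalities of 40 MFCC and 140 skeleton dimensions (chosen so the stated ratios reproduce the reported subset sizes; the paper gives only ratios and sizes, not code):

```python
import random

def select_feature_subset(num_dims, ratio, rng):
    """Sample a random feature subset of size round(ratio * num_dims),
    as used to train one weak classifier on a single modality."""
    k = max(1, round(num_dims * ratio))
    return sorted(rng.sample(range(num_dims), k))

rng = random.Random(0)
# Assumed dimensionalities (not stated in the paper): 40 MFCC dims,
# 140 skeleton dims, so that 50% -> 20 dims and 15% -> 21 dims.
audio_subset = select_feature_subset(40, 0.50, rng)      # 20 dimensions
skeleton_subset = select_feature_subset(140, 0.15, rng)  # 21 dimensions
```

Inside the co-boosting loop, each of the T = 20 boosting rounds would draw fresh subsets like these for each of the V = 2 modalities before fitting its HMM weak classifiers.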