Actively Estimating Crowd Annotation Consensus

Authors: Yunus Emre Kara, Gaye Genc, Oya Aran, Lale Akarun

JAIR 2018

Reproducibility Variable (Result): LLM Response
Research Type (Experimental): "Experimental results on the benchmark datasets used in the literature and the Head Pose Annotations datasets suggest that our method provides high-quality consensus by using as few as one fifth of the annotations (an 80% cost reduction), thereby providing a budget- and time-sensitive solution to the crowd-labeling problem."
Researcher Affiliation (Academia): Yunus Emre Kara (EMAIL), Gaye Genc (EMAIL), and Lale Akarun (EMAIL) — Department of Computer Engineering, Bogazici University, TR-34342, Bebek, Istanbul, Turkey; Oya Aran (EMAIL) — Independent Researcher.
Pseudocode (Yes): Algorithm 1, ACL: Active Crowd-Labeling; Algorithm 2, Request Annotation: requesting an annotation to improve the existing consensus; Algorithm 3, Request Annotation Exp: requesting an annotation for smart label collection from scratch; Algorithm 4, Create Starting Set By Elimination.
Open Source Code (No): The paper does not provide any concrete access to source code (no specific repository link, explicit code-release statement, or code in supplementary materials) for the methodology described.
Open Datasets (Yes): "We evaluate the results of the proposed active crowd-labeling method using nine real datasets: two Head Pose Annotations datasets (tilt, pan) which are introduced in this paper, the Kara Age Annotations dataset (Kara et al., 2015), and six Affective Text Analysis datasets (anger, disgust, fear, joy, sadness, surprise) of Snow, O'Connor, Jurafsky, and Ng (2008)."
Dataset Splits (Yes): "For each dataset, we prepare 100 different subsets satisfying these conditions. We fix the subset sizes, i.e. number of annotations, to 2100 for the Kara Age Annotations dataset, 1110 for the Head Pose Annotations datasets, and 200 for the Affective Text Analysis datasets. The details of the subset selection algorithm are given in Appendix B."
Hardware Specification (No): The paper does not specify the hardware used to run its experiments.
Software Dependencies (No): The paper does not list software dependencies with version numbers needed to replicate the experiments.
Experiment Setup (Yes): "In Figure 6a, we present the results of our method's effect on mean absolute error in terms of age by trying out different dominance suppression coefficients ϕ on the Kara Age Annotations dataset. ... In Figure 8a, we compare O-CBS+ with a fixed dominance suppression coefficient of ϕ = 5 for different E values. ... To address this concern, we aim to stop the annotation process upon attaining satisfactory sample consensus values for all samples by setting a target on the sample consensus posterior variance, namely δ. This is equivalent to stopping the active crowd-labeling process when every sample has a satisfactory score SS, i.e. min_i SS(i) > 1/δ ≜ τ, since SS is the reciprocal of the posterior variance."
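The stopping rule quoted above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes sample scores SS(i) are computed as reciprocals of the per-sample consensus posterior variances, and the function name and inputs are hypothetical.

```python
def should_stop(posterior_variances, delta):
    """Stopping criterion for active crowd-labeling (illustrative sketch).

    Stop when every sample's consensus is satisfactory, i.e. when
    min_i SS(i) > 1/delta (= tau), where SS(i) is the reciprocal of
    sample i's consensus posterior variance.
    """
    ss = [1.0 / v for v in posterior_variances]  # SS(i) = 1 / Var_i
    tau = 1.0 / delta                            # score threshold from the variance target
    return min(ss) > tau

# With a variance target delta = 0.1, annotation stops only once every
# sample's posterior variance has dropped below 0.1.
print(should_stop([0.05, 0.2, 0.08], delta=0.1))   # → False (one variance still above 0.1)
print(should_stop([0.05, 0.04, 0.08], delta=0.1))  # → True
```

Equivalently, one could compare max_i Var_i < δ directly; the SS form above mirrors the paper's statement that SS is the reciprocal of the posterior variance.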