BEAM

Human Annotation

Objective human behavior studies rely on machine learning to discover associations between observations of a stimulus (features) and some known ground truth. Often this ground truth is generated by several humans assigning somewhat subjective labels or ratings to the behavior or stimulus; even after averaging, these labels may not accurately represent the underlying truth.
We study the human annotation process and offer new assumptions and methods for interpreting annotation results. These yield ground truth assessments that improve machine learning performance without requiring careful tuning or modeling of individual annotators.
Check out our blog page for a simple introduction to the benefits of asking humans to rank instead of rate their observations.
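The rank-versus-rate contrast is easiest to see with a tiny example. The sketch below is only an illustration, not the BEAM method itself: it uses hypothetical ratings and a plain Borda count as a stand-in for rank aggregation, and contrasts that with simply averaging the ratings.

```python
import numpy as np
from scipy.stats import rankdata

# Hypothetical data: rows are annotators, columns are items, values are
# ratings on a 1-5 scale (all numbers are made up for illustration).
ratings = np.array([
    [5, 3, 4, 1],
    [4, 2, 5, 1],
    [3, 3, 4, 2],
])

# Rating-style ground truth: average each item's scores across annotators.
mean_ratings = ratings.mean(axis=0)

# Ranking-style ground truth: convert each annotator's ratings into
# within-annotator ranks (ties share an averaged rank), then sum the
# ranks per item. This is a simple Borda count, used here only as a
# stand-in for a rank aggregation method.
ranks = rankdata(ratings, axis=1)
borda_scores = ranks.sum(axis=0)

print("Mean ratings:", mean_ratings)   # [4.0, 2.67, 4.33, 1.33]
print("Borda scores:", borda_scores)   # [9.5, 6.5, 11.0, 3.0]
```

In this toy example the two aggregates order the items differently: averaging the ratings favors the first item, while the rank-based score favors the third, because ranking discards each annotator's personal scale and keeps only their relative preferences.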