The Face of Quality in Crowdsourcing Relevance Labels: Demographics, Personality and Labeling Accuracy

Authors
Publication date 2012
Book title CIKM’12
Book subtitle The proceedings of the 21st ACM International Conference on Information and Knowledge Management: October 29–November 2, 2012, Maui, Hawaii, USA
ISBN (electronic)
  • 9781450311564
Event 21st ACM International Conference on Information and Knowledge Management, CIKM 2012
Pages (from-to) 2583-2586
Publisher New York, NY: Association for Computing Machinery
Organisations
  • Faculty of Humanities (FGw)
  • Interfaculty Research — Institute for Logic, Language and Computation (ILLC)
Abstract
Information retrieval systems require human-contributed relevance labels for their training and evaluation. Increasingly, such labels are collected under the anonymous, uncontrolled conditions of crowdsourcing, leading to varied output quality. While a range of quality assurance and control techniques have been developed to reduce noise during or after task completion, little is known about the workers themselves and possible relationships between workers' characteristics and the quality of their work. In this paper, we ask what the relatively well- or poorly-performing crowds, working under specific task conditions, actually look like in terms of worker characteristics such as demographics or personality traits. Our findings show that the face of a crowd is in fact indicative of the quality of its work.
Document type Conference contribution
Language English
Published at https://doi.org/10.1145/2396761.2398697