Noise audits improve moral foundation classification

Authors
  • N. Mokhberian
  • F.R. Hopp
  • B. Harandizadeh
  • F. Morstatter
  • K. Lerman
Publication date 2022
Host editors
  • J. An
  • C. Charalampos
  • W. Magdy
Book title Proceedings of the 2022 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining
Book subtitle ASONAM 2022 : FAB 2022, FOSINT-SI 2022, HI-BI-BI 2022 : Istanbul, Turkey (Hybrid), November 10-13, 2022
ISBN
  • 9781665456623
ISBN (electronic)
  • 9781665456616
Event 14th IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM 2022
Pages (from-to) 147-154
Publisher Piscataway, NJ: IEEE
Organisations
  • Faculty of Social and Behavioural Sciences (FMG) - Amsterdam School of Communication Research (ASCoR)
Abstract
Morality plays an important role in culture, identity, and emotion. Recent advances in natural language processing have shown that it is possible to classify moral values expressed in text at scale. Morality classification relies on human annotators to label the moral expressions in text, which provides training data to achieve state-of-the-art performance. However, these annotations are inherently subjective and some of the instances are hard to classify, resulting in noisy annotations due to error or lack of agreement. The presence of noise in training data harms the classifier's ability to accurately recognize moral foundations from text. We propose two metrics to audit the noise of annotations. The first metric is entropy of instance labels, which is a proxy measure of annotator disagreement about how the instance should be labeled. The second metric is the silhouette coefficient of a label assigned by an annotator to an instance. This metric leverages the idea that instances with the same label should have similar latent representations, and deviations from collective judgments are indicative of errors. Our experiments on three widely used moral foundations datasets show that removing noisy annotations based on the proposed metrics improves classification performance. Our code can be found at: https://github.com/negar-mokhberian/noise-audits.
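The two noise metrics described in the abstract can be illustrated with a short sketch. This is not the authors' implementation (see their repository for that); it is a minimal example assuming per-instance annotator labels and toy 2-D embeddings, using `scipy.stats.entropy` for the disagreement measure and `sklearn.metrics.silhouette_samples` for the per-annotation silhouette score.

```python
import numpy as np
from collections import Counter
from scipy.stats import entropy
from sklearn.metrics import silhouette_samples

def label_entropy(annotations):
    """Entropy of the labels several annotators assigned to one instance.
    Higher entropy indicates more disagreement, a proxy for annotation noise."""
    counts = np.array(list(Counter(annotations).values()), dtype=float)
    return entropy(counts / counts.sum(), base=2)

# Full agreement vs. full disagreement among three annotators:
print(label_entropy(["care", "care", "care"]))         # 0.0
print(label_entropy(["care", "fairness", "loyalty"]))  # log2(3) ~= 1.585

# Silhouette coefficient per annotation: instances with the same label
# should have similar latent representations, so a negative score flags
# an annotation that deviates from the collective judgment.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 4.9]])  # toy embeddings
labels = np.array([0, 0, 1, 0])  # the last annotation is likely mislabeled
scores = silhouette_samples(X, labels)
print(scores[3] < 0)  # True: candidate for removal
```

Annotations above an entropy threshold, or with a silhouette score below a threshold, would be candidates for removal before training the classifier.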
Document type Conference contribution
Language English
Published at https://doi.org/10.1109/ASONAM55673.2022.10068681
Other links
  • https://github.com/negar-mokhberian/noise-audits
  • https://www.proceedings.com/68329.html
  • https://www.scopus.com/pages/publications/85152037499