fl-IRT-ing with Psychometrics to Improve NLP Bias Measurement

Open Access
Publication date December 2024
Journal Minds and Machines
Article number 37
Volume 34, Issue 4
Number of pages 34
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract

To prevent ordinary people from being harmed by natural language processing (NLP) technology, finding ways to measure the extent to which a language model is biased (e.g., regarding gender) has become an active area of research. One popular class of NLP bias measures is the bias benchmark dataset: a collection of test items meant to assess a language model's preference for stereotypical over non-stereotypical language. In this paper, we argue that such bias benchmarks should be assessed with models from the psychometric framework of item response theory (IRT). Specifically, we pair an introduction to basic IRT concepts and models with a discussion of how they can inform the evaluation, interpretation, and improvement of bias benchmark datasets. Regarding evaluation, IRT provides methodological tools for assessing the quality of individual test items (e.g., the extent to which an item can differentiate highly biased from less biased language models) as well as of benchmarks as a whole (e.g., the extent to which the benchmark allows us to assess not only severe but also subtle levels of model bias). Through such diagnostic tools, the quality of benchmark datasets can be improved, for example by deleting or reworking poorly performing items. Finally, regarding interpretation, we argue that IRT models' estimates of language model bias are conceptually superior to traditional accuracy-based evaluation metrics, because the former take into account more information than just whether or not a language model provided a biased response.
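The item-level quantities the abstract alludes to (how well an item differentiates more-biased from less-biased models, and at which bias levels a benchmark is informative) are standard outputs of IRT models such as the two-parameter logistic (2PL). The sketch below is a minimal illustration with hypothetical item parameters, not the paper's own implementation: `a` is item discrimination, `b` is item difficulty, and Fisher information shows where on the bias scale an item is most useful.

```python
import math

def p_2pl(theta, a, b):
    """2PL IRT model: probability of a 'biased' response from a model
    with latent bias level theta, for an item with discrimination a
    and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at bias level theta:
    I(theta) = a^2 * P * (1 - P). Larger values mean the item
    differentiates models near that bias level more sharply."""
    p = p_2pl(theta, a, b)
    return a ** 2 * p * (1 - p)

# At theta == b the response probability is 0.5 and information peaks
# at a^2 / 4, so an item with b near zero is most informative for
# models with moderate bias; items with low a discriminate poorly
# everywhere and are candidates for deletion or reworking.
print(p_2pl(0.0, a=1.5, b=0.0))             # 0.5
print(item_information(0.0, a=1.5, b=0.0))  # 0.5625
```

Summing `item_information` over all items of a benchmark yields the test information function, which indicates whether the benchmark can measure not only severe but also subtle bias levels.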

Document type Article
Language English
Published at https://doi.org/10.1007/s11023-024-09695-9
Other links https://www.scopus.com/pages/publications/85203086567