LLMs instead of Human Judges? A Large Scale Empirical Study across 20 NLP Evaluation Tasks

Open Access
Authors
  • A. Bavaresco
  • Raffaella Bernardi
  • Leonardo Bertolazzi
  • Desmond Elliott
Publication date 2025
Host editors
  • W. Che
  • J. Nabende
  • E. Shutova
  • M.T. Pilehvar
Book title The 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025): proceedings of the conference
Book subtitle ACL 2025: July 27–August 1, 2025
ISBN (electronic)
  • 9798891762527
Event 63rd Annual Meeting of the Association for Computational Linguistics
Volume | Issue number 2
Pages (from-to) 238–255
Publisher Kerrville, TX: Association for Computational Linguistics
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
There is an increasing trend towards evaluating NLP models with LLMs instead of human judgments, raising questions about the validity of these evaluations, as well as their reproducibility in the case of proprietary models. We provide JUDGE-BENCH, an extensible collection of 20 NLP datasets with human annotations covering a broad range of evaluated properties and types of data, and comprehensively evaluate 11 current LLMs, covering both open-weight and proprietary models, for their ability to replicate the annotations. Our evaluations show substantial variance across models and datasets. Models are reliable evaluators on some tasks, but overall display substantial variability depending on the property being evaluated, the expertise level of the human judges, and whether the language is human or model-generated. We conclude that LLMs should be carefully validated against human judgments before being used as evaluators.
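The abstract describes measuring how well LLM judges replicate human annotations. As a minimal illustrative sketch (not the paper's exact protocol; the scores and metric choices below are hypothetical assumptions), agreement between an LLM judge and human raters on an ordinal scale could be quantified with rank correlation and chance-corrected agreement:

```python
# Illustrative sketch only: comparing an LLM judge's scores against human
# annotations for one dataset. Ratings below are hypothetical placeholders,
# not data from JUDGE-BENCH.
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

human_scores = [5, 3, 4, 2, 5, 1, 4, 3]   # hypothetical human ratings (1-5 scale)
llm_scores   = [4, 3, 5, 2, 5, 2, 3, 3]   # hypothetical ratings from an LLM judge

# Rank correlation: does the LLM order items the way the humans do?
rho, p_value = spearmanr(human_scores, llm_scores)

# Quadratic-weighted kappa: category agreement corrected for chance,
# with partial credit for near-misses on the ordinal scale.
kappa = cohen_kappa_score(human_scores, llm_scores, weights="quadratic")

print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f}), weighted kappa = {kappa:.2f}")
```

Metrics of this kind would be computed per dataset and per model; the paper's released code at the GitHub link below documents the actual evaluation setup.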
Document type Conference contribution
Language English
Published at https://doi.org/10.18653/v1/2025.acl-short.20
Other links https://github.com/dmg-illc/JUDGE-BENCH
Downloads
2025.acl-short.20 (Final published version)