Measuring LLM Self-consistency: Unknown Unknowns in Knowing Machines

Open Access
Publication date 10-2024
Journal Sociologica
Volume 18, Issue 2
Pages (from-to) 25-65
Number of pages 41
Organisations
  • Faculty of Humanities (FGw) - Amsterdam Institute for Humanities Research (AIHR) - Amsterdam School for Cultural Analysis (ASCA)
Abstract
This essay critically examines some limitations and misconceptions of Large Language Models (LLMs) in relation to knowledge and self-knowledge, particularly in the context of social sciences and humanities (SSH) research. Using an experimental approach, we evaluate the self-consistency of LLM responses by introducing variations in prompts during knowledge retrieval tasks. Our results indicate that self-consistency tends to align with correct responses, yet errors persist, calling into question the reliability of LLMs as “knowing” agents. Drawing on epistemological frameworks, we argue that LLMs exhibit the capacity to know only when random factors, or epistemic luck, can be excluded, yet they lack self-awareness of their inconsistencies. Whereas human ignorance often involves many “known unknowns”, LLMs exhibit a form of ignorance manifested through inconsistency, where the ignorance remains a complete “unknown unknown”. LLMs always “assume” they “know”. We repurpose these insights into a pedagogical experiment, encouraging SSH scholars and students to critically engage with LLMs in educational settings. We propose a hands-on approach based on critical technical practice, aiming to balance the practical utility of LLMs with an informed understanding of their limitations. This approach equips researchers with the skills to use LLMs effectively while promoting a deeper understanding of their operational principles and epistemic constraints.
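The self-consistency evaluation described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual protocol: the paraphrases, the `mock_llm` stand-in, and the agreement metric are all hypothetical placeholders; a real experiment would send each prompt variation to a model API and compare the returned answers.

```python
from collections import Counter

def self_consistency(answers):
    """Fraction of responses agreeing with the modal answer.

    1.0 means the model answered all prompt variations the same way;
    lower values signal the kind of inconsistency the essay treats
    as a marker of (unacknowledged) ignorance.
    """
    counts = Counter(a.strip().lower() for a in answers)
    most_common_count = counts.most_common(1)[0][1]
    return most_common_count / len(answers)

# Hypothetical stand-in for an LLM call, so the sketch is runnable.
def mock_llm(prompt):
    return "Paris" if "capital" in prompt.lower() else "unsure"

# Variations of the same knowledge-retrieval question.
paraphrases = [
    "What is the capital of France?",
    "Name the capital city of France.",
    "France's capital is which city?",
]
answers = [mock_llm(p) for p in paraphrases]
print(self_consistency(answers))  # 1.0 here: the mock always agrees with itself
```

The point of the metric is that it measures agreement across rephrasings, not correctness: a model can be perfectly self-consistent and still wrong, which is why the essay treats consistency as necessary but not sufficient for "knowing".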
Document type Article
Language English
Published at https://doi.org/10.6092/issn.1971-8853/19488
Downloads
19488-jacomy (Final published version)