Do large language models solve verbal analogies like children do?
| Authors | |
|---|---|
| Publication date | 2025 |
| Host editors | |
| Book title | The 29th Conference on Computational Natural Language Learning (CoNLL 2025) : Proceedings of the Conference |
| Book subtitle | CoNLL 2025 : July 31-August 1, 2025 |
| ISBN (electronic) | |
| Event | 29th Conference on Computational Natural Language Learning |
| Pages (from-to) | 627-639 |
| Publisher | Kerrville, TX: Association for Computational Linguistics |
| Organisations | |
| Abstract |
Analogy-making lies at the heart of human cognition. Adults solve analogies such as "horse belongs to stable like chicken belongs to …?" by mapping relations (kept in) and answering "chicken coop". In contrast, young children often use association, e.g., answering "egg". This paper investigates whether large language models (LLMs) solve verbal analogies in A:B::C:? form using associations, as children do. We use verbal analogies extracted from an online learning environment in which 14,006 7- to 12-year-olds from the Netherlands solved 872 analogies in Dutch. The eight tested LLMs performed at or above the level of children, with some models approaching adult performance estimates. However, when we control for solving by association, this picture changes. We conclude that the LLMs we tested rely heavily on association, as young children do. However, LLMs make different errors than children, and association does not fully explain their superior performance on this children's verbal analogy task. Future work will investigate whether LLMs' associations and errors are more similar to adult relational reasoning.
| Document type | Conference contribution |
| Language | English |
| Published at | https://doi.org/10.48550/arXiv.2310.20384, https://doi.org/10.18653/v1/2025.conll-1.40 |
| Downloads | 2025.conll-1.40 (Final published version) |
| Permalink to this page | |
