Are LLMs tools to understand human neurocognition during abstract reasoning?

Open Access
Publication date 08-2024
Event 2024 Conference on Cognitive Computational Neuroscience
Organisations
  • Faculty of Social and Behavioural Sciences (FMG) - Psychology Research Institute (PsyRes)
Abstract
Abstract reasoning, a key component of human intelligence, seems to have recently emerged in large language models (LLMs). If so, LLMs could help us provide a mechanistic explanation for the brain processes behind the abstract reasoning abilities of humans. In this study, we compared the performance of multiple LLMs to human performance in a visual abstract reasoning task. We found that while most LLMs cannot perform this task as well as human participants, some LLMs are competent enough for use as potential descriptive models. We propose that the best-performing LLMs can be used as models to understand human performance, response times, and the timing of Event-Related Potentials (ERPs) as recorded by electroencephalography (EEG) during the task. We show initial behavioral and ERP results, and present our plan to compare LLM embeddings and surprisal measures to cortical activity patterns. This is the first step in a larger project to create neurally-informed artificial networks as tools to understand human neurocognition.
Document type Paper
Language English
Published at https://2024.ccneuro.org/poster/?id=380