Experiential Semantic Information and Brain Alignment: Are Multimodal Models Better than Language Models?

Open Access
Authors
Publication date 2025
Host editors
  • Gemma Boleda
  • Michael Roth
Book title The 29th Conference on Computational Natural Language Learning (CoNLL 2025): Proceedings of the Conference
Book subtitle CoNLL 2025: July 31–August 1, 2025
ISBN (electronic)
  • 9798891762718
Event 29th Conference on Computational Natural Language Learning
Pages (from-to) 141-155
Publisher Kerrville, TX: Association for Computational Linguistics
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
A common assumption in Computational Linguistics is that the text representations learnt by multimodal models are richer and more human-like than those learnt by language-only models, since they are grounded in images or audio, much as human language is grounded in real-world experience. However, empirical studies testing this assumption are largely lacking. We address this gap by comparing word representations from contrastive multimodal models and language-only models with respect to how well they capture experiential information (as defined by an existing norm-based ‘experiential model’) and how well they align with human fMRI responses. Surprisingly, our results indicate that language-only models are superior to multimodal ones in both respects. They also learn more unique brain-relevant semantic information beyond that shared with the experiential model. Overall, our study highlights the need for computational models that better integrate the complementary semantic information provided by multimodal data sources.
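The abstract does not spell out how "alignment with human fMRI responses" is operationalized, and the paper's exact protocol is not reproduced here. As a rough, hedged illustration only, the sketch below shows one common way such alignment is quantified in the literature: a voxel-wise encoding model that maps word embeddings to fMRI responses via cross-validated ridge regression, scoring held-out predictions by mean Pearson correlation. It assumes scikit-learn, and all variable names and the synthetic data are placeholders, not the authors' materials.

import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Synthetic stand-ins: 500 words, 300-dim word embeddings, 1000 fMRI voxels.
n_words, n_dims, n_voxels = 500, 300, 1000
embeddings = rng.standard_normal((n_words, n_dims))  # word vectors from some model
fmri = rng.standard_normal((n_words, n_voxels))      # per-word brain responses

def brain_alignment(X, Y, n_splits=5):
    """Cross-validated encoding score: Pearson r between predicted and
    held-out voxel responses, averaged over voxels and folds."""
    fold_scores = []
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train, test in kf.split(X):
        # Ridge regression with internal selection of the penalty strength;
        # fits all voxels jointly (multi-output regression).
        model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X[train], Y[train])
        pred = model.predict(X[test])
        # Per-voxel Pearson r: z-score columns, then average the products.
        pred_z = (pred - pred.mean(0)) / pred.std(0)
        true_z = (Y[test] - Y[test].mean(0)) / Y[test].std(0)
        r = (pred_z * true_z).mean(0)
        fold_scores.append(r.mean())
    return float(np.mean(fold_scores))

print(f"alignment score: {brain_alignment(embeddings, fmri):.3f}")

Comparing this score for embeddings from a multimodal model against those from a language-only model is one standard way to make the kind of comparison the abstract describes; the study itself may use a different metric (e.g., representational similarity analysis).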
Document type Conference contribution
Language English
Published at https://doi.org/10.18653/v1/2025.conll-1.10
Downloads
2025.conll-1.10 (Final published version)