Incremental Sentence Processing Mechanisms in Autoregressive Transformer Language Models

Open Access
Authors
Publication date 2025
Host editors
  • Luis Chiruzzo
  • Alan Ritter
  • Lu Wang
Book title Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Proceedings of the Conference
Book subtitle NAACL 2025: April 29–May 4, 2025
ISBN (electronic)
  • 9798891761896
Event 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics
Volume 1
Pages (from-to) 3181–3203
Number of pages 23
Publisher Association for Computational Linguistics (Kerrville, TX)
Organisations
  • Interfaculty Research: Institute for Logic, Language and Computation (ILLC)
Abstract
Autoregressive transformer language models (LMs) possess strong syntactic abilities, often successfully handling phenomena from agreement to NPI licensing. However, the features they use to incrementally process language inputs are not well understood. In this paper, we fill this gap by studying the mechanisms underlying garden path sentence processing in LMs. We ask: (1) Do LMs use syntactic features or shallow heuristics to perform incremental sentence processing? (2) Do LMs represent only one potential interpretation, or multiple? And (3) do LMs reanalyze or repair their initial incorrect representations? To address these questions, we use sparse autoencoders to identify interpretable features that determine which continuation, and thus which reading, of a garden path sentence the LM prefers. We find that while many important features relate to syntactic structure, some reflect syntactically irrelevant heuristics. Moreover, while most active features correspond to one reading of the sentence, some correspond to the other, suggesting that LMs assign weight to both possibilities simultaneously. Finally, LMs do not reuse features from garden path sentence processing to answer follow-up questions.
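The abstract's method rests on sparse autoencoders (SAEs) trained on LM activations to extract interpretable features. As a minimal sketch only, not the authors' implementation: the dimensions, the 8x expansion factor, the L1 coefficient, and the training loop below are all illustrative assumptions, the standard SAE recipe looks roughly like this in PyTorch:

```python
# Minimal sparse autoencoder (SAE) sketch: a single-layer overcomplete
# autoencoder with an L1 sparsity penalty on its hidden activations,
# the common recipe for extracting interpretable features from
# transformer residual-stream activations. All hyperparameters here
# are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor):
        f = torch.relu(self.encoder(x))  # sparse feature activations
        x_hat = self.decoder(f)          # reconstruction of the input
        return x_hat, f

# Assumed setup: cached LM activations of shape [n_tokens, d_model];
# random tensors stand in for real activations here.
d_model, d_hidden = 768, 8 * 768  # 8x expansion is an assumption
sae = SparseAutoencoder(d_model, d_hidden)
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
l1_coeff = 1e-3  # sparsity strength (assumed value)

activations = torch.randn(4096, d_model)
for _ in range(100):
    x_hat, f = sae(activations)
    # Reconstruction loss plus L1 penalty encouraging few active features.
    loss = ((x_hat - activations) ** 2).mean() + l1_coeff * f.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, individual hidden units of such an SAE can be inspected as candidate features; the abstract's analysis asks which of these features push the LM toward one reading of a garden path sentence or the other.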
Document type Conference contribution
Language English
Published at
  • https://doi.org/10.18653/v1/2025.naacl-long.164 (ACL Anthology)
  • https://doi.org/10.48550/arXiv.2412.05353 (arXiv preprint)
Downloads
  • 2025.naacl-long.164 (Final published version)