Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items

Open Access
Authors
  • J. Jumelet
  • D. Hupkes
Publication date 2018
Host editors
  • T. Linzen
  • G. Chrupała
  • A. Alishahi
Book title The 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Book subtitle EMNLP 2018 : proceedings of the First Workshop : November 1, 2018, Brussels, Belgium
ISBN (electronic)
  • 9781948087711
Event 2018 EMNLP Workshop BlackboxNLP
Pages (from-to) 222-231
Publisher Stroudsburg, PA: The Association for Computational Linguistics
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
In this paper, we attempt to link the inner workings of a neural language model to linguistic theory, focusing on a complex phenomenon well discussed in formal linguistics: (negative) polarity items. We briefly discuss the leading hypotheses about the licensing contexts that allow negative polarity items and evaluate to what extent a neural language model has the ability to correctly process a subset of such constructions. We show that the model finds a relation between the licensing context and the negative polarity item and appears to be aware of the scope of this context, which we extract from a parse tree of the sentence. With this research, we hope to pave the way for other studies linking formal linguistics to deep learning.
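The abstract mentions extracting the scope of a licensing context from a parse tree of the sentence and checking whether the language model relates that context to the negative polarity item. As a rough illustration only (not the authors' procedure), the sketch below approximates a licensor's scope from an NLTK constituency tree; the licensor inventory, the clause-based heuristic, and the toy parse are all assumptions made for this example.

    from nltk import Tree

    # Toy inventory of negative licensors; an assumption for this sketch,
    # not the set of licensing contexts studied in the paper.
    NEGATIVE_LICENSORS = {"not", "n't", "nobody", "nothing", "never"}

    def licensing_scope(parse):
        """Approximate the scope of the first negative licensor as the words
        that follow it inside the smallest clause (S node) containing it."""
        clause = None
        for subtree in parse.subtrees(lambda t: t.label() == "S"):
            words = [w.lower() for w in subtree.leaves()]
            if any(w in NEGATIVE_LICENSORS for w in words):
                if clause is None or len(words) < len(clause):
                    clause = words
        if clause is None:
            return None  # no licensor found: an NPI would be unlicensed here
        first = next(i for i, w in enumerate(clause) if w in NEGATIVE_LICENSORS)
        return clause[first + 1:]

    if __name__ == "__main__":
        # Illustrative parse of "Nobody has ever seen it" (bracketing made up
        # for this example, not taken from the paper's corpus).
        tree = Tree.fromstring(
            "(S (NP (NN Nobody)) (VP (VBZ has) (ADVP (RB ever)) "
            "(VP (VBN seen) (NP (PRP it)))))"
        )
        print(licensing_scope(tree))  # -> ['has', 'ever', 'seen', 'it']

With a scope extracted this way, one could then compare the probability a trained LSTM language model assigns to an NPI such as "ever" inside versus outside that scope; a model sensitive to licensing should prefer the NPI in the licensed position.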
Document type Conference contribution
Language English
Published at https://doi.org/10.18653/v1/W18-5424
Downloads
W18-5424 (Final published version)