Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items
| Authors | |
|---|---|
| Publication date | 2018 |
| Host editors | |
| Book title | The 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP |
| Book subtitle | EMNLP 2018 : proceedings of the First Workshop : November 1, 2018, Brussels, Belgium |
| ISBN (electronic) | |
| Event | 2018 EMNLP Workshop BlackboxNLP |
| Pages (from-to) | 222-231 |
| Publisher | Stroudsburg, PA: The Association for Computational Linguistics |
| Organisations | |
| Abstract | In this paper, we attempt to link the inner workings of a neural language model to linguistic theory, focusing on a complex phenomenon well discussed in formal linguistics: (negative) polarity items. We briefly discuss the leading hypotheses about the licensing contexts that allow negative polarity items and evaluate to what extent a neural language model has the ability to correctly process a subset of such constructions. We show that the model finds a relation between the licensing context and the negative polarity item and appears to be aware of the scope of this context, which we extract from a parse tree of the sentence. With this research, we hope to pave the way for other studies linking formal linguistics to deep learning. |
| Document type | Conference contribution |
| Language | English |
| Published at | https://doi.org/10.18653/v1/W18-5424 |
| Downloads | W18-5424 (Final published version) |