Learnability and Semantic Universals

Open Access
Publication date 2019
Journal Semantics and Pragmatics
Article number 4
Volume 12
Number of pages 35
Organisations
  • Institute for Logic, Language and Computation (ILLC)
Abstract
One of the great successes of the application of generalized quantifiers to natural language has been the ability to formulate robust semantic universals. When such a universal is attested, the question arises as to its source. In this paper, we explore the hypothesis that many semantic universals arise because expressions satisfying the universal are easier to learn than those that do not. While the idea that learnability explains universals is not new, explicit accounts of learning that can make good on this hypothesis are few and far between. We propose a model of learning (back-propagation through a recurrent neural network) that can make good on this promise. In particular, we discuss the universals of monotonicity, quantity, and conservativity, and perform computational experiments in which such a network is trained to verify quantifiers. Our results explain monotonicity and quantity quite well. We suggest that conservativity may have a different source than the other universals.
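The experimental setup the abstract describes can be sketched in miniature. The snippet below is an illustrative toy, not the paper's actual architecture or hyperparameters: it trains a small Elman RNN by back-propagation through time to verify the upward-monotone quantifier "at least three As are Bs" over randomly generated scenes, encoding each object by which of the four zones of A and B it occupies. All sizes, the learning rate, and the quantifier chosen here are assumptions made for the sake of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each object in a scene falls in one of four zones (one-hot encoded):
# 0 = in A and B, 1 = in A only, 2 = in B only, 3 = in neither.
SEQ_LEN, N_TYPES, HIDDEN = 10, 4, 12

def make_scene():
    seq = rng.integers(0, N_TYPES, SEQ_LEN)
    x = np.eye(N_TYPES)[seq]                 # (SEQ_LEN, 4) one-hot inputs
    label = float((seq == 0).sum() >= 3)     # "at least three As are Bs"
    return x, label

# Tiny Elman RNN: tanh hidden state, sigmoid readout on the final state.
Wx = rng.normal(0, 0.5, (N_TYPES, HIDDEN))
Wh = rng.normal(0, 0.5, (HIDDEN, HIDDEN))
wo = rng.normal(0, 0.5, HIDDEN)

def forward(x):
    hs = [np.zeros(HIDDEN)]
    for t in range(SEQ_LEN):
        hs.append(np.tanh(x[t] @ Wx + hs[-1] @ Wh))
    p = 1 / (1 + np.exp(-(hs[-1] @ wo)))     # probability the quantifier holds
    return hs, p

def accuracy(n=300):
    correct = 0
    for _ in range(n):
        x, y = make_scene()
        _, p = forward(x)
        correct += (p > 0.5) == y
    return correct / n

lr = 0.05
for step in range(5000):                     # per-example SGD
    x, y = make_scene()
    hs, p = forward(x)
    d_out = p - y                            # cross-entropy gradient at the logit
    g_wo = d_out * hs[-1]
    dh = d_out * wo
    gWx, gWh = np.zeros_like(Wx), np.zeros_like(Wh)
    for t in reversed(range(SEQ_LEN)):       # back-propagation through time
        dz = dh * (1 - hs[t + 1] ** 2)       # through the tanh nonlinearity
        gWx += np.outer(x[t], dz)
        gWh += np.outer(hs[t], dz)
        dh = Wh @ dz
    # Elementwise clipping keeps the recurrent gradients from exploding.
    Wx -= lr * np.clip(gWx, -1, 1)
    Wh -= lr * np.clip(gWh, -1, 1)
    wo -= lr * np.clip(g_wo, -1, 1)

print(f"accuracy after training: {accuracy():.2f}")
```

Because labels here are roughly balanced (about 47% positive), post-training accuracy well above 0.5 indicates that the network has learned something about counting the A-and-B zone, which is the kind of verification behavior the paper's experiments measure.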
Document type Article
Language English
Published at https://doi.org/10.3765/sp.12.4
Also available at https://semanticsarchive.net/Archive/mQ2Y2Y2Z/LearnabilitySemanticUniversals.pdf