Why is scaling up models of language evolution hard?

Open Access
Authors
  • I. van Rooij
  • M. Blokpoel
Publication date 2021
Book title 43rd Annual Meeting of the Cognitive Science Society (CogSci 2021)
Book subtitle Comparative Cognition: Animal Minds: Vienna, Austria, 26-29 July 2021
ISBN
  • 9781713835257
Series Proceedings of the Annual Meeting of the Cognitive Science Society
Event 43rd Annual Meeting of the Cognitive Science Society
Volume | Issue number 1
Pages (from-to) 209-215
Publisher Cognitive Science Society
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
Computational model simulations have been very fruitful for gaining insight into how the systematic structure we observe in the world's natural languages could have emerged through cultural evolution. However, these simulations operate on a toy scale compared to the size of actual human vocabularies, because of the prohibitive computational resource demands that simulations with larger lexicons would pose. Using computational complexity analysis, we show that this is not an implementational artifact but reflects a deeper theoretical issue: these models are, in their current formulation, computationally intractable. This has important theoretical implications, because it means there is no way of knowing whether the properties and regularities observed for the toy models would scale up. All is not lost, however: awareness of intractability allows us to face the issue of scaling head-on, and can guide the development of our theories.
Document type Conference contribution
Language English
Published at https://escholarship.org/uc/item/021734q4
Other links https://www.proceedings.com/60274.html