Towards a Robuster Interpretive Parsing: learning from overt forms in Optimality Theory

Open Access
Authors
Publication date 2013
Journal Journal of Logic, Language and Information
Volume 22, Issue 2
Pages (from-to) 139-172
Number of pages 34
Organisations
  • Faculty of Humanities (FGw) - Amsterdam Institute for Humanities Research (AIHR) - Amsterdam Center for Language and Communication (ACLC)
Abstract
The input data to grammar learning algorithms often consist of overt forms that do not contain full structural descriptions. This lack of information may contribute to the failure of learning. Past work on Optimality Theory introduced Robust Interpretive Parsing (RIP) as a partial solution to this problem. We generalize RIP and suggest replacing the winner candidate with a weighted mean violation of the potential winner candidates. A Boltzmann distribution is introduced on the winner set, and the distribution's parameter T is gradually decreased. Finally, we show that GRIP, the Generalized Robust Interpretive Parsing algorithm, significantly improves the learning success rate in a model with standard constraints for metrical stress assignment.
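The core idea in the abstract — a Boltzmann distribution over the potential winner set, whose weighted mean violation profile replaces the single winner, with the temperature T gradually lowered — can be sketched as follows. This is an illustrative reconstruction only, not the paper's actual implementation; all names, the toy candidate set, and the per-candidate "energy" values are assumptions.

```python
import math

def boltzmann_weights(energies, T):
    """Boltzmann distribution over candidates at temperature T.

    'energies' is one illustrative score per candidate (lower = better);
    the minimum is subtracted for numerical stability before exponentiating.
    """
    m = min(energies)
    w = [math.exp(-(e - m) / T) for e in energies]
    z = sum(w)
    return [x / z for x in w]

def mean_violation_profile(violations, energies, T):
    """Weighted mean of the candidates' constraint-violation vectors.

    'violations[i][c]' is candidate i's violation count on constraint c.
    The Boltzmann weights play the role of the distribution on the winner set.
    """
    p = boltzmann_weights(energies, T)
    n_constraints = len(violations[0])
    return [sum(p[i] * violations[i][c] for i in range(len(violations)))
            for c in range(n_constraints)]

# Toy example: three candidates, two constraints.
violations = [[0, 2], [1, 1], [2, 0]]
energies = [2.0, 1.5, 2.0]          # hypothetical scores, not from the paper
for T in (10.0, 1.0, 0.1):          # T is gradually decreased during learning
    print(T, mean_violation_profile(violations, energies, T))
```

At high T the weights are nearly uniform, so the mean profile averages over all potential winners; as T decreases, the distribution concentrates on the lowest-energy candidate and the mean profile approaches that single winner's violation vector.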
Document type Article
Language English
Published at https://doi.org/10.1007/s10849-013-9172-x