Empirical tests of the Gradual Learning Algorithm
| Authors | |
|---|---|
| Publication date | 1999 |
| Journal | Rutgers Optimality Archive |
| Volume | |
| Issue number | 348 |
| Number of pages | 37 |
| Organisations | |
| Abstract | The Gradual Learning Algorithm (Boersma 1997) is a constraint ranking algorithm for learning Optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and Smolensky (1993, 1996, 1998), which initiated the learnability research program for Optimality Theory. We argue that the Gradual Learning Algorithm has a number of special advantages: it can learn free variation, avoid failure when confronted with noisy learning data, and account for gradient well-formedness judgments. The case studies we examine involve Ilokano reduplication and metathesis, Finnish genitive plurals, and the distribution of English light and dark /l/. |
| Document type | Article |
| Note | September 29, 1999 |
| Language | English |
| Published at | http://roa.rutgers.edu/article/view/358 |
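The abstract's claims about free variation and noise-robustness follow from the GLA's error-driven update rule: each constraint carries a real-valued ranking, evaluation adds Gaussian noise to every ranking, and on a mismatch with the learning datum the constraints favoring the wrong output are demoted while those favoring the datum are promoted, each by a small plasticity value. The following is a minimal sketch of that idea, not the paper's implementation; the constraint names (`C1`, `C2`), the toy candidate set, and all numeric settings are invented for illustration.

```python
import random

random.seed(1)

# Toy candidate set: each candidate maps constraint names to violation counts.
# (Invented for illustration; not from the article.)
CANDIDATES = {
    "cand_a": {"C1": 1, "C2": 0},
    "cand_b": {"C1": 0, "C2": 1},
}

def evaluate(rankings, noise=2.0):
    """Stochastic OT evaluation: add noise to each ranking, rank the
    constraints, and pick the candidate whose violation profile is best
    on the highest-ranked constraints."""
    noisy = {c: r + random.gauss(0.0, noise) for c, r in rankings.items()}
    order = sorted(noisy, key=noisy.get, reverse=True)
    def profile(cand):
        return [CANDIDATES[cand][c] for c in order]
    return min(CANDIDATES, key=profile)

def gla_update(rankings, datum, plasticity=0.1, noise=2.0):
    """Error-driven GLA step: on a mismatch, demote constraints that
    prefer the learner's wrong output and promote those that prefer
    the observed datum."""
    output = evaluate(rankings, noise)
    if output == datum:
        return
    for c in rankings:
        if CANDIDATES[output][c] < CANDIDATES[datum][c]:
            rankings[c] -= plasticity   # favors the wrong output: demote
        elif CANDIDATES[output][c] > CANDIDATES[datum][c]:
            rankings[c] += plasticity   # favors the datum: promote

# Train on data exhibiting free variation: cand_a appears 70% of the time.
rankings = {"C1": 100.0, "C2": 100.0}
for _ in range(10000):
    datum = "cand_a" if random.random() < 0.7 else "cand_b"
    gla_update(rankings, datum)
# C2 should settle above C1, but close enough that evaluation noise
# still lets cand_b win occasionally, matching the observed variation.
```

Because the rankings end up separated by only a small margin relative to the evaluation noise, the learned grammar reproduces the variable output frequencies rather than collapsing onto a single winner, which is the behavior the abstract contrasts with Constraint Demotion.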