Rating written performance: What do raters do and why?
| Authors | |
|---|---|
| Publication date | 2014 |
| Journal | Language Testing |
| Volume | 31 |
| Issue number | 3 |
| Pages (from-to) | 329-348 |
| Number of pages | 20 |
| Organisations | |
| Abstract |
This study investigates the relationship in L2 writing between raters’ judgments of communicative adequacy and linguistic complexity, assessed on six-point Likert scales, and general measures of linguistic performance. The participants were 39 learners of Italian and 32 learners of Dutch, who each wrote two short argumentative essays. The same writing tasks were administered to a control group of 18 native writers of Italian and 17 of Dutch. During a panel discussion, raters were asked to verbalize the reasons for assigning a text to a particular rating level. The results show that although raters’ judgments of communicative adequacy largely corresponded to their judgments of linguistic complexity, the findings for L2 and L1 differed. In L2, overall ratings of linguistic complexity correlated with lexical diversity and accuracy, but not with syntactic complexity. In L1, hardly any correlations were found between raters’ judgments and general measures of syntactic complexity and lexical diversity. Furthermore, raters used different strategies when assessing high- and low-proficiency L2 writers or native writers, and seemed to attach more importance to textual features connected to communicative adequacy than to linguistic complexity and accuracy.
| Document type | Article |
| Language | English |
| DOI | https://doi.org/10.1177/0265532214526174 |
| Permalink to this page | |
