Limitations of Bayesian Leave-One-Out Cross-Validation for Model Selection
| | |
|---|---|
| Authors | Quentin F. Gronau, Eric-Jan Wagenmakers |
| Publication date | March 2019 |
| Journal | Computational Brain & Behavior |
| Volume | 2 |
| Issue number | 1 |
| Pages (from-to) | 1-11 |
| Abstract |
|---|
| Cross-validation (CV) is increasingly popular as a generic method to adjudicate between mathematical models of cognition and behavior. In order to measure model generalizability, CV quantifies out-of-sample predictive performance, and the CV preference goes to the model that predicted the out-of-sample data best. The advantages of CV include theoretic simplicity and practical feasibility. Despite its prominence, however, the limitations of CV are often underappreciated. Here, we demonstrate the limitations of a particular form of CV—Bayesian leave-one-out cross-validation or LOO—with three concrete examples. In each example, a data set of infinite size is perfectly in line with the predictions of a simple model (i.e., a general law or invariance). Nevertheless, LOO shows bounded and relatively modest support for the simple model. We conclude that CV is not a panacea for model selection. |
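The bounded-support phenomenon the abstract describes can be reproduced in closed form for a simple induction-style setup: all `n` observations are successes, exactly as a general-law model predicts, yet the LOO advantage of that model over an unrestricted alternative stays bounded as `n` grows. The following sketch is illustrative only and uses an assumed beta-binomial pair of models, not necessarily the paper's exact examples:

```python
import math

def loo_diff(n):
    """LOO log-score difference (in nats) between a general-law model and an
    unrestricted binomial model, when all n observations are successes.

    Assumed models (illustrative, not taken verbatim from the paper):
      M1: theta = 1 exactly -> each leave-one-out predictive for a success is 1,
          so its summed log score is 0.
      M2: theta ~ Beta(1, 1) -> after seeing the other n-1 successes the
          posterior is Beta(n, 1), whose predictive probability of a further
          success is n / (n + 1).
    """
    loo_m1 = 0.0                         # n terms of log(1)
    loo_m2 = n * math.log(n / (n + 1))   # n identical leave-one-out terms
    return loo_m1 - loo_m2

# The advantage of the general-law model grows with n but never exceeds
# 1 nat: n * log((n + 1) / n) -> 1 as n -> infinity, matching the
# "bounded and relatively modest support" reported in the abstract.
```

Even with data perfectly consistent with the general law, the per-observation LOO terms for the flexible model converge to those of the simple model, which is why the total evidence saturates instead of accumulating.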
| Document type | Article |
| Note | In special issue: Leave-one-out Cross-Validation, and Issues in Practical Model Selection. |
| Language | English |
| Published at | https://doi.org/10.1007/s42113-018-0011-7 |
| Downloads | Gronau-Wagenmakers2019_Article_LimitationsOfBayesianLeave-One (final published version) |