Sequential Tests for Large-Scale Learning

Open Access
Authors
Publication date 2016
Journal Neural Computation
Volume 28, Issue 1
Pages 45-70
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
We argue that when faced with big data sets, learning and inference algorithms should compute updates using only subsets of data items. We introduce algorithms that use sequential hypothesis tests to adaptively select such a subset of data points. The statistical properties of this subsampling process can be used to control the efficiency and accuracy of learning or inference. In the context of learning by optimization, we test for the probability that the update direction is no more than 90 degrees in the wrong direction. In the context of posterior inference using Markov chain Monte Carlo, we test for the probability that our decision to accept or reject a sample is wrong. We experimentally evaluate our algorithms on a number of models and data sets.
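The first idea in the abstract, testing whether a subsampled update direction is no more than 90 degrees from the true descent direction, can be illustrated with a simple sequential stopping rule. The sketch below sequentially consumes per-data-point dot products g_i · d (each data point's gradient dotted with the proposed direction d) and stops as soon as it is confident about the sign of their mean. The function name, the batch-wise z-test stopping rule, and the parameter choices are illustrative assumptions, not the paper's exact sequential hypothesis test.

```python
import math
import random

def normal_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sequential_direction_test(dots, alpha=0.05, batch=50):
    """Sequentially decide whether the mean of per-point dot products
    g_i . d is positive, i.e. whether the proposed update direction d
    is no more than 90 degrees from the true (full-data) direction.

    `dots` is an iterable of per-data-point dot products. Every `batch`
    points we run a z-test on the running mean and stop as soon as we
    are confident at level `alpha` in either direction. Illustrative
    sketch only; the stopping rule is an assumption, not the paper's
    exact procedure. Returns (decision, points_used).
    """
    n, s, ss = 0, 0.0, 0.0
    for x in dots:
        n += 1
        s += x
        ss += x * x
        if n % batch == 0:
            mean = s / n
            var = max(ss / n - mean * mean, 1e-12)
            z = mean / math.sqrt(var / n)
            p_nonpos = 1.0 - normal_cdf(z)  # plausibility of mean <= 0
            if p_nonpos < alpha:
                return True, n    # confident the direction is acceptable
            if p_nonpos > 1.0 - alpha:
                return False, n   # confident the direction is wrong
    # Budget exhausted: fall back to the sign of the observed mean.
    return (s > 0), n

if __name__ == "__main__":
    # Synthetic dot products with a clearly positive mean: the test
    # should accept the direction after looking at only a small subset.
    random.seed(0)
    dots = [random.gauss(0.5, 1.0) for _ in range(10000)]
    decision, used = sequential_direction_test(dots)
    print(decision, used)
```

On data whose dot products have a clearly positive (or negative) mean, the test typically stops after one or two batches, which is the efficiency gain the abstract refers to: the harder the decision, the more data points the test draws before committing.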
Document type Article
Language English
Published at http://dx.doi.org/10.1162/NECO_a_00796
Downloads
seqHT_NC_accepted (Submitted manuscript)