Exponential Regret Bounds for Gaussian Process Bandits with Deterministic Observations

Open Access
Authors
Publication date 2012
Host editors
  • J. Langford
  • J. Pineau
Book title Proceedings of the Twenty-Ninth International Conference on Machine Learning. - Vol. 2
ISBN
  • 9781450312851
Event 29th International Conference on Machine Learning (ICML 2012)
Pages (from-to) 1743-1750
Publisher Madison, WI: International Machine Learning Society
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
This paper analyzes the problem of Gaussian process (GP) bandits with deterministic observations. The analysis uses a branch-and-bound algorithm that is related to the UCB algorithm of Srinivas et al. (2010). For GPs with Gaussian observation noise of strictly positive variance, Srinivas et al. proved that the regret vanishes at the approximate rate of O(1/\sqrt{t}), where t is the number of observations. To complement their result, we attack the deterministic case and attain a much faster exponential convergence rate. Under some regularity assumptions, we show that the regret decreases asymptotically according to O(e^{-τt/(\ln t)^{d/4}}) with high probability. Here, d is the dimension of the search space and τ is a constant that depends on the behaviour of the objective function near its global maximum.
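The deterministic setting the abstract describes can be illustrated with a minimal GP-UCB-style loop: noiseless observations are modeled by a GP with (numerically) zero observation noise, and each query maximizes the upper confidence bound μ + √β·σ. This is only a toy sketch of the general idea, not the paper's branch-and-bound method; the kernel, lengthscale, β, objective, and grid below are illustrative assumptions.

```python
import numpy as np

def rbf(A, B, ell=0.2):
    """Squared-exponential kernel (lengthscale ell is an assumed value)."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(X, y, Xs, jitter=1e-8):
    """GP posterior mean/std under deterministic (noise-free) observations.
    A tiny diagonal jitter keeps the Cholesky factorization stable."""
    K = rbf(X, X) + jitter * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(X, Xs)
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v * v, axis=0), 0.0, None)
    return mu, np.sqrt(var)

def gp_ucb(f, grid, T=25, beta=4.0):
    """Sequentially query the maximizer of the UCB mu + sqrt(beta)*sigma."""
    X = np.array([grid[0]])
    y = np.array([f(grid[0])])
    for _ in range(T - 1):
        mu, sigma = gp_posterior(X, y, grid)
        x_next = grid[np.argmax(mu + np.sqrt(beta) * sigma)]
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))  # exact value: no observation noise
    return X, y

f = lambda x: -(x - 0.3) ** 2   # hypothetical smooth objective, maximum at x = 0.3
grid = np.linspace(0.0, 1.0, 201)
X, y = gp_ucb(f, grid, T=25)
print(X[np.argmax(y)])           # best query found; close to 0.3
```

Because observations are exact, the posterior variance collapses to zero at every queried point, so the UCB rule never wastes queries re-sampling known locations — this is the feature the paper exploits to obtain exponential rather than O(1/\sqrt{t}) regret decay.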
Document type Conference contribution
Language English
Published at http://www.icml.cc/2012/papers/853.pdf