Optimization, games and generalization bounds

Open Access
Award date 09-09-2024
Number of pages 190
Organisations
  • Faculty of Science (FNWI) - Korteweg-de Vries Institute for Mathematics (KdVI)
Abstract
This thesis treats several aspects of theoretical machine learning, namely optimization, game theory, and generalization bounds, and is accordingly structured in three parts. Part I concerns optimization in machine learning. Specifically, we prove new regret bounds for online convex optimization problems that fall between the stochastic and adversarial regimes. Furthermore, we provide new insights into the behavior of several first-order algorithms on time-varying variational inequalities. These results yield dynamic regret bounds for strongly convex optimization problems and equilibrium-tracking guarantees for time-varying games, and hence connect to Part III, which addresses the game-theoretic aspects of machine learning. There, we prove the first non-trivial lower bound on the query complexity of computing Nash equilibria in zero-sum games. We also introduce an online feasible point method for generalized Nash equilibrium problems; for a subclass of generalized games, we show that convergence to a generalized Nash equilibrium is guaranteed while every iterate remains feasible. In Part II we establish algorithm- and data-dependent generalization guarantees via a new notion of algorithm-dependent Rademacher complexity, and we derive geometrically interpretable bounds in terms of the fractal dimension of the output set of an algorithm.
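For orientation, the central quantity of Part I is the regret of an online learner. In standard notation (ours, not necessarily the thesis's), a learner playing points x_t against convex losses f_t over T rounds incurs

    R_T = \sum_{t=1}^{T} f_t(x_t) - \min_{x \in \mathcal{X}} \sum_{t=1}^{T} f_t(x).

In the adversarial model the losses f_t may be chosen arbitrarily, while in the stochastic model they are drawn i.i.d.; the regimes "between stochastic and adversarial" mentioned above interpolate between these two extremes.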
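The time-varying variational inequalities of Part I generalize both convex minimization and equilibrium computation. In the standard static formulation (again in our notation, not necessarily the thesis's), given an operator F and a feasible set \mathcal{Z}, one seeks a point z^* \in \mathcal{Z} such that

    \langle F(z^*), z - z^* \rangle \ge 0 \quad \text{for all } z \in \mathcal{Z};

in the time-varying setting the operator (and possibly the set) changes between iterations, and the goal becomes tracking the moving solution.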
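For the query-complexity result of Part III, recall the classical setting: in a two-player zero-sum game with payoff matrix A, mixed strategies (x^*, y^*) over the probability simplices \Delta_m and \Delta_n form a Nash equilibrium exactly when they attain the minimax value,

    \max_{x \in \Delta_m} \min_{y \in \Delta_n} x^\top A y = \min_{y \in \Delta_n} \max_{x \in \Delta_m} x^\top A y = (x^*)^\top A y^*.

Query complexity then asks how many entries of A an algorithm must inspect before it can output such an equilibrium.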
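Finally, the classical empirical Rademacher complexity underlying Part II is, for a function class \mathcal{F} and a sample S = (z_1, \dots, z_n),

    \hat{\mathfrak{R}}_S(\mathcal{F}) = \mathbb{E}_{\sigma} \Big[ \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} \sigma_i f(z_i) \Big],

where the \sigma_i are i.i.d. uniform \pm 1 signs. The algorithm-dependent variant introduced in the thesis restricts attention to the outputs an algorithm can actually produce; the classical form shown here is only the standard starting point.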
Document type PhD thesis
Language English