Beyond Local Nash Equilibria for Adversarial Networks

Authors
  • R. Groß
Publication date 2019
Host editors
  • M. Atzmueller
  • W. Duivesteijn
Book title Artificial Intelligence
Book subtitle 30th Benelux Conference, BNAIC 2018, 's-Hertogenbosch, The Netherlands, November 8–9, 2018: Revised Selected Papers
ISBN
  • 9783030319779
ISBN (electronic)
  • 9783030319786
Series Communications in Computer and Information Science
Event 30th Benelux Conference on Artificial Intelligence, BNAIC 2018
Pages (from-to) 73-89
Publisher Cham: Springer
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Save for some special cases, current training methods for Generative Adversarial Networks (GANs) are at best guaranteed to converge to a ‘local Nash equilibrium’ (LNE). Such LNEs, however, can be arbitrarily far from an actual Nash equilibrium (NE), which implies that there are no guarantees on the quality of the generator or classifier that is found. This paper proposes to model GANs explicitly as finite games in mixed strategies, thereby ensuring that every LNE is an NE. We use the Parallel Nash Memory as a solution method, which is proven to monotonically converge to a resource-bounded Nash equilibrium. We empirically demonstrate that our method is less prone to typical GAN problems such as mode collapse and produces solutions that are less exploitable than those produced by GANs and MGANs.
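The abstract's notion of exploitability can be made concrete. In a finite two-player zero-sum game, a mixed-strategy profile is a Nash equilibrium exactly when neither player gains by deviating, so the sum of both players' best-response gains measures how far a profile is from an NE. The following is a minimal illustrative sketch of that measure (not code from the paper; the payoff matrix and function names are this example's own assumptions):

```python
# Hedged sketch: exploitability of a mixed-strategy profile in a finite
# two-player zero-sum game. A[i][j] is the row player's payoff; the column
# player receives -A[i][j]. Exploitability 0 characterises a Nash equilibrium.

def expected_payoff(A, x, y):
    """Row player's expected payoff under mixed strategies x (rows), y (cols)."""
    return sum(x[i] * A[i][j] * y[j]
               for i in range(len(A)) for j in range(len(A[0])))

def exploitability(A, x, y):
    """Sum of each player's best pure-response gain over the profile (x, y)."""
    v = expected_payoff(A, x, y)
    # Row player's best pure response against y (maximises row payoff).
    best_row = max(sum(A[i][j] * y[j] for j in range(len(A[0])))
                   for i in range(len(A)))
    # Column player's best pure response against x (minimises row payoff).
    best_col = min(sum(x[i] * A[i][j] for i in range(len(A)))
                   for j in range(len(A[0])))
    return (best_row - v) + (v - best_col)

# Matching pennies: the uniform mixture is the unique NE (exploitability 0),
# whereas any pure-strategy profile can be exploited.
A = [[1, -1], [-1, 1]]
print(exploitability(A, [0.5, 0.5], [0.5, 0.5]))  # 0.0
print(exploitability(A, [1.0, 0.0], [1.0, 0.0]))  # 2.0
```

The paper's reported comparison against GANs and MGANs uses exactly this kind of best-response quantity as its quality criterion: a solution is better when its exploitability is lower.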
Document type Conference contribution
Language English
Published at https://doi.org/10.1007/978-3-030-31978-6_7