Learning to Learn with Variational Information Bottleneck for Domain Generalization

Open Access
Authors
  • Y. Du
  • J. Xu
  • H. Xiong
  • Q. Qiu
Publication date 2020
Host editors
  • A. Vedaldi
  • H. Bischof
  • T. Brox
  • J.M. Frahm
Book title Computer Vision – ECCV 2020
Book subtitle 16th European Conference, Glasgow, UK, August 23–28, 2020: Proceedings
ISBN
  • 9783030586065
ISBN (electronic)
  • 9783030586072
Series Lecture Notes in Computer Science
Event 16th European Conference on Computer Vision
Volume | Issue number X
Pages (from-to) 200-216
Publisher Cham: Springer
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Domain generalization models learn to generalize to previously unseen domains, but suffer from prediction uncertainty and domain shift. In this paper, we address both problems. We introduce a probabilistic meta-learning model for domain generalization, in which the classifier parameters shared across domains are modeled as distributions. This enables better handling of prediction uncertainty on unseen domains. To deal with domain shift, we learn domain-invariant representations by a proposed principle of meta variational information bottleneck, which we call MetaVIB. MetaVIB is derived from novel variational bounds of mutual information by leveraging the meta-learning setting of domain generalization. Through episodic training, MetaVIB learns to gradually narrow domain gaps to establish domain-invariant representations, while simultaneously maximizing prediction accuracy. We conduct experiments on three benchmarks for cross-domain visual recognition. Comprehensive ablation studies validate the benefits of MetaVIB for domain generalization. Comparison results demonstrate that our method consistently outperforms previous approaches.
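As background for the abstract, a standard variational information bottleneck objective (the general principle MetaVIB builds on, not the paper's MetaVIB bounds themselves) can be sketched as follows. This is a minimal NumPy illustration under assumed shapes: a stochastic encoder emits a diagonal Gaussian q(z|x) with mean `mu` and log-variance `logvar`, and the loss trades off classification cross-entropy against a KL term to a standard normal prior, weighted by a hypothetical `beta`.

```python
import numpy as np

def vib_loss(mu, logvar, logits, labels, beta=1e-3):
    """Generic VIB-style objective (illustrative, not the paper's MetaVIB):
    cross-entropy plus beta-weighted KL(q(z|x) || N(0, I))."""
    # log-softmax of the classifier logits
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # mean cross-entropy over the batch
    ce = -logp[np.arange(len(labels)), labels].mean()
    # closed-form KL between a diagonal Gaussian and the standard normal
    kl = 0.5 * (np.exp(logvar) + mu**2 - 1.0 - logvar).sum(axis=1).mean()
    return ce + beta * kl

# toy batch: 4 samples, 2-dim latent code, 3 classes
rng = np.random.default_rng(0)
mu = rng.normal(size=(4, 2))
logvar = rng.normal(size=(4, 2))
logits = rng.normal(size=(4, 3))
labels = np.array([0, 1, 2, 0])
loss = vib_loss(mu, logvar, logits, labels)
print(float(loss))
```

In practice the latent code z would be sampled via the reparameterization trick before classification; the KL weight `beta` controls how aggressively domain-specific information is compressed out of the representation.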
Document type Conference contribution
Note With supplementary material.
Language English
Published at https://doi.org/10.1007/978-3-030-58607-2_12