Invariance in deep representations

Open Access
Award date 14-10-2022
Number of pages 188
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
In this thesis, Invariance in Deep Representations, we propose novel solutions to the problem of learning invariant representations. We adopt two distinct notions of invariance: one rooted in symmetry groups and the other in causality. Although the two notions were developed independently of each other, we also take a first step towards unifying them. The thesis consists of four main parts: (i) We propose a neural-network-based, permutation-invariant aggregation operator that corresponds to the attention mechanism, and we develop a novel approach to set classification on top of it. (ii) We use causal concepts to explain the success of data augmentation, describing how augmentations can weaken the spurious correlation between the observed domains and the task labels, and we demonstrate that data augmentation can serve as a tool for simulating interventional data. (iii) We propose a novel causal reduction method that replaces an arbitrary number of possibly high-dimensional latent confounders with a single latent confounder that lives in the same space as the treatment variable, without changing the observational and interventional distributions entailed by the causal model. After the reduction, we parameterize the reduced causal model using a flexible class of transformations, so-called normalizing flows. (iv) We propose the Domain Invariant Variational Autoencoder (DIVA), a generative model that tackles domain shift by learning three independent latent subspaces: one for the domain, one for the class, and one for any residual variation.
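To make (i) concrete, the following is a minimal PyTorch sketch of attention-based, permutation-invariant set pooling; the module name, layer sizes, and usage are illustrative assumptions, not the code from the thesis.

```python
# Minimal sketch of permutation-invariant attention pooling (assumption:
# PyTorch; module and parameter names are illustrative, not the thesis code).
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Aggregates a set of instance embeddings into one set embedding.

    The weighted average is invariant to the order of the instances,
    because the attention weights are computed per instance and
    normalized over the whole set.
    """
    def __init__(self, embed_dim: int, hidden_dim: int):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (set_size, embed_dim) -- one set of instance embeddings
        a = torch.softmax(self.score(h), dim=0)   # (set_size, 1) weights
        return (a * h).sum(dim=0)                 # (embed_dim,) set embedding

# Usage: the pooled result is unchanged under any permutation of the set.
pool = AttentionPool(embed_dim=16, hidden_dim=8)
h = torch.randn(5, 16)
perm = torch.randperm(5)
assert torch.allclose(pool(h), pool(h[perm]), atol=1e-6)
```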
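For (ii), here is a toy numerical sketch of augmentation acting like a simulated intervention; the data, the `color` attribute, and the correlation strength are invented solely for illustration.

```python
# Toy sketch: augmentation as a simulated intervention (all data invented).
# In the raw data, a nuisance "color" feature is spuriously correlated with
# the label; randomizing color during training mimics an intervention that
# cuts the confounding path, so the correlation vanishes.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
label = rng.integers(0, 2, size=n)
# Spurious correlation: color almost always matches the label.
color = np.where(rng.random(n) < 0.95, label, 1 - label)
print("corr(label, color) before:", np.corrcoef(label, color)[0, 1])

# "Augmentation" = re-sample color independently of everything else,
# analogous to do(color := random) in the causal graph.
color_aug = rng.integers(0, 2, size=n)
print("corr(label, color) after :", np.corrcoef(label, color_aug)[0, 1])
```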
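For (iii), the sketch below shows a generic affine coupling layer, one standard building block of normalizing flows; it illustrates only the class of transformations mentioned, not the causal reduction method itself, and all names and sizes are assumptions.

```python
# Generic sketch of a normalizing-flow building block (an affine coupling
# layer): an invertible transform with a tractable log-det Jacobian.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.half = dim // 2
        # Conditioner network predicts scale and shift for the second half.
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x: torch.Tensor):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(x1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)          # keep scales well-behaved
        y2 = x2 * torch.exp(log_s) + t     # invertible given x1
        log_det = log_s.sum(dim=1)         # log |det Jacobian|
        return torch.cat([x1, y2], dim=1), log_det

x = torch.randn(4, 6)
y, log_det = AffineCoupling(dim=6, hidden=32)(x)
print(y.shape, log_det.shape)  # torch.Size([4, 6]) torch.Size([4])
```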
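For (iv), a minimal sketch of DIVA's three-way latent split under the usual Gaussian reparameterization; the encoder architecture, dimensions, and names are illustrative, and the auxiliary classifiers and decoder are omitted.

```python
# Minimal sketch of a three-way latent split (names and sizes illustrative):
# separate subspaces for domain (z_d), class (z_y), and residual variation
# (z_x), each with its own Gaussian posterior head.
import torch
import torch.nn as nn

class TriEncoder(nn.Module):
    def __init__(self, x_dim: int, zd: int, zy: int, zx: int):
        super().__init__()
        def head(z):  # one Gaussian posterior head per subspace
            return nn.Linear(x_dim, 2 * z)
        self.q_zd, self.q_zy, self.q_zx = head(zd), head(zy), head(zx)

    @staticmethod
    def sample(params):
        mu, log_var = params.chunk(2, dim=1)  # reparameterization trick
        return mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)

    def forward(self, x):
        # Three independent subspaces; in DIVA, auxiliary classifiers on
        # z_d and z_y (not shown) push them to capture domain and class.
        z_d = self.sample(self.q_zd(x))
        z_y = self.sample(self.q_zy(x))
        z_x = self.sample(self.q_zx(x))
        return z_d, z_y, z_x

enc = TriEncoder(x_dim=20, zd=4, zy=4, zx=4)
z_d, z_y, z_x = enc(torch.randn(8, 20))
print(z_d.shape, z_y.shape, z_x.shape)
```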
Document type PhD thesis
Language English