Functional representation learning for uncertainty quantification and fast skill transfer

Open Access
Award date 23-12-2022
ISBN
  • 9789083295114
Number of pages 157
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
This thesis addresses functional representation learning for uncertainty quantification and fast skill transfer. Real-world scenarios place increasing practical demands on deep learning models; two considerations in particular are uncertainty quantification and fast skill transfer. The first can support risk-sensitive decision making and reduce sample complexity in query problems. The second aims to avoid learning from scratch and to increase the adaptive capability of deep learning models.
A typical example is the vanilla neural process (NP) (Garnelo et al., 2018b), in which the approximate functional prior q_{\phi}(z|D_C), serving as the functional representation, induces the predictive function distribution E_{q_{\phi}(z|D_C)}[p(y|z, x)]. The vanilla NP is also the foundation of the models and algorithms developed in this thesis. Much of our work incorporates structural inductive biases, such as hierarchical Bayes, mixtures of experts, and graph modules, into functional representations. The thesis also rethinks the optimization of vanilla NPs and proposes a new method to bridge the inference gap.
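The predictive distribution above can be made concrete with a minimal numerical sketch of a vanilla NP forward pass. This is not the thesis's implementation: the network weights are random stand-ins for trained MLPs, and all dimensions and names (W_enc, W_dec, n_samples) are illustrative assumptions. It only shows the structure: a permutation-invariant encoder for q_{\phi}(z|D_C) and a Monte Carlo estimate of E_{q_{\phi}(z|D_C)}[p(y|z, x)].

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the thesis).
x_dim, y_dim, r_dim, z_dim = 1, 1, 8, 4

# Randomly initialised linear maps stand in for trained MLPs.
W_enc = rng.normal(size=(x_dim + y_dim, r_dim)) * 0.1
W_mu = rng.normal(size=(r_dim, z_dim)) * 0.1
W_sigma = rng.normal(size=(r_dim, z_dim)) * 0.1
W_dec = rng.normal(size=(z_dim + x_dim, y_dim)) * 0.1

def encode(x_ctx, y_ctx):
    """Amortised functional prior q_phi(z | D_C): embed each (x, y) context
    pair, aggregate by mean (permutation invariance), map to Gaussian params."""
    r = np.tanh(np.concatenate([x_ctx, y_ctx], axis=-1) @ W_enc).mean(axis=0)
    mu = r @ W_mu
    sigma = np.logaddexp(0.0, r @ W_sigma)  # softplus keeps sigma > 0
    return mu, sigma

def predict(x_ctx, y_ctx, x_tgt, n_samples=16):
    """Monte Carlo estimate of E_{q_phi(z|D_C)}[p(y | z, x)] at target inputs."""
    mu, sigma = encode(x_ctx, y_ctx)
    z = mu + sigma * rng.normal(size=(n_samples, z_dim))   # z ~ q_phi(z | D_C)
    z_rep = np.repeat(z[:, None, :], len(x_tgt), axis=1)   # pair each z with each x
    x_rep = np.broadcast_to(x_tgt, (n_samples, *x_tgt.shape))
    y_hat = np.concatenate([z_rep, x_rep], axis=-1) @ W_dec  # decoder p(y | z, x)
    return y_hat.mean(axis=0), y_hat.std(axis=0)           # predictive mean / spread

x_ctx = rng.normal(size=(5, x_dim))
y_ctx = np.sin(x_ctx)                                      # toy regression task
mean, spread = predict(x_ctx, y_ctx, rng.normal(size=(3, x_dim)))
```

Averaging the per-pair embeddings before mapping to (mu, sigma) is what makes the representation a function of the context *set* rather than of an ordered sequence; the spread of the decoded samples reflects functional uncertainty under q_{\phi}.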
Document type PhD thesis
Language English