Amortized inference in inverse problems
| Authors | |
|---|---|
| Supervisors | |
| Cosupervisors | |
| Award date | 23-03-2023 |
| Number of pages | 129 |
| Organisations | |

**Abstract**
At the root of scientific discovery is the question of how to make sense of the world from empirical data. In practice, this question concerns how we can identify the causal factors of a generative process from which we can only make noisy or limited observations. The task of finding these causal factors is called an inverse problem. Inverse problems appear in all scientific disciplines, ranging from physics, chemistry, and biology to medicine, psychology, and economics. Traditionally, inverse problems are tackled with methods from applied mathematics. Inherent to this tradition is a rigorous treatment of these problems. While mathematical rigor appears elegant and desirable at first, it often comes with simplifying assumptions and expensive computation.
In recent years, machine learning has found applications in many computational fields thanks to the resurgence of artificial neural networks (ANNs). Central to this success is the idea that large sets of training data and automated non-linear feature extraction are a far more expressive approach to many problems than hand-designed features and algorithms. In this thesis, *Amortized inference in inverse problems*, we present an approach that aims to leverage the success of machine learning, and deep learning in particular, for inverse problems. The approach, which we call Recurrent Inference Machines (RIMs), is a general-purpose framework for solving inverse problems. RIMs are parametric models that perform recurrent updates, modeling the structure of an iterative algorithm. Throughout this work, we demonstrate that RIMs find applications in various scientific disciplines, such as medicine, astronomy, and seismology. We dedicate large parts of this thesis to applying RIMs to accelerated MRI, a problem that aims to significantly reduce measurement times in Magnetic Resonance Imaging. We further propose Invertible Recurrent Inference Machines (i-RIMs) as an evolution of RIMs. i-RIMs address the memory cost of training models on large-scale data by exploiting invertibility to run back-propagation with constant memory. Given current hardware constraints and data sizes, this allows us to build more expressive i-RIM models. Using an i-RIM, we won the single-coil track of the first fastMRI challenge, and we document the steps that led us to that win.
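The abstract's notion of "recurrent updates modeling an iterative algorithm's structure" can be illustrated with a toy linear inverse problem. The sketch below is hypothetical and heavily simplified: where a real RIM trains a recurrent neural network to emit each update from the data-fidelity gradient and a hidden state, we stand in a fixed momentum gradient step so the loop stays self-contained. All names (`rim_cell`, `grad_data_fidelity`) are illustrative, not from the thesis.

```python
# Minimal sketch of an RIM-style recurrent update loop (illustrative only).
# A real RIM replaces `rim_cell` with a trained recurrent network that maps
# (gradient, hidden state) -> (update, new hidden state).
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem: y = A x + noise
A = rng.normal(size=(30, 10))        # known forward operator
x_true = rng.normal(size=10)         # unknown causal factors
y = A @ x_true + 0.01 * rng.normal(size=30)  # noisy observations

def grad_data_fidelity(x):
    """Gradient of 0.5 * ||A x - y||^2 with respect to x."""
    return A.T @ (A @ x - y)

def rim_cell(grad, h, lr=0.01, momentum=0.9):
    """Stand-in for the learned cell: a momentum gradient step.

    The hidden state h accumulates the update direction across iterations,
    mimicking how an RIM's recurrent state carries information forward.
    """
    h = momentum * h - lr * grad
    return h, h                      # (update, new hidden state)

x = np.zeros(10)                     # initial estimate
h = np.zeros(10)                     # recurrent hidden state
for _ in range(200):                 # fixed number of recurrent updates
    dx, h = rim_cell(grad_data_fidelity(x), h)
    x = x + dx

print(np.linalg.norm(x - x_true))    # residual error, limited by the noise
```

The design point the sketch tries to convey is that the solver itself is a recurrent computation over a fixed iteration budget; in an RIM, the parameters of that recurrence are learned end-to-end rather than hand-tuned as here.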
| Document type | PhD thesis |
|---|---|
| Language | English |