Early-Exit Neural Networks with Nested Prediction Sets

Open Access
Authors
Publication date 2024
Journal Proceedings of Machine Learning Research
Event 40th Conference on Uncertainty in Artificial Intelligence, UAI 2024
Volume 244
Pages 1780-1796
Organisations
  • Faculty of Science (FNWI) - Korteweg-de Vries Institute for Mathematics (KdVI)
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Early-exit neural networks (EENNs) facilitate adaptive inference by producing predictions at multiple stages of the forward pass. In safety-critical applications, these predictions are only meaningful when complemented with reliable uncertainty estimates. Yet, due to their sequential structure, an EENN’s uncertainty estimates should also be *consistent*: labels that are deemed improbable at one exit should not reappear within the confidence interval / set of later exits. We show that standard uncertainty quantification techniques, like Bayesian methods or conformal prediction, can lead to inconsistency across exits. We address this problem by applying anytime-valid confidence sequences (AVCSs) to the exits of EENNs. By design, AVCSs maintain consistency across exits. We examine the theoretical and practical challenges of applying AVCSs to EENNs and empirically validate our approach on both regression and classification tasks.
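The consistency property described above can be illustrated with a small sketch. This is not the paper's implementation: anytime-valid confidence sequences obtain nestedness by construction, whereas the hypothetical helper below simulates the effect by cumulatively intersecting per-exit prediction sets, so a label excluded at one exit can never reappear later.

```python
# Minimal sketch (hypothetical, not the paper's method): enforce consistency
# across early exits by cumulatively intersecting per-exit label sets.
def nested_sets(per_exit_sets):
    """Return prediction sets where each later set is a subset of the
    previous one, so excluded labels never reappear at later exits."""
    consistent = []
    running = None
    for s in per_exit_sets:
        # Intersect with everything seen so far (first exit taken as-is).
        running = set(s) if running is None else running & set(s)
        consistent.append(running)
    return consistent

# Hypothetical per-exit sets from a three-exit EENN classifier: the third
# exit tries to re-admit "fox", which the intersection rules out.
exits = [{"cat", "dog", "fox"}, {"cat", "dog"}, {"cat", "dog", "fox"}]
consistent = nested_sets(exits)
```

Here the final exit's set stays `{"cat", "dog"}` even though its raw set contained `"fox"`, which is exactly the inconsistency the paper observes in standard Bayesian or conformal sets and avoids with AVCSs.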
Document type Article
Note Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence: 15-19 July 2024, Universitat Pompeu Fabra, Barcelona, Spain
Language English
Published at https://proceedings.mlr.press/v244/jazbec24a.html
Other links https://github.com/metodj/EENN-AVCS