Quantifying Attention Flow in Transformers

Open Access
Authors
Publication date 2020
Host editors
  • D. Jurafsky
  • J. Chai
  • N. Schluter
  • J. Tetreault
Book title The 58th Annual Meeting of the Association for Computational Linguistics
Book subtitle ACL 2020 : Proceedings of the Conference : July 5-10, 2020
ISBN (electronic)
  • 9781952148255
Event 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020
Pages (from-to) 4190-4197
Number of pages 8
Publisher Stroudsburg, PA: The Association for Computational Linguistics
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
In the Transformer model, “self-attention” combines information from attended embeddings into the representation of the focal embedding in the next layer. Thus, across layers of the Transformer, information originating from different tokens gets increasingly mixed. This makes attention weights unreliable as explanation probes. In this paper, we consider the problem of quantifying this flow of information through self-attention. We propose two methods for approximating the attention to input tokens given attention weights, attention rollout and attention flow, as post hoc methods when we use attention weights as the relative relevance of the input tokens. We show that these methods give complementary views on the flow of information, and compared to raw attention, both yield higher correlations with importance scores of input tokens obtained using an ablation method and input gradients.
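To illustrate the first of the two methods described above, attention rollout can be sketched as recursively composing the per-layer attention matrices from the input up, after adjusting each layer for the residual connection. The sketch below is a minimal reconstruction based on the abstract, assuming head-averaged, row-stochastic attention matrices and an equal 0.5/0.5 weighting between attention and the residual path; the function name and inputs are illustrative, not the authors' API.

```python
import numpy as np

def attention_rollout(attentions):
    """Approximate attention to input tokens by rolling attention out
    across layers (a sketch of the method, not the reference code).

    attentions: list of (seq_len, seq_len) arrays, one per layer,
    already averaged over heads and row-normalized.
    """
    seq_len = attentions[0].shape[0]
    identity = np.eye(seq_len)
    rollout = identity
    for a in attentions:
        # Residual adjustment: half the information skips attention.
        a = 0.5 * a + 0.5 * identity
        # Re-normalize so each row remains a distribution.
        a = a / a.sum(axis=-1, keepdims=True)
        # Compose this layer's mixing with everything below it.
        rollout = a @ rollout
    return rollout
```

Each row of the result is then read as a distribution over input tokens for the corresponding position at the top layer; since every factor is row-stochastic, the rows of the rollout still sum to one. Attention flow instead treats the same layered attention graph as a capacitated flow network and scores input tokens by maximum flow, which the rollout sketch does not cover.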
Document type Conference contribution
Language English
Published at https://doi.org/10.18653/v1/2020.acl-main.385
Other links https://github.com/samiraabnar/attention_flow https://slideslive.com/38928943/quantifying-attention-flow-in-transformers