BanditSum: Extractive Summarization as a Contextual Bandit

Open Access
Authors
  • Y. Dong
  • Y. Shen
  • E. Crawford
  • H. van Hoof
  • J.C.K. Cheung
Publication date 2018
Host editors
  • E. Riloff
  • D. Chiang
  • J. Hockenmaier
  • J. Tsujii
Book title Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing : EMNLP 2018
Book subtitle Brussels, Belgium, Oct. 31-Nov. 4
ISBN (electronic)
  • 9781948087841
Event 2018 Conference on Empirical Methods in Natural Language Processing
Pages (from-to) 3739-3748
Publisher Stroudsburg, PA: The Association for Computational Linguistics
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
In this work, we propose a novel method for training neural networks to perform single-document extractive summarization without heuristically generated extractive labels. We call our approach BanditSum as it treats extractive summarization as a contextual bandit (CB) problem, where the model receives a document to summarize (the context), and chooses a sequence of sentences to include in the summary (the action). A policy gradient reinforcement learning algorithm is used to train the model to select sequences of sentences that maximize ROUGE score. We perform a series of experiments demonstrating that BanditSum is able to achieve ROUGE scores that are better than or comparable to the state-of-the-art for extractive summarization, and converges using significantly fewer update steps than competing approaches. In addition, we show empirically that BanditSum performs significantly better than competing approaches when good summary sentences appear late in the source document.
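The contextual-bandit training loop the abstract describes (sample a summary as the action, score it with ROUGE, update the policy with a policy gradient) can be sketched roughly as follows. This is a toy illustration, not the paper's implementation: it substitutes a unigram-F1 proxy for ROUGE, uses a per-sentence Bernoulli inclusion policy instead of the paper's fixed-length sampling without replacement, and holds per-sentence logits directly rather than producing them with a neural encoder.

```python
import math
import random
from collections import Counter

def rouge1_f(summary_tokens, reference_tokens):
    """Unigram-overlap F1: a rough stand-in for the ROUGE-1 reward."""
    s, r = Counter(summary_tokens), Counter(reference_tokens)
    overlap = sum((s & r).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(s.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

def policy_step(logits, sentences, reference_tokens, lr=0.5, rng=None):
    """One REINFORCE update for a per-sentence Bernoulli inclusion policy.

    The context is the document (here reduced to one logit per sentence);
    the action is the subset of sentences sampled into the summary.
    """
    rng = rng or random.Random(0)
    probs = [1.0 / (1.0 + math.exp(-l)) for l in logits]
    mask = [1 if rng.random() < p else 0 for p in probs]
    summary = [tok for i, m in enumerate(mask) if m for tok in sentences[i]]
    reward = rouge1_f(summary, reference_tokens)
    # Score-function (REINFORCE) gradient: d/dlogit log p(mask) = mask - prob,
    # scaled by the scalar reward (no baseline in this toy version).
    new_logits = [l + lr * reward * (m - p)
                  for l, m, p in zip(logits, mask, probs)]
    return new_logits, reward
```

For example, repeatedly calling `policy_step` on a three-sentence toy document whose first sentence matches the reference drives that sentence's logit (and hence its inclusion probability) upward, since the reward is positive only when it is sampled.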
Document type Conference contribution
Language English
Published at https://doi.org/10.18653/v1/D18-1409
Other links https://vimeo.com/306160623