Dialogue Generation: From Imitation Learning to Inverse Reinforcement Learning

Authors
Publication date 2019
Book title Thirty-Third AAAI Conference on Artificial Intelligence, Thirty-First Conference on Innovative Applications of Artificial Intelligence, The Ninth Symposium on Educational Advances in Artificial Intelligence
Book subtitle AAAI-19, IAAI-19, EAAI-19 : January 27-February 1, 2019, Hilton Hawaiian Village, Honolulu, Hawaii, USA
ISBN (electronic)
  • 9781577358091
Series Proceedings of the AAAI Conference on Artificial Intelligence
Event 33rd AAAI Conference on Artificial Intelligence
Pages (from-to) 6722-6729
Publisher Palo Alto, California: AAAI Press
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
The performance of adversarial dialogue generation models relies on the quality of the reward signal produced by the discriminator. The reward signal from a poor discriminator can be very sparse and unstable, which may lead the generator to fall into a local optimum or to produce nonsense replies. To alleviate the first problem, we first extend a recently proposed adversarial dialogue generation method to an adversarial imitation learning solution. Then, in the framework of adversarial inverse reinforcement learning, we propose a new reward model for dialogue generation that can provide a more accurate and precise reward signal for generator training. We evaluate the performance of the resulting model with automatic metrics and human evaluations in two annotation settings. Our experimental results demonstrate that our model can generate higher-quality responses and achieve better overall performance than the state-of-the-art.
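The abstract's central idea is replacing a binary discriminator score with an inverse-reinforcement-learning reward. As a rough illustration only (the paper's exact reward model is not reproduced here), the sketch below implements the generic AIRL discriminator form D(s,a) = exp(f(s,a)) / (exp(f(s,a)) + π(a|s)) and the resulting reward r = log D − log(1 − D), which simplifies to f − log π; the function names and scalar inputs are illustrative assumptions, not the authors' API.

```python
import math

def airl_discriminator(f_value: float, log_pi: float) -> float:
    """Generic AIRL-style discriminator:
    D(s, a) = exp(f(s, a)) / (exp(f(s, a)) + pi(a | s)),
    where f_value = f(s, a) and log_pi = log pi(a | s).
    (Illustrative scalar version; not the paper's exact model.)"""
    pi = math.exp(log_pi)
    return math.exp(f_value) / (math.exp(f_value) + pi)

def airl_reward(f_value: float, log_pi: float) -> float:
    """Reward used to train the generator:
    r = log D - log(1 - D), which algebraically equals f - log pi.
    This is denser than a raw 'real vs. fake' probability."""
    d = airl_discriminator(f_value, log_pi)
    return math.log(d) - math.log(1.0 - d)

# Sanity check of the identity r = f - log pi (up to floating-point error):
r = airl_reward(2.0, -1.5)  # should be close to 2.0 - (-1.5) = 3.5
```

Because r reduces to f − log π, the reward stays informative even when the discriminator is confident, which is one way an IRL-style reward can be less sparse than a plain adversarial signal.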
Document type Conference contribution
Language English
Published at https://doi.org/10.1609/aaai.v33i01.33016722