Guided Dialogue Policy Learning without Adversarial Learning in the Loop

Open Access
Authors
  • Z. Li
  • S. Lee
  • B. Peng
  • J. Li
Publication date 2020
Host editors
  • T. Cohn
  • Y. He
  • Y. Liu
Book title Findings of the Association for Computational Linguistics: EMNLP 2020
Book subtitle 16-20 November, 2020
ISBN (electronic)
  • 9781952148903
Event 2020 Conference on Empirical Methods in Natural Language Processing
Pages (from-to) 2308–2317
Publisher Stroudsburg, PA: The Association for Computational Linguistics
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Reinforcement learning methods have emerged as a popular choice for training an efficient and effective dialogue policy. However, these methods suffer from sparse and unstable reward signals, which the user simulator returns only when a dialogue finishes. In addition, the reward signal is manually designed by human experts, which requires domain knowledge. Recently, a number of adversarial learning methods have been proposed to learn the reward function together with the dialogue policy. However, to alternately update the dialogue policy and the reward model on the fly, we are limited to policy-gradient-based algorithms, such as REINFORCE and PPO. Moreover, the alternating training of the dialogue agent and the reward model can easily get stuck in local optima or result in mode collapse. To overcome these issues, we propose to decompose the adversarial training into two steps. First, we train the discriminator with an auxiliary dialogue generator; then, we incorporate the derived reward model into a common reinforcement learning method to guide dialogue policy learning. This approach is applicable to both on-policy and off-policy reinforcement learning methods. Extensive experiments show that the proposed method: (1) achieves a remarkable task success rate with both on-policy and off-policy reinforcement learning methods; and (2) has the potential to transfer knowledge from existing domains to a new domain.
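The two-step recipe described in the abstract (first fit a discriminator against an auxiliary generator, then freeze the derived reward model and plug it into an ordinary RL update) can be illustrated on a toy problem. The sketch below is a minimal stand-in, not the paper's actual setup: the scalar state, the hypothetical expert rule, the logistic-regression discriminator, and the one-step REINFORCE policy are all illustrative assumptions.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # numerically stable logistic function
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Hypothetical toy task: the "expert" policy picks action 1
# exactly when the scalar state is positive.
def expert_action(s):
    return 1 if s > 0 else 0

def features(s, a):
    return (s, float(a), s * a, 1.0)

# Step 1: train the discriminator against an auxiliary generator
# (here, uniform-random actions) instead of the live dialogue policy.
w = [0.0] * 4
for _ in range(4000):
    s = random.gauss(0.0, 1.0)
    for a, label in ((expert_action(s), 1.0), (random.randint(0, 1), 0.0)):
        x = features(s, a)
        err = label - sigmoid(dot(w, x))   # logistic-regression SGD step
        w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]

# Derived reward model: the frozen discriminator logit,
# r(s, a) = log D(s, a) - log(1 - D(s, a)) = w . features(s, a)
def reward(s, a):
    return dot(w, features(s, a))

# Step 2: guide a standard RL method (plain REINFORCE here)
# with the frozen reward model; no alternating updates needed.
theta = [0.0, 0.0]
def policy_prob(s):                        # probability of action 1
    return sigmoid(theta[0] * s + theta[1])

for _ in range(5000):
    s = random.gauss(0.0, 1.0)
    p = policy_prob(s)
    a = 1 if random.random() < p else 0
    g = (a - p) * reward(s, a)             # REINFORCE: grad log pi(a|s) * r
    theta[0] += 0.05 * g * s
    theta[1] += 0.05 * g
```

Because the discriminator is trained offline and then frozen, the second step is a standard RL loop, which is why the decomposition is compatible with off-policy learners as well as policy-gradient methods.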
Document type Conference contribution
Note This volume comprises papers, selected from those submitted to EMNLP 2020, which were not selected to appear at the main conference.
Language English
DOI https://doi.org/10.18653/v1/2020.findings-emnlp.209