The Importance of Being Recurrent for Modeling Hierarchical Structure

Open Access
Authors
  • K. Tran
  • A. Bisazza
  • C. Monz
Publication date 2018
Host editors
  • E. Riloff
  • D. Chiang
  • J. Hockenmaier
  • J. Tsujii
Book title Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: EMNLP 2018
Book subtitle Brussels, Belgium, Oct. 31-Nov. 4
ISBN (electronic)
  • 9781948087841
Event 2018 Conference on Empirical Methods in Natural Language Processing
Pages (from-to) 4731–4736
Publisher Stroudsburg, PA: The Association for Computational Linguistics
Organisations
  • Faculty of Science (FNWI)
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Recent work has shown that recurrent neural networks (RNNs) can implicitly capture and exploit hierarchical information when trained to solve common natural language processing tasks (Blevins et al., 2018), such as language modeling (Linzen et al., 2016; Gulordava et al., 2018) and neural machine translation (Shi et al., 2016). In contrast, the ability of non-recurrent neural networks to model structured data has received little attention, despite their success in many NLP tasks (Gehring et al., 2017; Vaswani et al., 2017). In this work, we compare the two architectures, recurrent versus non-recurrent, with respect to their ability to model hierarchical structure, and find that recurrency is indeed important for this purpose. The code and data used in our experiments are available at https://github.com/ketranm/fan_vs_rnn
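Illustration (not part of the publication record): the linked GitHub repository contains the authors' actual code; the sketch below is only a rough, assumed illustration of the kind of comparison the abstract describes, instantiating a recurrent (LSTM) encoder and a non-recurrent (self-attention) encoder for the same sequence-classification probe, e.g. predicting verb number for subject-verb agreement. Vocabulary size, dimensions, pooling, and model names are illustrative assumptions, not the paper's setup.

import torch
import torch.nn as nn

VOCAB, DIM, CLASSES = 10_000, 128, 2  # assumed sizes for the sketch

class RecurrentProbe(nn.Module):
    # Recurrent model: LSTM encoder, classify from the final hidden state.
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.LSTM(DIM, DIM, batch_first=True)
        self.out = nn.Linear(DIM, CLASSES)

    def forward(self, tokens):               # tokens: (batch, seq_len) int64
        h, _ = self.rnn(self.embed(tokens))  # (batch, seq_len, DIM)
        return self.out(h[:, -1])            # logits from the last time step

class AttentionProbe(nn.Module):
    # Non-recurrent counterpart: Transformer-style self-attention encoder.
    # A realistic setup would also add positional encodings; omitted here.
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        layer = nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True)
        self.enc = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(DIM, CLASSES)

    def forward(self, tokens):
        h = self.enc(self.embed(tokens))     # (batch, seq_len, DIM)
        return self.out(h.mean(dim=1))       # logits from mean-pooled states

if __name__ == "__main__":
    batch = torch.randint(0, VOCAB, (8, 20))  # dummy token ids
    print(RecurrentProbe()(batch).shape)      # torch.Size([8, 2])
    print(AttentionProbe()(batch).shape)      # torch.Size([8, 2])

In such a setup, both models would be trained on identical data, so that any accuracy gap on long-distance dependencies can be attributed to the presence or absence of recurrence.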
Document type Conference contribution
Language English
Published at
  • https://doi.org/10.18653/v1/D18-1503
  • https://staff.science.uva.nl/c.monz/html/publications/D18-1503.pdf
Other links
  • https://vimeo.com/306155520
  • https://github.com/ketranm/fan_vs_rnn
Downloads
D18-1503 (Final published version)