Neural Character-based Composition Models for Abuse Detection

Open Access
Authors
Publication date 2018
Host editors
  • D. Fišer
  • R. Huang
  • V. Prabhakaran
  • R. Voigt
  • Z. Waseem
  • J. Wernimont
Book title Second Workshop on Abusive Language Online
Book subtitle EMNLP 2018 : proceedings of the workshop, co-located with EMNLP 2018 : October 31, 2018, Brussels, Belgium
ISBN (electronic)
  • 9781948087681
Event 2nd Workshop on Abusive Language Online
Pages (from-to) 1-10
Publisher Stroudsburg, PA: The Association for Computational Linguistics
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
The advent of social media in recent years has fed into some highly undesirable phenomena, such as the proliferation of offensive language, hate speech, and sexist remarks on the Internet. In light of this, there have been several efforts to automate the detection and moderation of such abusive content. However, deliberate obfuscation of words by users to evade detection poses a serious challenge to the effectiveness of these efforts. Current state-of-the-art approaches to abusive language detection, based on recurrent neural networks, do not explicitly address this problem and resort to a generic OOV (out of vocabulary) embedding for unseen words. However, in using a single embedding for all unseen words, we lose the ability to distinguish between obfuscated and non-obfuscated or rare words. In this paper, we address this problem by designing a model that can compose embeddings for unseen words. We experimentally demonstrate that our approach significantly advances the current state of the art in abuse detection on datasets from two different domains, namely Twitter and Wikipedia talk pages.
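The key idea in the abstract — composing an embedding for an unseen word from its characters, rather than mapping every unseen word to one generic OOV vector — can be illustrated with a minimal sketch. This is not the paper's model: one-hot character vectors stand in for learned character embeddings, and mean pooling stands in for the learned composition function, but the effect is the same in kind: an obfuscated spelling such as "id1ot" shares most of its characters with the original word, so its composed embedding stays close to it.

```python
import numpy as np

# Illustrative character inventory; a real model would cover the full
# character set observed in the training data.
CHARS = "abcdefghijklmnopqrstuvwxyz0123456789*!@$"
DIM = len(CHARS)

# One vector per character. One-hot vectors stand in here for the dense
# character embeddings that would be learned jointly with the classifier.
char_emb = {c: np.eye(DIM)[i] for i, c in enumerate(CHARS)}

def compose(word: str) -> np.ndarray:
    """Build an embedding for any word, seen or unseen, from its characters
    (mean pooling stands in for a learned composition function)."""
    vecs = [char_emb[c] for c in word.lower() if c in char_emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(DIM)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

# The obfuscated variant shares 4 of 5 characters with the original word,
# so its composed embedding is close; a word with no shared characters is
# orthogonal. A single generic OOV embedding could not make this distinction.
print(round(cosine(compose("idiot"), compose("id1ot")), 3))  # → 0.845
print(round(cosine(compose("idiot"), compose("zzzz")), 3))   # → 0.0
```

Note that mean pooling makes the composed vector a normalized bag of characters, so it ignores character order; the recurrent composition used in character-based models is order-sensitive and learned end-to-end.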
Document type Conference contribution
Language English
Published at https://doi.org/10.18653/v1/W18-5101