VisualSem: a high-quality knowledge graph for vision and language

Open Access
Authors
  • H. Alberts
  • N.T. Huang
  • Y.R. Deshpande
  • Y. Liu
Publication date 2021
Host editors
  • D. Ataman
  • A. Birch
  • A. Conneau
  • O. Firat
  • S. Ruder
  • G.G. Sahin
Book title The 1st Workshop on Multilingual Representation Learning
Book subtitle MRL 2021 : proceedings of the conference : November 11, 2021
ISBN (electronic)
  • 9781954085961
Event 1st Workshop on Multilingual Representation Learning
Pages (from-to) 138-152
Number of pages 15
Publisher Stroudsburg, PA: The Association for Computational Linguistics
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
An exciting frontier in natural language understanding (NLU) and generation (NLG) calls for (vision-and-) language models that can efficiently access external structured knowledge repositories. However, many existing knowledge bases cover only limited domains or suffer from noisy data, and, most importantly, they are typically hard to integrate into neural language pipelines. To fill this gap, we release VisualSem: a high-quality knowledge graph (KG) that includes nodes with multilingual glosses, multiple illustrative images, and visually relevant relations. We also release a neural multi-modal retrieval model that takes images or sentences as input and retrieves entities in the KG. This retrieval model can be integrated into any (neural network) model pipeline. We encourage the research community to use VisualSem for data augmentation and/or as a source of grounding, among other possible uses. VisualSem and the multi-modal retrieval model are publicly available and can be downloaded at https://github.com/iacercalixto/visualsem.
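The abstract's core idea, retrieving KG entities from a sentence or image embedding, can be sketched as nearest-neighbor search over node embeddings. This is a minimal illustration of that retrieval pattern, not the released model: the function name, toy embeddings, and cosine-similarity ranking are all assumptions for illustration, and the actual VisualSem retriever in the linked repository should be consulted for the real interface.

```python
import numpy as np

def retrieve_nodes(query_emb, node_embs, k=3):
    """Return indices of the k KG nodes whose embeddings have the
    highest cosine similarity to the query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    n = node_embs / np.linalg.norm(node_embs, axis=1, keepdims=True)
    sims = n @ q                      # cosine similarities to every node
    return np.argsort(-sims)[:k]      # indices of the top-k nodes

# Toy example: 4 hypothetical "node" embeddings and a query that is a
# slightly perturbed copy of node 2, so node 2 should rank first.
rng = np.random.default_rng(0)
node_embs = rng.normal(size=(4, 8))
query = node_embs[2] + 0.01 * rng.normal(size=8)
top = retrieve_nodes(query, node_embs, k=1)
```

In practice the query embedding would come from a sentence or image encoder, which is what makes the retriever multi-modal: both input types are mapped into the same embedding space as the KG nodes.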
Document type Conference contribution
Note With supplementary video
Language English
Related dataset VisualSem
Published at https://doi.org/10.18653/v1/2021.mrl-1.13
Other links https://github.com/iacercalixto/visualsem
Downloads
2021.mrl-1.13 (Final published version)