From pixels to perceptions: Capturing high-level abstract concepts in visual user-generated content

Open Access
Publication date November 2024
Journal International Journal of Information Management Data Insights
Article number 100269
Volume 4, Issue 2
Number of pages 23
Organisations
  • Faculty of Science (FNWI)
  • Faculty of Economics and Business (FEB) - Amsterdam Business School Research Institute (ABS-RI)
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Visual content on platforms like Instagram and TripAdvisor plays a crucial role in consumer decision-making, particularly in intangible sectors like hospitality, where consumers rely on such information to gauge service quality prior to consumption. In this study, we introduce an approach leveraging deep neural networks to identify high-level, abstract concepts in visual user-generated content (UGC) for restaurants. Given the lack of annotations on these concepts, we propose two weak labeling methods: one utilizing existing restaurant-quality signals, and the other extracting relevant labels from review texts via questionnaires. Our findings reveal that models trained on these inexpensive weak labels demonstrate moderate correlations with human judgments. Furthermore, we showcase the effectiveness of gradient-based techniques in generating visual explanations, highlighting image regions that support neural network predictions. These methods enable visual UGC analysis tasks with minimal labeling effort and allow practitioners to interpret deep neural network predictions more effectively.
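The "gradient-based techniques in generating visual explanations" mentioned in the abstract can be sketched as a Grad-CAM-style computation, in which gradients of a prediction with respect to a convolutional layer's activations are spatially averaged into channel weights, and the weighted activation sum gives a heatmap over image regions. This is a minimal illustrative sketch, not the paper's exact method; the function name and the random toy arrays are assumptions for demonstration:

```python
import numpy as np

def gradcam_heatmap(activations, gradients):
    """Grad-CAM-style heatmap from one conv layer.

    activations, gradients: arrays of shape (C, H, W) holding the layer's
    feature maps and the gradients of the target score w.r.t. them.
    """
    weights = gradients.mean(axis=(1, 2))             # one weight per channel
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over C -> (H, W)
    cam = np.maximum(cam, 0)                          # keep positive evidence only
    if cam.max() > 0:
        cam /= cam.max()                              # normalise to [0, 1]
    return cam

# Toy example: random activations/gradients for a 3-channel 4x4 feature map
rng = np.random.default_rng(0)
acts = rng.standard_normal((3, 4, 4))
grads = rng.standard_normal((3, 4, 4))
heatmap = gradcam_heatmap(acts, grads)
print(heatmap.shape)  # (4, 4)
```

In a real deep-learning pipeline the activations and gradients would be captured from the network (e.g. via framework hooks) rather than sampled randomly; the resulting heatmap is then upsampled and overlaid on the input image to highlight the regions supporting the prediction.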
Document type Article
Language English
DOI https://doi.org/10.1016/j.jjimei.2024.100269