Unsupervised Multi-Feature Tag Relevance Learning for Social Image Retrieval

Open Access
Authors
Publication date 2010
Book title CIVR 2010: 2010 ACM International Conference on Image and Video Retrieval, at Xi'an, China, July 5-7, 2010
ISBN
  • 9781450301176
Event ACM International Conference on Image and Video Retrieval, 2010
Pages (from-to) 10-17
Publisher New York, NY: Association for Computing Machinery
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Interpreting the relevance of a user-contributed tag with respect to the visual content of an image is an emerging problem in social image retrieval. In the literature this problem is tackled by analyzing the correlation between tags and images represented by specific visual features. Unfortunately, no single feature represents the visual content completely, e.g., global features are suitable for capturing the gist of scenes, while local features are better for depicting objects. To solve the problem of learning tag relevance given multiple features, we introduce in this paper two simple and effective methods: one is based on the classical Borda Count and the other is a method we name UniformTagger. Both methods combine the output of many tag relevance learners driven by diverse features in an unsupervised, rather than supervised, manner.

Experiments on 3.5 million socially tagged images and two test sets validate our proposal. Using learned tag relevance as an updated tag frequency for social image retrieval, both Borda Count and UniformTagger outperform retrieval without tag relevance learning and retrieval with single-feature tag relevance learning. Moreover, the two unsupervised methods are comparable to a state-of-the-art supervised alternative, but without the need for any training data.
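The Borda Count fusion described in the abstract can be sketched as follows. This is a minimal illustration of classical Borda Count rank aggregation, not the authors' implementation; the image identifiers and the three feature-specific learners are hypothetical.

```python
# Hedged sketch: Borda Count fusion of ranked lists produced by several
# single-feature tag relevance learners (not the paper's actual code).
def borda_count(rankings):
    """Combine ranked lists of image ids into one fused ranking.

    Each input ranking is ordered from most to least relevant. An item
    at rank r in a list of length n receives n - r points; items absent
    from a list receive no points from it.
    """
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for rank, item in enumerate(ranking):
            scores[item] = scores.get(item, 0) + (n - rank)
    # Order items by total Borda score, highest first.
    return sorted(scores, key=lambda item: -scores[item])

# Example: three learners (say, driven by global, local, and color
# features -- hypothetical) each rank four images for one tag.
fused = borda_count([
    ["img1", "img2", "img3", "img4"],
    ["img2", "img1", "img4", "img3"],
    ["img1", "img4", "img2", "img3"],
])
# img1 accumulates the most points across the three rankings,
# so it heads the fused list.
```

The fused ranking rewards images that score well under many features, which matches the paper's premise that no single visual feature captures the content completely.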
Document type Conference contribution
Language English
Published at https://doi.org/10.1145/1816041.1816044
Downloads
LiICIVR2010 (Submitted manuscript)