Learning to search for images without annotations
| Authors | |
|---|---|
| Supervisors | |
| Cosupervisors | |
| Award date | 02-11-2016 |
| ISBN | |
| Number of pages | 119 |
| Organisations | |
| Abstract | |
Humans are attuned to their environment and can easily recognize what they see around them or in images. Machines, however, cannot recognize images unless trained to do so. The usual approach is to annotate images with what they depict and train a machine learning algorithm on those annotations. This thesis pursues a different approach: teaching machines what is in an image without annotation. The presented methods avoid both text produced by well-instructed human annotators and annotated examples altogether. The goal is image search for concepts and scene categories. Tagged images from social media are investigated for concept detection, and object categories are exploited for recognizing scenes. Through extensive experiments, this thesis demonstrates state-of-the-art performance on standard image datasets. The most important contributions can be summarized as follows: 1) concept detectors can be learned from social media by carefully selecting training data; 2) rare social media tags are problematic and should be augmented with semantic knowledge; 3) when many object categories are available, scenes can be reasonably recognized in images; and 4) the layout of objects, even without their object identity, can help discriminate scenes. The proposed methods and ideas can thus benefit anyone who wants to search for images while avoiding annotations in the learning process.
| Document type | PhD thesis |
|---|---|
| Note | Research conducted at: Universiteit van Amsterdam |
| Language | English |
| Downloads | |
| Permalink to this page | |
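The first contribution above, learning concept detectors from social media by carefully selecting training data, can be illustrated with a minimal sketch. This is a hypothetical helper, not the thesis code: it assumes that images carrying fewer co-occurring tags are more likely to actually depict the tagged concept, and keeps the top-ranked candidates as positive training examples.

```python
def select_positives(images, concept, k=2):
    """Pick the k most reliable positive examples for a concept.

    images: list of (image_id, tag_list) pairs from social media.
    Heuristic (an assumption for this sketch): an image whose tag list
    is short is more likely to genuinely show each tag it carries.
    """
    # Keep only images that carry the concept tag at all.
    candidates = [(img_id, tags) for img_id, tags in images if concept in tags]
    # Fewer co-occurring tags -> higher assumed relevance of this tag.
    candidates.sort(key=lambda item: len(item[1]))
    return [img_id for img_id, _ in candidates[:k]]

photos = [
    ("a", ["beach", "sunset", "holiday", "selfie", "friends"]),
    ("b", ["beach", "sand"]),
    ("c", ["beach"]),
    ("d", ["mountain", "snow"]),
]
print(select_positives(photos, "beach"))  # -> ['c', 'b']
```

The selected positives would then feed a standard per-concept classifier; the tag-relevance heuristic here stands in for the more careful selection strategies the thesis investigates.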
