Exploiting saliency for object segmentation from image level labels

Open Access
Authors
  • S.J. Oh
  • R. Benenson
  • A. Khoreva
  • Z. Akata
  • M. Fritz
  • B. Schiele
Publication date 2017
Book title 30th IEEE Conference on Computer Vision and Pattern Recognition
Book subtitle CVPR 2017 : 21-26 July 2017, Honolulu, Hawaii : proceedings
ISBN
  • 9781538604588
ISBN (electronic)
  • 9781538604571
Event 2017 IEEE Conference on Computer Vision and Pattern Recognition
Pages (from-to) 5038-5047
Publisher Piscataway, NJ: IEEE
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
There have been remarkable improvements in the semantic labelling task in recent years. However, state-of-the-art methods rely on large-scale pixel-level annotations. This paper studies the problem of training a pixel-wise semantic labelling network from image-level annotations of the object classes present. It has recently been shown that high-quality seeds indicating discriminative object regions can be obtained from image-level labels. Without additional information, however, recovering the full extent of each object is an inherently ill-posed problem due to co-occurring classes. We propose using a saliency model as that additional information, thereby exploiting prior knowledge about object extent and image statistics. We show how to combine both information sources so as to recover 80% of the fully supervised performance, which sets a new state of the art in weakly supervised training for pixel-wise semantic labelling.
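The fusion described in the abstract, combining high-precision seeds with a saliency map that estimates object extent, can be sketched roughly as follows. This is an illustrative simplification only (the function name, threshold, and the single-class fusion rule are assumptions, not the authors' actual guided-segmentation procedure):

```python
import numpy as np

def combine_seeds_and_saliency(seeds, saliency, image_label, thresh=0.5):
    """Illustrative fusion of discriminative seeds with a saliency map.

    seeds:       (H, W) int array; 0 = unlabelled, >0 = seed class id
    saliency:    (H, W) float array with values in [0, 1]
    image_label: class id known from the image-level annotation
    Returns a (H, W) label map: 0 = background, class id = foreground.
    """
    labels = np.zeros_like(seeds)
    # Salient pixels inherit the image-level class (single-class image case):
    # saliency supplies the object extent that seeds alone cannot.
    labels[saliency >= thresh] = image_label
    # Seeds are high-precision evidence, so they override the saliency cue.
    labels[seeds > 0] = seeds[seeds > 0]
    return labels
```

In this toy rule the seeds resolve *which* class is present where, while saliency resolves *how far* the object extends; the paper's actual method trains a segmentation network from such combined cues rather than thresholding directly.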
Document type Conference contribution
Language English
DOI https://doi.org/10.1109/CVPR.2017.535
Preprint https://arxiv.org/abs/1701.08261