Few-Shot Transformation of Common Actions into Time and Space

Open Access
Authors Pengwan Yang, Pascal Mettes, Cees G. M. Snoek
Publication date 2021
Book title Proceedings, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Book subtitle virtual, 19-25 June 2021
ISBN
  • 9781665445108
ISBN (electronic)
  • 9781665445092
Series CVPR
Event 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Pages (from-to) 16026-16035
Publisher Los Alamitos, California: Conference Publishing Services, IEEE Computer Society
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
This paper introduces the task of few-shot common action localization in time and space. Given a few trimmed support videos containing the same but unknown action, we strive for spatio-temporal localization of that action in a long untrimmed query video. We do not require any class labels, interval bounds, or bounding boxes. To address this challenging task, we introduce a novel few-shot transformer architecture with a dedicated encoder-decoder structure optimized for joint commonality learning and localization prediction, without the need for proposals. Experiments on reorganizations of the AVA and UCF101-24 datasets show the effectiveness of our approach for few-shot common action localization, even when the support videos are noisy. Although our approach is not specifically designed for common localization in time only, it also compares favorably against the few-shot and one-shot state-of-the-art in this setting. Lastly, we demonstrate that the few-shot transformer is easily extended to common action localization per pixel.
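To make the described architecture concrete, the following is a minimal, hypothetical PyTorch sketch of an encoder-decoder transformer that fuses features from a few support clips with per-frame features of an untrimmed query video and regresses spatio-temporal boxes directly, without proposals. It only illustrates the idea stated in the abstract; all module names, feature dimensions, and prediction heads below are assumptions, not the authors' implementation.

# Illustrative sketch only (not the paper's code): an encoder-decoder
# transformer for few-shot common action localization in time and space.
import torch
import torch.nn as nn

class FewShotCommonActionTransformer(nn.Module):
    def __init__(self, d_model=256, nhead=8, num_layers=3, num_queries=10):
        super().__init__()
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        # Learned queries, one per candidate spatio-temporal detection.
        self.queries = nn.Embedding(num_queries, d_model)
        # Box head: (cx, cy, w, h) in [0, 1]; score head: a commonality
        # logit saying whether the detection shows the support action.
        self.box_head = nn.Linear(d_model, 4)
        self.score_head = nn.Linear(d_model, 1)

    def forward(self, support_feats, query_feats):
        # support_feats: (B, S, d) pooled features of the trimmed support clips
        # query_feats:   (B, T, d) per-frame features of the untrimmed query
        # Encoding support and query tokens jointly is one plausible way to
        # realize the "joint commonality learning" the abstract mentions.
        joint = self.encoder(torch.cat([support_feats, query_feats], dim=1))
        tgt = self.queries.weight.unsqueeze(0).expand(joint.size(0), -1, -1)
        hs = self.decoder(tgt, joint)          # (B, num_queries, d)
        boxes = self.box_head(hs).sigmoid()    # proposal-free box regression
        scores = self.score_head(hs)           # commonality logits
        return boxes, scores

# Toy usage with random features standing in for a video backbone.
model = FewShotCommonActionTransformer()
support = torch.randn(2, 16, 256)   # 2 episodes, 16 support tokens each
query = torch.randn(2, 128, 256)    # 128 query-frame tokens
boxes, scores = model(support, query)
print(boxes.shape, scores.shape)    # (2, 10, 4) and (2, 10, 1)

The actual model, losses, and matching between predicted and ground-truth tubes are specified in the paper itself; this sketch only shows how an encoder-decoder structure can predict localizations without proposals.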
Document type Conference contribution
Note With supplementary file
Language English
Published at
  • https://doi.org/10.48550/arXiv.2104.02439
  • https://doi.org/10.1109/CVPR46437.2021.01577
  • https://openaccess.thecvf.com/content/CVPR2021/html/Yang_Few-Shot_Transformation_of_Common_Actions_Into_Time_and_Space_CVPR_2021_paper.html
Other links https://www.proceedings.com/60773.html
Downloads
  • 2104.02439 (Accepted author manuscript)
  • Supplementary materials