Query by Activity Video in the Wild

Open Access
Authors
Publication date 23-11-2023
Edition v1
Number of pages 6
Publisher ArXiv
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
This paper focuses on activity retrieval from a video query in an imbalanced scenario. Current query-by-activity-video literature commonly assumes that all activities have sufficient labelled examples when learning an embedding. In practice, however, this assumption does not hold: only a portion of activities have many examples, while the other activities are described by only a few. In this paper, we propose a visual-semantic embedding network that explicitly deals with the imbalanced scenario for activity retrieval. Our network contains two novel modules. The visual alignment module performs a global alignment between the input video and fixed-size visual bank representations of all activities. The semantic module performs an alignment between the input video and fixed-size semantic activity representations. By matching videos with both visual and semantic activity representations that are of equal size across all activities, infrequent activities are no longer ignored during retrieval. Experiments on a new imbalanced activity retrieval benchmark show the effectiveness of our approach for all types of activities.
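The retrieval scheme the abstract describes can be sketched as scoring a query-video embedding against fixed-size per-activity representations. The sketch below is a minimal, hypothetical illustration only: the function name `activity_scores`, the max-over-prototypes visual alignment, the cosine similarities, and the `alpha` fusion weight are all assumptions, not the paper's actual (learned) modules. What it does show is the key property from the abstract: every activity, frequent or rare, is represented by a bank of the same size, so scoring treats them uniformly.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Normalize vectors to unit length so dot products become cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def activity_scores(video_emb, visual_banks, semantic_reps, alpha=0.5):
    """Score one embedded query video against every activity.

    video_emb:     (d,)      embedding of the query video
    visual_banks:  (A, k, d) fixed-size visual bank: k prototypes per activity,
                             identical k for frequent and infrequent activities
    semantic_reps: (A, d)    one fixed-size semantic representation per activity
    alpha:         visual/semantic fusion weight (assumed, not from the paper)

    Returns (A,) scores; higher means a better match.
    """
    v = l2_normalize(video_emb)
    banks = l2_normalize(visual_banks)
    sem = l2_normalize(semantic_reps)
    visual_align = (banks @ v).max(axis=1)   # best-matching prototype per activity
    semantic_align = sem @ v                 # similarity to each semantic vector
    return alpha * visual_align + (1 - alpha) * semantic_align
```

Because every activity contributes exactly `k` visual prototypes and one semantic vector regardless of how many training examples it had, a rare activity competes in the ranking on equal footing with a frequent one.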
Document type Preprint
Note Extended version of a paper accepted at ICIP 2023 but not presented
Language English
Published at https://doi.org/10.48550/arXiv.2311.13895
Other links https://doi.org/10.1109/ICIP49359.2023.10222796
Downloads
2311.13895v1-1 (Final published version)