Interactive Exploration of Journalistic Video Footage through Multimodal Semantic Matching
| | |
|---|---|
| Authors | |
| Publication date | 2019 |
| Book title | MM '19 |
| Book subtitle | Proceedings of the 27th ACM Conference on Multimedia: October 21-25, 2019, Nice, France |
| ISBN (electronic) | |
| Event | 27th ACM International Conference on Multimedia, MM 2019 |
| Pages (from-to) | 2196-2198 |
| Publisher | New York, NY: Association for Computing Machinery |
| Organisations | |
| Abstract | This demo presents a system that enables journalists to explore video footage for broadcasts. Daily news broadcasts contain multiple news items, each consisting of many video shots, and searching for relevant footage is a labor-intensive task. Without requiring annotated video shots, our system extracts semantics from footage and automatically matches these semantics to the journalist's query terms. The journalist can then indicate which aspects of a query term should be emphasized, e.g., its title or its thematic meaning. The goal of this system is to support journalists in their search process by encouraging interaction with, and exploration of, the system. |
| Document type | Conference contribution |
| Language | English |
| Published at | https://doi.org/10.1145/3343031.3350597 |
| Permalink to this page | |