Learning Co-Speech Gesture Representations in Dialogue through Contrastive Learning: An Intrinsic Evaluation

Open Access
Authors
  • Esam Ghaleb
  • Bulat Khaertdinov
  • Wim Pouw
  • Marlou Rasenberg
Publication date 2024
Book title ICMI '24
Book subtitle Proceedings of the 26th International Conference on Multimodal Interaction: November 4-8, 2024, San José, Costa Rica
ISBN (electronic)
  • 9798400704628
Event 26th International Conference on Multimodal Interaction
Pages (from-to) 274-283
Publisher New York, New York: Association for Computing Machinery
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
In face-to-face dialogues, the form-meaning relationship of co-speech gestures varies depending on contextual factors such as what the gestures refer to and the individual characteristics of speakers. These factors make co-speech gesture representation learning challenging. How can we learn meaningful gesture representations given gestures' variability and their relationship with speech? This paper tackles this challenge by employing self-supervised contrastive learning techniques to learn gesture representations from skeletal and speech information. We propose an approach that includes both unimodal and multimodal pre-training to ground gesture representations in co-occurring speech. For training, we utilize a face-to-face dialogue dataset rich with representational iconic gestures. We conduct thorough intrinsic evaluations of the learned representations through comparison with human-annotated pairwise gesture similarity. Moreover, we perform a diagnostic probing analysis to assess the possibility of recovering interpretable gesture features from the learned representations. Our results show a significant positive correlation with human-annotated gesture similarity and reveal that the similarity between the learned representations is consistent with well-motivated patterns related to the dynamics of dialogue interaction. Furthermore, our findings demonstrate that several features concerning the form of gestures can be recovered from the latent representations. Overall, this study shows that multimodal contrastive learning is a promising approach for learning gesture representations, which opens the door to using such representations in larger-scale gesture analysis studies.
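The multimodal pre-training described in the abstract grounds gesture (skeletal) embeddings in co-occurring speech embeddings via a contrastive objective. The sketch below is not taken from the paper; it is a minimal, generic illustration of a symmetric InfoNCE-style loss of the kind commonly used for such cross-modal alignment, assuming paired per-gesture embedding vectors and a hypothetical batch layout where matched (gesture, speech) pairs share an index.

```python
import numpy as np

def log_softmax(x, axis):
    # Numerically stable log-softmax (subtract the row/column max first).
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def info_nce(gesture_emb, speech_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    Matched (gesture, speech) pairs at the same batch index are positives;
    all other pairings in the batch act as in-batch negatives.
    Illustrative sketch only -- shapes: (batch, dim) for both inputs.
    """
    # L2-normalise so dot products become cosine similarities.
    g = gesture_emb / np.linalg.norm(gesture_emb, axis=1, keepdims=True)
    s = speech_emb / np.linalg.norm(speech_emb, axis=1, keepdims=True)
    logits = g @ s.T / temperature  # (batch, batch) similarity matrix
    idx = np.arange(len(g))         # positives lie on the diagonal
    # Cross-entropy in both directions: gesture->speech and speech->gesture.
    g2s = -log_softmax(logits, axis=1)[idx, idx]
    s2g = -log_softmax(logits, axis=0)[idx, idx]
    return float((g2s + s2g).mean() / 2)

# Toy usage: aligned pairs should incur a lower loss than mismatched ones.
rng = np.random.default_rng(0)
g = rng.normal(size=(8, 16))
loss_aligned = info_nce(g, g)        # each gesture paired with "its" speech
loss_shuffled = info_nce(g, g[::-1]) # pairings deliberately broken
```

In practice the gesture and speech encoders (and a learnable temperature) would be trained jointly to minimise this loss, pulling matched pairs together in the shared embedding space.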
Document type Conference contribution
Note With supplemental material
Language English
Published at https://doi.org/10.1145/3678957.3685707