GRIP: Generating Interaction Poses Using Spatial Cues and Latent Consistency

Open Access
Authors
  • D. Ceylan
  • S. Pirk
  • M.J. Black
Publication date 2024
Book title 2024 International Conference on 3D Vision
Book subtitle 3DV 2024: 18-21 March 2024, Davos, Switzerland: proceedings
ISBN
  • 9798350362466
ISBN (electronic)
  • 9798350362459
Event 11th International Conference on 3D Vision
Pages (from-to) 933-943
Publisher Piscataway, NJ: IEEE Computer Society
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Hands are dexterous and highly versatile manipulators that are central to how humans interact with objects and their environment. Consequently, modeling realistic hand-object interactions, including the subtle motion of individual fingers, is critical for applications in computer graphics, computer vision, and mixed reality. Prior work on capturing and modeling humans interacting with objects in 3D focuses on the body and object motion, often ignoring hand pose. In contrast, we introduce GRIP, a learning-based method that takes, as input, the 3D motion of the body and the object, and synthesizes realistic motion for both hands before, during, and after object interaction. As a preliminary step before synthesizing the hand motion, we first use a network, ANet, to denoise the arm motion. Then, we leverage the spatio-temporal relationship between the body and the object to extract novel temporal interaction cues, and use them in a two-stage inference pipeline to generate the hand motion. In the first stage, we introduce a new approach to encourage motion temporal consistency in the latent space (LTC) and generate consistent interaction motions. In the second stage, GRIP generates refined hand poses to avoid hand-object penetrations. Given sequences of noisy body and object motion, GRIP “upgrades” them to include hand-object interaction. Quantitative experiments and perceptual studies demonstrate that GRIP outperforms baseline methods and generalizes to unseen objects and motions from different motion-capture datasets. Our models and code are available for research purposes at https://grip.is.tue.mpg.de.
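The pipeline described in the abstract (arm denoising with ANet, temporal interaction cues, then two-stage hand synthesis) can be sketched schematically as below. This is a toy sketch, not the authors' implementation: every function body here is a placeholder (moving-average filters and random linear maps stand in for the learned networks), and only the stage names come from the abstract.

```python
import numpy as np

def denoise_arm_motion(arm_motion, window=5):
    """Stand-in for ANet: a simple moving-average filter over time.
    The real ANet is a learned denoising network."""
    smoothed = np.empty_like(arm_motion)
    kernel = np.ones(window) / window
    for d in range(arm_motion.shape[1]):
        smoothed[:, d] = np.convolve(arm_motion[:, d], kernel, mode="same")
    return smoothed

def interaction_cues(body_motion, object_motion):
    """Toy 'temporal interaction cue': per-frame body-object distance and its
    rate of change. The paper's cues are richer spatio-temporal features."""
    dist = np.linalg.norm(body_motion - object_motion, axis=1, keepdims=True)
    vel = np.gradient(dist, axis=0)
    return np.concatenate([dist, vel], axis=1)

def stage1_consistent_hands(cues, latent_dim=8):
    """Stage 1: encode cues into a latent trajectory, smooth it over time
    (standing in for latent temporal consistency, LTC), decode hand poses."""
    rng = np.random.default_rng(0)  # placeholder weights, not trained
    enc = rng.standard_normal((cues.shape[1], latent_dim))
    dec = rng.standard_normal((latent_dim, 45))  # e.g. 15 joints x 3 DoF per hand
    z = cues @ enc
    z = denoise_arm_motion(z, window=3)  # temporal smoothing as an LTC stand-in
    return z @ dec

def stage2_refine(hand_poses, penetration_signal):
    """Stage 2: nudge poses along a correction signal to reduce hand-object
    penetration. The paper uses a learned refinement network instead."""
    return hand_poses - 0.5 * penetration_signal

def grip_pipeline(body_motion, object_motion, arm_motion):
    """End-to-end sketch: denoise arms, extract cues, synthesize, refine."""
    arm = denoise_arm_motion(arm_motion)
    cues = interaction_cues(body_motion, object_motion)
    hands = stage1_consistent_hands(cues)
    hands = stage2_refine(hands, np.zeros_like(hands))  # no penetrations in this toy
    return arm, hands
```

The point of the sketch is only the data flow: body and object trajectories in, denoised arm motion plus per-frame hand poses out, with temporal consistency enforced in the latent space before a separate refinement pass.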
Document type Conference contribution
Note With supplemental items.
Language English
Published at
  • https://doi.org/10.48550/arXiv.2308.11617
  • https://doi.org/10.1109/3DV62453.2024.00064
Other links
  • https://www.proceedings.com/74990.html
  • https://grip.is.tue.mpg.de
Downloads
  • 2308.11617v2 (Accepted author manuscript)