DI-UMONS: Institutional Repository of the University of Mons

2014-04-01 - Conference / Paper in peer-reviewed proceedings - English - 4 page(s)

Tilmanne Joëlle, D'Alessandro Nicolas, Ravet Thierry, Astrinaki Maria, Moinet Alexis, "Full-Body Gait Reconstruction Using Covariance-Based Mapping Within a Realtime HMM-based Framework" in The 50th Annual Convention of the AISB, pp. 313-316, Goldsmiths, University of London, London, UK, 2014

  • CREF codes: Engineering sciences (DI2000), Information and communication technologies (ICT) (DI4730)
  • UMONS research units: Information, Signal and Artificial Intelligence (F105)
  • UMONS institutes: Institut NUMEDIART pour les Technologies des Arts Numériques (Numédiart)

Abstract:

(English) In this paper we propose a new HMM-based framework for the exploration of realtime gesture-to-gesture mapping strategies. This framework enables the realtime HMM-based recognition of a given gesture sequence from a subset of its dimensions, the covariance-based mapping of the gesture stylistics from this subset onto the remaining dimensions, and the realtime synthesis of the remaining dimensions from their corresponding HMMs. This idea has been embedded into a proof-of-concept prototype that "reconstructs" the lower-body dimensions of a walking sequence from the upper-body gestures in realtime. In order to achieve this reconstruction, we adapt various machine learning tools from speech processing research. Notably, we have adapted the HTK toolkit to motion capture data and modified MAGE, an HTS-based library for reactive speech synthesis, to accommodate our use case. We have also adapted a covariance-based mapping strategy, used in the articulatory inversion process of silent speech interfaces, to the case of transferring stylistic information from the upper-body to the lower-body statistical models. The main achievement of this work is to show that this reconstruction process applies the inherent stylistics of the input gestures onto the synthesised motion, thanks to the mapping function applied at the state level.
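
A covariance-based mapping applied at the state level, as described in the abstract, can be read as a conditional-Gaussian regression from the observed (upper-body) dimensions onto the synthesised (lower-body) dimensions of each HMM state. The sketch below assumes that interpretation; the function name, covariance block layout and toy dimensions are illustrative and not taken from the paper.

    # Minimal sketch, assuming each HMM state models upper- and lower-body
    # features as jointly Gaussian; the lower-body mean is shifted by the
    # conditional-Gaussian regression of the observed upper-body frame.
    import numpy as np

    def map_state_mean(x_upper, mu_upper, mu_lower, cov_uu, cov_lu):
        """Lower-body mean conditioned on the observed upper-body frame.

        x_upper  : observed upper-body feature vector for the current state
        mu_upper : state mean of the upper-body dimensions
        mu_lower : state mean of the lower-body dimensions
        cov_uu   : upper/upper block of the state's joint covariance
        cov_lu   : lower/upper cross-covariance block
        """
        # mu_{l|u} = mu_l + Sigma_lu . Sigma_uu^{-1} . (x_u - mu_u)
        return mu_lower + cov_lu @ np.linalg.solve(cov_uu, x_upper - mu_upper)

    # Toy usage with a random, well-conditioned joint covariance.
    rng = np.random.default_rng(0)
    d_u, d_l = 6, 9                        # hypothetical feature sizes
    A = rng.standard_normal((d_u + d_l, d_u + d_l))
    cov = A @ A.T + np.eye(d_u + d_l)      # joint covariance of one state
    mu = rng.standard_normal(d_u + d_l)
    x_u = rng.standard_normal(d_u)
    mu_l_given_u = map_state_mean(x_u, mu[:d_u], mu[d_u:],
                                  cov[:d_u, :d_u], cov[d_u:, :d_u])
    print(mu_l_given_u.shape)              # (9,)

In this reading, the stylistic transfer comes from the cross-covariance term: deviations of the input gesture from the upper-body state mean are propagated into the lower-body state mean before synthesis.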