DI-UMONS: Institutional Repository of the University of Mons

Cross search
(including publication titles, journal titles, and conference names)
2020-08-19 - Article / In a peer-reviewed journal - English - 10 page(s) (Submitted)

Brousmiche Mathilde, Rouat Jean, Dupont Stéphane, "Multi-level Attention Fusion Network for Audio-visual Event Recognition" in Information Fusion

  • Publisher: Elsevier Science
  • CREF codes: Artificial intelligence (DI1180)
  • UMONS research units: Information, Signal et Intelligence artificielle (F105)
  • UMONS institutes: Institut NUMEDIART pour les Technologies des Arts Numériques (Numédiart)
Full text:

Abstract(s):

(English) Event classification is inherently sequential and multimodal. Therefore, deep neural models need to dynamically focus on the most relevant time window and/or modality of a video. In this study, we propose the Multi-level Attention Fusion network (MAFnet), an architecture that can dynamically fuse visual and audio information for event recognition. Inspired by prior studies in neuroscience, we couple both modalities at different levels of the visual and audio paths. Furthermore, the network dynamically highlights the modality that is most relevant for classifying the event at a given time window. Experimental results on the AVE (Audio-Visual Event), UCF51, and Kinetics-Sounds datasets show that the approach can effectively improve accuracy in audio-visual event classification. Code is available at: https://github.com/numediart/MAFnet
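To make the fusion idea concrete, the following is a minimal NumPy sketch of attention-weighted audio-visual fusion: per-window modality attention selects the more informative modality at each time step, and temporal attention pools the most relevant windows. This is an illustrative toy, not the authors' MAFnet implementation (which is at the linked GitHub repository); all dimensions and the random projection vectors here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
T, D = 8, 16                        # time windows, feature dimension (hypothetical)
audio = rng.normal(size=(T, D))     # stand-in per-window audio features
visual = rng.normal(size=(T, D))    # stand-in per-window visual features

# Modality attention: a projection (random here; learned in practice)
# scores each modality at each time window, softmax yields weights.
w_mod = rng.normal(size=(D,))
scores = np.stack([audio @ w_mod, visual @ w_mod], axis=-1)   # (T, 2)
alpha = softmax(scores, axis=-1)                              # (T, 2)

# Fuse: per-window convex combination of the two modalities.
fused = alpha[:, 0:1] * audio + alpha[:, 1:2] * visual        # (T, D)

# Temporal attention: weight the most relevant windows, then pool
# into a single clip-level representation for classification.
w_time = rng.normal(size=(D,))
beta = softmax(fused @ w_time)                                # (T,)
clip_repr = beta @ fused                                      # (D,)

assert np.allclose(alpha.sum(axis=-1), 1.0)   # weights per window sum to 1
assert clip_repr.shape == (D,)
```

In MAFnet this coupling happens at several levels of the audio and visual paths rather than once at the end, but the per-window softmax weighting shown above is the core mechanism by which a modality can be emphasized or suppressed over time.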

Keywords:
  • (English) Audio-visual fusion
  • (English) Modality conditioning
  • (English) Event recognition
  • (English) Attention
  • (English) Multimodal deep learning