DI-UMONS : Institutional repository of the University of Mons

2021-09-06 - Article / In a peer-reviewed journal - English - 22 page(s)

Belabed Tarek, Ramos Gomes Da Silva Vitor, Quenon Alexandre, Valderrama Carlos, Souani Chokri, "A Novel Automate Python Edge-to-Edge: From Automate Generation On Cloud To User Application Deployment on Edge of Deep Neural Networks For Low Power IoT Systems FPGA-Based Acceleration" in Sensors

  • Publisher: Multidisciplinary Digital Publishing Institute (MDPI) (Switzerland)
  • CREF codes: Digital processing units (DI2561), Artificial intelligence (DI1180), Electronic component technology [microelectronics] (DI2521), Semiconductors (DI2512), Integrated circuits (DI2531), Application software (DI2574), Applied computer science, software (DI2570)
  • UMONS research units: Electronics and Microelectronics (F109)
  • UMONS institutes: Institut de Recherche en Technologies de l’Information et Sciences de l’Informatique (InforTech), Institut NUMEDIART pour les Technologies des Arts Numériques (Numédiart)
Full text:

Abstract(s):

(English) Deploying Deep Neural Networks (DNNs) for IoT Edge applications requires strong skills in both hardware and software. In this paper, a novel, fully automated design framework for Edge applications is proposed to perform such a deployment on System-on-Chips. Based on a high-level Python interface that mimics the leading Deep Learning software frameworks, it offers an easy way to implement a hardware-accelerated DNN on an FPGA. The design methodology covers three main phases: (a) customization, where the user specifies the optimizations needed on each DNN layer; (b) generation, where the framework generates on the Cloud the binaries needed for both the FPGA and the software parts; and (c) deployment, where the Edge SoC receives the resulting files used to program the FPGA, together with the related Python libraries for user applications. Among the case studies, an optimized DNN for the MNIST database runs more than 60× faster than a software version on the ZYNQ 7020 SoC while consuming less than 0.43 W. A comparison with state-of-the-art frameworks demonstrates that our methodology offers the best trade-off between throughput, power consumption, and system cost.
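
This record does not reproduce the framework's actual API. As a purely illustrative aid, the self-contained Python sketch below mirrors the three-phase workflow described in the abstract (customization, Cloud-side generation, Edge-side deployment); every class and method name (EdgeDNN, Layer, generate_on_cloud, deploy_to_edge) is a hypothetical stand-in, not the published interface.

```python
# Hypothetical sketch of the Edge-to-Edge workflow summarized in the abstract:
# (a) customization, (b) Cloud-side generation, (c) Edge deployment.
# None of these names are taken from the actual framework.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Layer:
    """One DNN layer plus the per-layer optimization requested by the user."""
    kind: str                    # e.g. "dense", "conv2d"
    units: int
    optimization: str = "none"   # e.g. "quantize8", "loop_unroll"


@dataclass
class EdgeDNN:
    """Toy container standing in for the high-level Python interface."""
    name: str
    layers: List[Layer] = field(default_factory=list)

    def add(self, layer: Layer) -> None:
        self.layers.append(layer)

    # (b) generation: in the real flow this step would run on the Cloud and
    # produce an FPGA bitstream plus the matching Python runtime libraries.
    def generate_on_cloud(self) -> dict:
        return {
            "bitstream": f"{self.name}.bit",
            "runtime": f"{self.name}_runtime.tar.gz",
            "layers": [(layer.kind, layer.units, layer.optimization)
                       for layer in self.layers],
        }

    # (c) deployment: the Edge SoC (e.g. a ZYNQ 7020) would receive these
    # artifacts, program the FPGA, and expose the accelerator to user code.
    @staticmethod
    def deploy_to_edge(artifacts: dict) -> None:
        print(f"Programming FPGA with {artifacts['bitstream']} "
              f"({len(artifacts['layers'])} layers)")


# (a) customization: the user describes the network and per-layer
# optimizations in a style that mimics mainstream Deep Learning frameworks.
model = EdgeDNN("mnist_classifier")
model.add(Layer("dense", 128, optimization="quantize8"))
model.add(Layer("dense", 10))

artifacts = model.generate_on_cloud()
EdgeDNN.deploy_to_edge(artifacts)
```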

Notes:
  • (English) Special Issue "Internet of Things, Sensing and Cloud Computing"
Identifiers:
  • ISSN : 1424-8220

Keywords:
  • (English) Field Programmable Gate Array (FPGA)
  • (English) Cloud Computing
  • (English) Low-Cost
  • (English) High-Level Synthesis (HLS) Tools
  • (English) Edge Computing
  • (English) Internet of Things (IoT)
  • (English) Deep Neural Networks (DNNs)
  • (English) Python Framework
  • (English) Low-Power
  • (English) Hardware Acceleration