DI-UMONS: Institutional repository of the University of Mons

2021-08-26 - Colloquium/Presentation - oral communication - English - 31 page(s)

Belabed Tarek, Quenon Alexandre, Ramos Gomes Da Silva Vitor, Valderrama Carlos, Souani Chokri, "Full Python Interface Control: Auto Generation And Adaptation of Deep Neural Networks For Edge Computing and IoT Applications FPGA-Based Acceleration" in International Symposium on INnovations in Intelligent SysTems and Applications, Kocaeli, Turkey, 2021

  • CREF codes: Digital processing units (DI2561), Artificial intelligence (DI1180), Electronic component technology [microelectronics] (DI2521), Semiconductors (DI2512), Integrated circuits (DI2531), Application software (DI2574), Applied computing, software (DI2570)
  • UMONS research units: Electronics and Microelectronics (F109)
  • UMONS institutes: Institut de Recherche en Technologies de l’Information et Sciences de l’Informatique (InforTech), Institut NUMEDIART pour les Technologies des Arts Numériques (Numédiart)
Abstract(s):

(English) FPGAs are gaining popularity as the target of choice for the efficient implementation of Deep Neural Network (DNN) approaches. Modern SoCs with integrated FPGAs have low-power on-chip processors and sufficient interfaces to accommodate the most commonly deployed Internet of Things (IoT) devices. However, developing DNN hardware accelerators using integrated FPGAs remains a complicated task due to the complexity of reconfigurable computing and the limited hardware resources of embedded devices. In addition, it is necessary to master High-Level Synthesis (HLS) tools and the hidden philosophy driving their RTL design. This paper presents our Python framework to fully customize and automate the generation and deployment of FPGA-based DNN topologies for Edge Computing. Our framework environment, Jupyter Notebooks, allows users to customize their desired hardware DNN and its related applications on Xilinx’s Pynq boards. Subsequently, the framework automatically generates TCL (Tool Command Language) scripts driving the HLS tools on the host server or cloud. Once the desired FPGA-based architecture is generated, the framework retrieves the bitstream to configure the FPGA. The user can then deploy this bitstream to accelerate any Python application that runs the same DNN model. The experimental results show that our framework achieves a 59.8× speedup for a 784-32-32-10 topology while consuming less than 0.266 W.
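The abstract describes a Python layer that turns a user-chosen DNN topology into TCL scripts driving the HLS tools. As a minimal, hypothetical sketch of that step (the function name, file names, and FPGA part below are illustrative assumptions, not the authors' actual framework code; the TCL commands are standard Vivado HLS ones), such a generator might look like:

```python
def generate_hls_tcl(topology, project="dnn_accel", top="dnn_forward"):
    """Emit a TCL script that builds an HLS project for a fully
    connected DNN topology such as [784, 32, 32, 10].

    Hypothetical sketch: file names, project names, and the target
    part are illustrative, not the paper's actual framework.
    """
    # Encode the layer widths as compile-time constants for the HLS C++ source.
    defines = " ".join(f"-DL{i}={w}" for i, w in enumerate(topology))
    lines = [
        f"open_project {project}",
        f"set_top {top}",
        f'add_files dnn_forward.cpp -cflags "{defines}"',
        "open_solution sol1",
        "set_part {xc7z020clg400-1}",   # Zynq-7020, as found on Pynq-Z1/Z2 boards
        "create_clock -period 10",      # 100 MHz target clock
        "csynth_design",                # run C synthesis
        "export_design -format ip_catalog",
        "exit",
    ]
    return "\n".join(lines)

# The 784-32-32-10 topology evaluated in the paper:
script = generate_hls_tcl([784, 32, 32, 10])
print(script)
```

On the board side, the resulting bitstream would typically be loaded from Python with PYNQ's `Overlay` class (`Overlay("dnn_accel.bit")`), which is how a Pynq application picks up a freshly generated design.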

Notes:
  • (English) Second Best Paper Award
Identifiers:
  • FNRS : framework

Keywords:
  • (English) Cloud Computing
  • (English) Edge Computing
  • (English) DNN
  • (English) IoT
  • (English) FPGA
  • (English) Python
  • (English) framework