DI-UMONS: Institutional Repository of the University of Mons

2018-03-26 - Colloquium/Presentation - poster - English - 1 page(s)

Larhmam Mohamed, El Adoui Mohammed, Benjelloun Mohammed, "Evaluating Deep Features for Breast Cancer Response Prediction in MRI Based on GPU Learning" in AI & Deep Learning Conference, GTC 2018, NVIDIA, San Jose, California, USA, 2018

  • CREF codes: Imaging techniques and image processing (DI2770), Artificial intelligence (DI1180), Medical imaging, radiology, tomography (DI3243)
  • UMONS research units: Computer Science (F114)
  • UMONS institutes: Institut de Recherche en Technologies de l’Information et Sciences de l’Informatique (InforTech)

Abstract(s):

(English) The purpose of our research is to reduce the number of unnecessary chemotherapy sessions during breast cancer treatment. Currently, patients with breast cancer undergo several chemotherapy sessions without knowing whether they will respond positively or negatively, which causes a great loss of time. In this work, we used a retrospective study of 40 patients with local breast cancer provided by our collaborating radiology center, the Institut Jules Bordet in Brussels. The database was obtained under a protocol approved by the ethical committee of the institute after it reviewed our research objective and proposed methods. Our dataset includes the anatomical pathology as the standard reference for breast tumor response to chemotherapy. We propose a multi-input neural network that receives the contrast MRI slices acquired before and after chemotherapy and outputs the predicted response probability. The poster describes ongoing work that has recently started. We plan to:
  • improve the current architecture to prevent overfitting;
  • compare the efficiency of newer CNN architectures such as DenseNet, NASNet, and ResNet;
  • use multiple GPUs to accelerate AutoML deep learning training.
Experimentation in this study is carried out on a Linux cluster node with 32 CPU cores and a single NVIDIA GeForce GTX 980 GPU with 4 GB of memory, using Keras 2 with the TensorFlow backend.
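
Below is a minimal sketch of the multi-input architecture described above, written for Keras 2 with the TensorFlow backend (the stack named in the abstract). The 64x64 slice resolution, the layer sizes, and the use of two independent convolutional branches are illustrative assumptions, not the authors' exact model.

    # Illustrative sketch only: input resolution and layer sizes are
    # assumptions, not the published architecture.
    from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, concatenate
    from keras.models import Model

    def cnn_branch(inputs):
        # Small convolutional feature extractor applied to one MRI input.
        x = Conv2D(32, (3, 3), activation='relu')(inputs)
        x = MaxPooling2D((2, 2))(x)
        x = Conv2D(64, (3, 3), activation='relu')(x)
        x = MaxPooling2D((2, 2))(x)
        return Flatten()(x)

    # Two inputs: one contrast MRI slice acquired before chemotherapy,
    # one acquired after. Each input gets its own branch in this sketch.
    pre_chemo = Input(shape=(64, 64, 1), name='pre_chemo_slice')
    post_chemo = Input(shape=(64, 64, 1), name='post_chemo_slice')
    features = concatenate([cnn_branch(pre_chemo), cnn_branch(post_chemo)])

    x = Dense(128, activation='relu')(features)
    # Single sigmoid unit: the predicted probability of a positive
    # response to chemotherapy.
    response_prob = Dense(1, activation='sigmoid')(x)

    model = Model(inputs=[pre_chemo, post_chemo], outputs=response_prob)
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])

For the planned multi-GPU training, Keras 2 provides keras.utils.multi_gpu_model, which replicates a model across several devices; the single GTX 980 setup described above would not need it.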

Notes:
  • (English) This work was accepted by the review committee of the GTC 2018 conference in California, USA.