Soulaimane Guedria - A scalable, component-based deep learning platform: an application to convolutional neural networks for medical image segmentation

14:00, Wednesday 8 July 2020
Organized by: Soulaimane Guedria
Speaker: Soulaimane Guedria

The presentation will be given in French and will take place on Wednesday, July 8 at 14:00 in room 447 of the IMAG Building (Université Grenoble Alpes), subject to available seating.

Jury members:

  • Lionel Seinturier, Professor, Université de Lille, reviewer
  • Alain Tchana, Professor, Université de Lyon, reviewer
  • Philippe Sabatier, Professor, Université de Lyon, examiner
  • François Esteve, Professor and Hospital Practitioner, Université Grenoble Alpes, examiner
  • Noël De Palma, Professor, Université Grenoble Alpes, thesis supervisor
  • Nicolas Vuillerme, Associate Professor, Université Grenoble Alpes, thesis co-supervisor


Deep neural networks (DNNs), and in particular convolutional neural networks (CNNs) trained on large datasets, have achieved great success across a wide range of critical applications. They provide powerful solutions and are revolutionizing medicine, particularly in the field of medical image analysis. However, deep learning raises multiple challenges: (1) training CNNs is a computationally intensive and time-consuming task; (2) introducing parallelism to CNNs in practice is a tedious, repetitive and error-prone process; and (3) there is currently no broad study of the generalizability and reproducibility of CNN parallelism techniques on concrete medical image segmentation applications.
Within this context, the present PhD thesis aims to tackle these challenges. To that end, we designed, implemented and validated an all-in-one, scalable, component-based deep learning parallelism platform for medical image segmentation. First, we introduce R2D2, an end-to-end scalable deep learning toolkit for medical image segmentation. R2D2 provides new distributed versions of widely used deep learning architectures (FCN and U-Net) in order to speed up the development of new distributed deep learning models and narrow the gap between researchers and expertise-intensive distributed deep learning. Next, the thesis introduces Auto-CNNp, a component-based software framework that automates CNN parallelism by encapsulating and hiding the typical CNN parallelization routine tasks within a backbone structure, while remaining extensible for user-specific customization. The evaluation results of our automated component-based approach are promising: they show a significant speedup of the CNN parallelization task at the cost of a negligible framework execution time, compared to a manual parallelization strategy.
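The abstract does not spell out Auto-CNNp's internals, but as a hedged illustration of the kind of repetitive parallelization boilerplate such a framework is meant to encapsulate, the sketch below wires a toy segmentation CNN into data-parallel training with Horovod and tf.keras. The model, synthetic data and hyperparameters are placeholder assumptions, not R2D2 or Auto-CNNp code.

```python
# Hedged sketch: illustrates the data-parallel training boilerplate that a
# framework like Auto-CNNp would encapsulate, using Horovod with tf.keras.
# The toy model and synthetic data are placeholders, not thesis code.
import numpy as np
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()                                           # one process per GPU/worker

# Pin each worker process to a single local GPU, if any are present.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

# Stand-in for a segmentation CNN such as U-Net (kept tiny for readability).
inputs = tf.keras.Input(shape=(64, 64, 1))
x = tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu")(inputs)
outputs = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)

# Common practice: scale the learning rate with the number of workers.
opt = tf.keras.optimizers.SGD(learning_rate=0.01 * hvd.size(), momentum=0.9)
opt = hvd.DistributedOptimizer(opt)                  # adds gradient all-reduce

model.compile(optimizer=opt, loss="binary_crossentropy")

callbacks = [
    # Broadcast initial weights from rank 0 so every worker starts identically.
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
]

# Synthetic images/masks standing in for a shard of a medical imaging dataset.
images = np.random.rand(32, 64, 64, 1).astype("float32")
masks = (np.random.rand(32, 64, 64, 1) > 0.5).astype("float32")

model.fit(images, masks, batch_size=8, epochs=2, callbacks=callbacks,
          verbose=1 if hvd.rank() == 0 else 0)
```

Such a script is typically launched once per worker (for example with `horovodrun -np 4 python train.py`); this per-experiment plumbing is exactly the kind of routine task a component-based automation layer can hide.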
With these two software solutions (R2D2 and Auto-CNNp) at our disposal, we then conducted a thorough and practical analysis of the generalizability of CNN parallelism techniques to medical image segmentation applications. Concurrently, we performed an in-depth literature review to identify the sources of variability and to study the reproducibility issues of the deep learning training process for particular CNN training configurations applied to medical image segmentation. We also drew up a set of good-practice recommendations aiming to alleviate these reproducibility issues for the training of medical image segmentation DNNs.
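As one concrete example of the good practices such recommendations typically include (a hedged sketch, not the thesis's exact checklist), pinning every pseudo-random source before training removes a major source of run-to-run variability:

```python
# Hedged sketch of one common reproducibility measure for CNN training:
# pinning every pseudo-random source and requesting deterministic kernels.
# The recommendations in the thesis may go further (data ordering, library
# versions, hardware), but this is a typical starting point.
import os
import random

import numpy as np
import tensorflow as tf


def fix_seeds(seed: int = 42) -> None:
    """Pin the RNGs that influence weight init, shuffling and augmentation."""
    os.environ["PYTHONHASHSEED"] = str(seed)   # hash-based ordering in Python
    random.seed(seed)                          # Python stdlib RNG
    np.random.seed(seed)                       # NumPy RNG (augmentation, splits)
    tf.random.set_seed(seed)                   # TensorFlow global RNG


# Ask TensorFlow for deterministic GPU kernels where they exist.
os.environ["TF_DETERMINISTIC_OPS"] = "1"
fix_seeds(42)
```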
Finally, based on a broad analysis of the results of the CNN parallelism experimental study, we make a number of observations that led us to propose guidelines and recommendations for scaling up CNNs for segmentation applications. Following these guidelines, we were able to eliminate the accuracy loss with scale for the U-Net architecture and to alleviate the accuracy degradation for the FCN architecture.
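One widely used ingredient of such scaling guidelines (named here as an illustration, not as the thesis's exact recipe) is linear learning-rate scaling with a gradual warmup, which compensates for the larger effective batch size introduced by data parallelism:

```python
# Hedged sketch of linear learning-rate scaling with warmup, a common technique
# for limiting the accuracy loss that appears when the effective batch size
# grows with the number of workers. Values below are illustrative only.
def scaled_lr(base_lr: float, n_workers: int, epoch: int,
              warmup_epochs: int = 5) -> float:
    """Ramp the learning rate from base_lr up to base_lr * n_workers."""
    target_lr = base_lr * n_workers
    if epoch < warmup_epochs:
        # Linear interpolation during the warmup phase.
        return base_lr + (target_lr - base_lr) * (epoch + 1) / warmup_epochs
    return target_lr


if __name__ == "__main__":
    # With 8 workers and base_lr = 0.01, the LR warms from 0.024 up to 0.08.
    for epoch in range(7):
        print(epoch, round(scaled_lr(0.01, 8, epoch), 4))
```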