Recent deep learning-based brain tumor segmentation models using multi-modality magnetic resonance imaging: a prospective survey

Front Bioeng Biotechnol. 2024 Jul 22:12:1392807. doi: 10.3389/fbioe.2024.1392807. eCollection 2024.

Abstract

Radiologists face significant challenges when segmenting and characterizing brain tumors in patients, as this information is essential for treatment planning. Artificial intelligence (AI), and deep learning (DL) in particular, has emerged as a useful tool in healthcare, aiding radiologists in their diagnostic processes. It enables radiologists to better understand tumor biology and to provide personalized care to patients with brain tumors. The segmentation of brain tumors using multi-modal magnetic resonance imaging (MRI) has received considerable attention. In this survey, we first discuss the available MRI modalities and their properties. We then review the most recent DL-based models for brain tumor segmentation using multi-modal MRI, dividing them into three parts by architecture: models built on a convolutional neural network (CNN) backbone, vision transformer-based models, and hybrid models that combine CNNs and transformers. In addition, we perform an in-depth statistical analysis of recent publications, frequently used datasets, and evaluation metrics for segmentation tasks. Finally, we identify open research challenges and suggest promising future directions for brain tumor segmentation to improve diagnostic accuracy and treatment outcomes for patients with brain tumors. This aligns with public health goals of using health technologies for better healthcare delivery and population health management.
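Among the evaluation metrics commonly reported for brain tumor segmentation, the Dice similarity coefficient is the standard overlap measure in benchmarks such as BraTS. As a minimal illustrative sketch (the function name and toy masks below are our own, not taken from the survey), it can be computed with NumPy as:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary segmentation masks.

    Dice = 2 * |pred AND target| / (|pred| + |target|); eps avoids
    division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Toy 2D masks standing in for a predicted tumor sub-region vs. ground truth.
pred = np.array([[1, 1, 0],
                 [0, 1, 0],
                 [0, 0, 0]])
truth = np.array([[1, 1, 0],
                  [0, 0, 0],
                  [0, 0, 0]])
print(round(dice_score(pred, truth), 3))  # 2*2 / (3+2) = 0.8
```

A Dice score of 1.0 indicates perfect overlap and 0.0 indicates none; in practice it is computed per tumor sub-region (e.g., whole tumor, tumor core, enhancing tumor) and averaged across cases.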

Keywords: brain tumor segmentation; convolutional neural network; deep learning; medical images; multi-modality analysis; vision transformers.

Publication types

  • Review

Grants and funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This research was also supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the National Program for Excellence in SW (2019-0-01880), supervised by the IITP (Institute of Information and Communications Technology Planning and Evaluation).