Deep Learning Techniques for Medical Image Analysis

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: 28 February 2025 | Viewed by 2244

Special Issue Editor


Dr. Zhuhuang Zhou
Guest Editor
Department of Biomedical Engineering, Faculty of Environment and Life, Beijing University of Technology, Beijing 100124, China
Interests: biomedical ultrasonics; quantitative ultrasound for biological tissue characterization; ultrasound wave propagation in biological tissues; medical signal/image processing; artificial intelligence in medicine

Special Issue Information

Dear Colleagues,

In recent years, deep learning techniques have been widely used in medical image analysis. These techniques employ deep neural networks to automatically extract multi-level, multi-scale, and abundant information (features) from image data, which is difficult for conventional machine learning techniques that rely on hand-crafted feature parameters. Deep learning paradigms include supervised learning (task-driven models), unsupervised or generative learning (data-driven models), semi-supervised learning (hybrid task-driven and data-driven models), reinforcement learning (environment-driven models), and physics-informed learning (hybrid task-driven and physics-driven models). The analyzed imaging modalities include structural imaging such as X-ray imaging, computed tomography (CT), magnetic resonance imaging (MRI), ultrasound imaging, and ultrasound computed tomography, as well as functional imaging such as functional MRI, positron emission tomography (PET), single-photon emission computed tomography (SPECT), and functional ultrasound imaging, whether two-dimensional, three-dimensional, or even four-dimensional (three-dimensional plus temporal). The vast applications of deep learning techniques in medical image analysis cover lesion detection and segmentation, disease diagnosis, treatment monitoring, efficacy evaluation, prognostic prediction, and even biomechanical analysis. In addition to medical image post-processing, deep learning techniques can also be applied at the front end (e.g., image reconstruction) to enhance the quality of medical imaging.
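
To make the supervised (task-driven) paradigm concrete, the following is a minimal, hedged sketch of a toy encoder-decoder CNN for 2D medical image segmentation in PyTorch; the architecture, channel counts, class count, and image size are illustrative assumptions, not a model from any of the contributions below.

    import torch
    import torch.nn as nn

    class TinySegNet(nn.Module):
        """Toy encoder-decoder CNN for per-pixel (semantic) segmentation."""
        def __init__(self, in_channels=1, num_classes=2):
            super().__init__()
            # Encoder: stacked convolutions learn low- to high-level features.
            self.encoder = nn.Sequential(
                nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            # Decoder: transposed convolutions restore spatial resolution.
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(16, num_classes, 2, stride=2),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    # Task-driven (supervised) training step with a per-pixel cross-entropy loss.
    model = TinySegNet()
    images = torch.randn(4, 1, 128, 128)        # batch of grayscale images (dummy data)
    masks = torch.randint(0, 2, (4, 128, 128))  # ground-truth segmentation labels (dummy data)
    loss = nn.CrossEntropyLoss()(model(images), masks)
    loss.backward()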

Given the high level of research interest and clinical application prospects, deep learning techniques have continued to develop, especially in the field of medical image analysis. This Special Issue aims to report on state-of-the-art deep learning techniques applied to medical image analysis. Contributions related to deep learning techniques in medical image analysis are welcome.

Dr. Zhuhuang Zhou
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • supervised learning
  • unsupervised learning
  • semi-supervised learning
  • self-supervised learning
  • generative learning
  • deep neural networks
  • convolutional neural networks
  • physics-informed neural networks
  • X-ray imaging
  • computed tomography (CT)
  • magnetic resonance imaging (MRI)
  • ultrasound imaging
  • ultrasound computed tomography
  • functional MRI
  • positron emission tomography (PET)
  • single-photon emission computed tomography (SPECT)
  • functional ultrasound imaging
  • image reconstruction

Published Papers (3 papers)


Research

22 pages, 5659 KiB  
Article
Exploring the Impact of Noise and Image Quality on Deep Learning Performance in DXA Images
by Dildar Hussain and Yeong Hyeon Gu
Diagnostics 2024, 14(13), 1328; https://doi.org/10.3390/diagnostics14131328 - 22 Jun 2024
Viewed by 401
Abstract
Background and Objective: Segmentation of the femur in Dual-Energy X-ray (DXA) images poses challenges due to reduced contrast, noise, bone shape variations, and inconsistent X-ray beam penetration. In this study, we investigate the relationship between noise and certain deep learning (DL) techniques for semantic segmentation of the femur to enhance segmentation and bone mineral density (BMD) accuracy by incorporating noise reduction methods into DL models. Methods: Convolutional neural network (CNN)-based models were employed to segment femurs in DXA images and evaluate the effects of noise reduction filters on segmentation accuracy and their effect on BMD calculation. Various noise reduction techniques were integrated into DL-based models to enhance image quality before training. We assessed the performance of the fully convolutional neural network (FCNN) in comparison to noise reduction algorithms and manual segmentation methods. Results: Our study demonstrated that the FCNN outperformed noise reduction algorithms in enhancing segmentation accuracy and enabling precise calculation of BMD. The FCNN-based segmentation approach achieved a segmentation accuracy of 98.84% and a correlation coefficient of 0.9928 for BMD measurements, indicating its effectiveness in the clinical diagnosis of osteoporosis. Conclusions: In conclusion, integrating noise reduction techniques into DL-based models significantly improves femur segmentation accuracy in DXA images. The FCNN model, in particular, shows promising results in enhancing BMD calculation and clinical diagnosis of osteoporosis. These findings highlight the potential of DL techniques in addressing segmentation challenges and improving diagnostic accuracy in medical imaging. Full article
(This article belongs to the Special Issue Deep Learning Techniques for Medical Image Analysis)
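
As an illustration of the preprocessing idea described in the abstract above, the following is a hedged sketch of denoising a DXA image before segmentation; the specific filters (median and Gaussian via SciPy) and their parameters are assumptions for illustration, not the authors' exact pipeline.

    import numpy as np
    from scipy.ndimage import median_filter, gaussian_filter

    def preprocess_dxa(image: np.ndarray) -> np.ndarray:
        """Reduce noise in a DXA image before CNN/FCN segmentation (illustrative only)."""
        denoised = median_filter(image, size=3)        # suppress impulse-like noise
        smoothed = gaussian_filter(denoised, sigma=1)  # mild smoothing of residual noise
        # Normalize to [0, 1] before passing the array to a segmentation network.
        lo, hi = smoothed.min(), smoothed.max()
        return (smoothed - lo) / (hi - lo + 1e-8)

    # Example usage with a hypothetical model: mask = fcn_model(preprocess_dxa(raw_dxa)[None, None, ...])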

14 pages, 3521 KiB  
Article
Performance Comparison of Convolutional Neural Network-Based Hearing Loss Classification Model Using Auditory Brainstem Response Data
by Jun Ma, Seong Jun Choi, Sungyeup Kim and Min Hong
Diagnostics 2024, 14(12), 1232; https://doi.org/10.3390/diagnostics14121232 - 12 Jun 2024
Viewed by 420
Abstract
This study evaluates the efficacy of several Convolutional Neural Network (CNN) models for the classification of hearing loss in patients using preprocessed auditory brainstem response (ABR) image data. Specifically, we employed six CNN architectures—VGG16, VGG19, DenseNet121, DenseNet-201, AlexNet, and InceptionV3—to differentiate between patients with hearing loss and those with normal hearing. A dataset comprising 7990 preprocessed ABR images was utilized to assess the performance and accuracy of these models. Each model was systematically tested to determine its capability to accurately classify hearing loss. A comparative analysis of the models focused on metrics of accuracy and computational efficiency. The results indicated that the AlexNet model exhibited superior performance, achieving an accuracy of 95.93%. The findings from this research suggest that deep learning models, particularly AlexNet in this instance, hold significant potential for automating the diagnosis of hearing loss using ABR graph data. Future work will aim to refine these models to enhance their diagnostic accuracy and efficiency, fostering their practical application in clinical settings. Full article
(This article belongs to the Special Issue Deep Learning Techniques for Medical Image Analysis)
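
For readers who want a concrete starting point, the following is a hedged sketch of fine-tuning a pretrained AlexNet (the best-performing architecture reported above) for two-class ABR image classification with torchvision; the pretrained weights, head replacement, and commented hyperparameters are standard-practice assumptions rather than the authors' training code.

    import torch.nn as nn
    from torchvision import models

    # Load an ImageNet-pretrained AlexNet and replace its final layer with a
    # two-class head (hearing loss vs. normal hearing).
    model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    model.classifier[6] = nn.Linear(4096, 2)

    # Typical fine-tuning setup (hyperparameters are assumptions):
    # optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # criterion = nn.CrossEntropyLoss()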

12 pages, 3205 KiB  
Article
Deep Learning Detection and Segmentation of Facet Joints in Ultrasound Images Based on Convolutional Neural Networks and Enhanced Data Annotation
by Lingeer Wu, Di Xia, Jin Wang, Si Chen, Xulei Cui, Le Shen and Yuguang Huang
Diagnostics 2024, 14(7), 755; https://doi.org/10.3390/diagnostics14070755 - 2 Apr 2024
Viewed by 775
Abstract
Facet joint injection is the most common procedure used to relieve lower back pain. In this paper, we proposed a deep learning method for detecting and segmenting facet joints in ultrasound images based on convolutional neural networks (CNNs) and enhanced data annotation. In the enhanced data annotation, the facet joint was considered the first target and the ventral complex the second target to improve the capability of CNNs in recognizing the facet joint. A total of 300 cases of patients undergoing pain treatment were included. The ultrasound images were captured and labeled by two professional anesthesiologists and then augmented to train a deep learning model based on the Mask Region-based CNN (Mask R-CNN). The performance of the deep learning model was evaluated using the average precision (AP) on the testing sets. The data augmentation and data annotation methods were found to improve the AP. The AP50 for facet joint detection and segmentation was 90.4% and 85.0%, respectively, demonstrating the satisfactory performance of the deep learning model. We presented a deep learning method for facet joint detection and segmentation in ultrasound images based on enhanced data annotation and the Mask R-CNN. The feasibility and potential of deep learning techniques in facet joint ultrasound image analysis have been demonstrated. Full article
(This article belongs to the Special Issue Deep Learning Techniques for Medical Image Analysis)
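
As a concrete illustration of the approach described above, the following is a hedged sketch of adapting torchvision's Mask R-CNN to the two annotated targets (facet joint and ventral complex); it follows the standard torchvision fine-tuning recipe and is not the authors' implementation.

    from torchvision.models.detection import maskrcnn_resnet50_fpn
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
    from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

    num_classes = 3  # background + facet joint + ventral complex
    model = maskrcnn_resnet50_fpn(weights="DEFAULT")

    # Replace the box-prediction head for the new class count.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # Replace the mask-prediction head for the new class count.
    in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)

    # Training then follows the usual torchvision detection loop; AP/AP50 on the
    # test set can be computed with a COCO-style evaluator (an assumption here).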
