
Artificial Intelligence Using Multisensory and Multimodality Information for Healthcare Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (31 October 2023) | Viewed by 8085

Special Issue Editors

School of Computer Science, University of Nottingham, Nottingham NG8 1BB, UK
Interests: medical image analysis; computer vision; machine learning

Co-Guest Editor
School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
Interests: medical imaging; machine learning; deep learning

Special Issue Information

Dear Colleagues,

Modern artificial intelligence (AI) methods have achieved excellent performance in a variety of clinical applications, including treatment planning, disease diagnosis, treatment response prediction, and drug discovery. To further improve the intelligence and performance of AI solutions in these clinical areas, it is crucial to comprehensively collect, understand, interpret, and integrate information from multiple sources (e.g., medical devices, self-reported questionnaires, imaging, and electronic health records). This Special Issue aims to collect and report state-of-the-art methods that utilize multisensory and multimodality information for healthcare applications. Authors are invited to submit outstanding and original research manuscripts focusing on one of the following topics (a brief illustrative fusion sketch follows the lists below):

  • Multisensory and/or multimodality information fusion.
  • Novel sensory techniques or data modalities for healthcare applications.
  • Machine learning methods based on multisensory and/or multimodality data.
  • Feature selection or dimensionality reduction methods for multisensory and/or multimodality data.
  • Data mining from multiple resources.
  • Uncertainty in multisensory/multimodality data.
  • Data imputation techniques for multisensory/multimodality data.
  • Multimodality image registration.
  • Multimodality image segmentation.
  • Multimodality image feature learning.

Related AI techniques include (but are not limited to): computer vision, machine learning, natural language processing, automation and robotics, fuzzy logic systems, and evolutionary computing.

Related sensory techniques include (but are not limited to): wearable devices, electrocardiogram (ECG), electroencephalogram (EEG), clinical laboratory tests, genomics, and imaging (e.g., magnetic resonance imaging, computed tomography, X-ray, ultrasound, positron emission tomography, histology, and microscopy).

Related data modalities include (but are not limited to): text, numerical data, medical imaging data, and audio/video data.
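To make the information fusion topic concrete, here is a minimal feature-level (early) fusion sketch in Python; the two modalities, their dimensions, and the classifier are all illustrative assumptions rather than a prescribed method:

```python
# Toy feature-level (early) fusion: standardize each modality separately,
# concatenate the features, and train a single classifier on the result.
# All data below are synthetic placeholders for real multimodal inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_ecg = rng.normal(size=(200, 8))    # e.g., ECG-derived features (synthetic)
X_text = rng.normal(size=(200, 20))  # e.g., questionnaire embeddings (synthetic)
y = rng.integers(0, 2, size=200)     # synthetic binary health labels

# Per-modality standardization keeps one modality from dominating by scale
X_fused = np.hstack([StandardScaler().fit_transform(X_ecg),
                     StandardScaler().fit_transform(X_text)])
clf = LogisticRegression(max_iter=1000).fit(X_fused, y)
print(clf.score(X_fused, y))  # training accuracy on the toy data
```

Decision-level (late) fusion would instead train one model per modality and combine their outputs; both designs fall within the scope of this Special Issue.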

Dr. **n Chen
Dr. Andrew King
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • medical application
  • multisensory
  • multimodality
  • information fusion

Published Papers (3 papers)


Research


13 pages, 1215 KiB  
Article
Multimodal Early Birth Weight Prediction Using Multiple Kernel Learning
by Lisbeth Camargo-Marín, Mario Guzmán-Huerta, Omar Piña-Ramirez and Jorge Perez-Gonzalez
Sensors 2024, 24(1), 2; https://doi.org/10.3390/s24010002 - 19 Dec 2023
Viewed by 1036
Abstract
In this work, a novel multimodal learning approach for the early prediction of birth weight is presented. Fetal weight is one of the most relevant indicators in the assessment of fetal health status. The aim is to predict birth weight early, using multimodal maternal–fetal variables from the first trimester of gestation (anthropometric data, as well as metrics obtained from fetal biometry, Doppler, and maternal ultrasound). The proposed methodology starts with the optimal selection of a subset of multimodal features using an ensemble-based approach of feature selectors. The selected variables then feed a nonparametric Multiple Kernel Learning regression algorithm, in which a set of kernels is selected and weighted to maximize performance in birth weight prediction. The proposed methodology is validated and compared with other computational learning algorithms reported in the state of the art. The obtained results (an absolute error of 234 g) suggest that the proposed methodology can be useful as a tool for the early evaluation and monitoring of fetal health status through indicators such as birth weight.
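As a rough illustration of the multiple kernel learning idea summarized above (a sketch under assumed settings, not the authors' implementation), the snippet below builds one kernel per modality, combines them with fixed weights, and fits a kernel ridge regressor on the precomputed sum; in the paper, the kernels and weights are selected to maximize prediction performance:

```python
# Minimal multiple-kernel regression sketch with synthetic data.
# The kernel weights here are hypothetical fixed values; a real MKL
# pipeline would learn or search them.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel

rng = np.random.default_rng(0)
X_anthro = rng.normal(size=(100, 4))    # synthetic anthropometric features
X_biometry = rng.normal(size=(100, 6))  # synthetic fetal biometry features
y = rng.normal(loc=3200.0, scale=400.0, size=100)  # synthetic birth weights (g)

# One kernel per modality, combined with assumed weights summing to 1
K = 0.6 * rbf_kernel(X_anthro) + 0.4 * linear_kernel(X_biometry)

model = KernelRidge(kernel="precomputed", alpha=1.0)
model.fit(K, y)
preds = model.predict(K)  # in practice, use separate train/test kernel blocks
print(np.mean(np.abs(preds - y)))  # absolute error on the toy data
```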

17 pages, 3557 KiB  
Article
Deduced Respiratory Scores on COVID-19 Patients Learning from Exertion-Induced Dyspnea
by Zi**g Zhang, Jianlin Zhou, Thomas B. Conroy, Samuel Chung, Justin Choi, Patrick Chau, Daniel B. Green, Ana C. Krieger and Edwin C. Kan
Sensors 2023, 23(10), 4733; https://doi.org/10.3390/s23104733 - 13 May 2023
Cited by 1 | Viewed by 1425
Abstract
Dyspnea is one of the most common symptoms of many respiratory diseases, including COVID-19. Clinical assessment of dyspnea relies mainly on self-reporting, which contains subjective biases and is problematic for frequent inquiries. This study aims to determine whether a respiratory score in COVID-19 patients can be assessed using a wearable sensor, and whether this score can be deduced from a learning model based on physiologically induced dyspnea in healthy subjects. Noninvasive wearable respiratory sensors were employed to retrieve continuous respiratory characteristics with user comfort and convenience. Overnight respiratory waveforms were collected from 12 COVID-19 patients, and a benchmark on 13 healthy subjects with exertion-induced dyspnea was also performed for blind comparison. The learning model was built from the self-reported respiratory features of 32 healthy subjects under exertion and airway blockage. A high similarity between respiratory features in COVID-19 patients and physiologically induced dyspnea in healthy subjects was observed. Using our previous dyspnea model of healthy subjects, we deduced that COVID-19 patients have consistently and highly correlated respiratory scores in comparison with the normal breathing of healthy subjects. We also performed a continuous assessment of the patients' respiratory scores for 12–16 h. This study offers a useful system for the symptomatic evaluation of patients with active or chronic respiratory disorders, especially patient populations that refuse to cooperate or cannot communicate due to deterioration or loss of cognitive function. The proposed system can help identify dyspneic exacerbation, leading to early intervention and possible outcome improvement. Our approach can potentially be applied to other pulmonary disorders, such as asthma, emphysema, and other types of pneumonia.
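For intuition about the kind of waveform-derived features such a wearable pipeline might start from (an assumed illustration, not the authors' sensor processing), the sketch below estimates breathing rate and amplitude from a synthetic respiratory waveform via simple peak detection:

```python
# Extract basic respiratory features (rate, amplitude) from a waveform.
# The waveform, sampling rate, and thresholds are illustrative assumptions.
import numpy as np
from scipy.signal import find_peaks

fs = 50                       # assumed sampling rate in Hz
t = np.arange(0, 60, 1 / fs)  # one minute of signal
# Synthetic waveform: ~15 breaths/min (0.25 Hz) plus sensor noise
sig = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)

peaks, _ = find_peaks(sig, distance=2 * fs)       # require >= 2 s between breaths
breath_rate = len(peaks) * 60.0 / (t[-1] - t[0])  # breaths per minute
amplitude = float(np.mean(sig[peaks]) - np.mean(sig))  # mean peak height

print(f"rate ~ {breath_rate:.1f} breaths/min, amplitude ~ {amplitude:.2f}")
```

A learned respiratory score, as in the paper, would map many such features to a severity estimate rather than reporting them directly.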

Other


68 pages, 3865 KiB  
Systematic Review
Machine Learning for Multimodal Mental Health Detection: A Systematic Review of Passive Sensing Approaches
by Lin Sze Khoo, Mei Kuan Lim, Chun Yong Chong and Roisin McNaney
Sensors 2024, 24(2), 348; https://doi.org/10.3390/s24020348 - 6 Jan 2024
Cited by 1 | Viewed by 4673
Abstract
As mental health (MH) disorders become increasingly prevalent, their multifaceted symptoms and comorbidities with other conditions introduce complexity to diagnosis, posing a risk of underdiagnosis. While machine learning (ML) has been explored to mitigate these challenges, we hypothesized that multiple data modalities support more comprehensive detection and that non-intrusive collection approaches better capture natural behaviors. To understand current trends, we systematically reviewed 184 studies to assess the feature extraction, feature fusion, and ML methodologies applied to detect MH disorders from passively sensed multimodal data, including audio and video recordings, social media, smartphones, and wearable devices. Our findings revealed varying correlations of modality-specific features in individualized contexts, potentially influenced by demographics and personality. We also observed the growing adoption of neural network architectures for model-level fusion and as ML algorithms, which have demonstrated promising efficacy in handling high-dimensional features while modeling within- and cross-modality relationships. This work provides future researchers with a clear taxonomy of methodological approaches to the multimodal detection of MH disorders, to inspire future methodological advancements. The comprehensive analysis also guides and supports future researchers in making informed decisions when selecting an optimal data source that aligns with specific use cases based on the MH disorder of interest.
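To illustrate the model-level fusion pattern the review highlights (an assumed two-modality architecture, not taken from any reviewed study), the sketch below gives each modality its own neural encoder and concatenates the embeddings before a shared classification head:

```python
# Model-level (late) fusion sketch: per-modality encoders, fused embedding,
# shared classifier. Dimensions and modalities are illustrative assumptions.
import torch
import torch.nn as nn

class LateFusionNet(nn.Module):
    def __init__(self, audio_dim=40, motion_dim=12, hidden=32, n_classes=2):
        super().__init__()
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.motion_enc = nn.Sequential(nn.Linear(motion_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, n_classes)  # fuse by concatenation

    def forward(self, audio, motion):
        z = torch.cat([self.audio_enc(audio), self.motion_enc(motion)], dim=-1)
        return self.head(z)

net = LateFusionNet()
logits = net(torch.randn(8, 40), torch.randn(8, 12))  # batch of 8 samples
print(logits.shape)  # torch.Size([8, 2])
```

Cross-modality attention or other interaction terms can replace the simple concatenation when within- and cross-modality relationships must be modeled explicitly.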
