Article

PLA—A Privacy-Embedded Lightweight and Efficient Automated Breast Cancer Accurate Diagnosis Framework for the Internet of Medical Things

1 School of Computer Science and Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu 611731, China
2 School of Big Data and Artificial Intelligence, Chengdu Technological University, Chengdu 611730, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(24), 4923; https://doi.org/10.3390/electronics12244923
Submission received: 2 November 2023 / Revised: 4 December 2023 / Accepted: 4 December 2023 / Published: 7 December 2023
(This article belongs to the Section Artificial Intelligence)

Abstract

The Internet of Medical Things (IoMT) can automate breast tumor detection and classification with the potential of artificial intelligence. However, the leakage of sensitive data can cause harm to patients. To address this issue, this study proposed a breast cancer diagnosis method for the IoMT, namely "Privacy-Embedded Lightweight and Efficient Automated (PLA)", an approach that combines privacy-preserving techniques, efficiency, and automation. Firstly, our model is designed to achieve lightweight classification prediction and global information processing of breast cancer by utilizing an advanced IoMT-friendly ViT backbone. Secondly, PLA protects patients' privacy through federated learning, taking the classification task of breast cancer as the main task and introducing the texture analysis task of breast cancer images as the auxiliary task to train the model. For our PLA framework, the classification accuracy is 0.953, the best recall rate is 0.998, the F1 value is 0.969, the precision value is 0.988, and the classification time is 61.9 ms. The experimental results show that the PLA model performs better than all of the comparison methods in terms of accuracy, with an improvement of more than 0.5%. Furthermore, our proposed model demonstrates significant advantages over the comparison methods regarding time and memory.

1. Introduction

Breast cancer stands as the most prevalent form of cancer among women, with its incidence steadily increasing. Currently, treatment strategies for breast cancer primarily rely on data derived from female patients, disregarding the distinct molecular features observed in male breast cancer cases. As a result, breast cancer has emerged as a significant health concern, necessitating a treatment plan closely aligned with its pathological classification [1,2]. It is vital to recognize that breast cancer is a complex disease encompassing various subtypes, highlighting the importance of tailoring treatment approaches to the unique tumor types and characteristics exhibited by each patient.
In recent years, considerable research has focused on utilizing deep learning techniques for the automatic classification of breast cancer based on its pathology [3,4,5,6,7,8,9,10,11,12,13,14]. Nonetheless, medical institutions, as the proprietors of image data, have preferred to train models on their own data due to the stringent privacy and security regulations surrounding medical data [1]. However, this poses a challenge for institutions with limited sample sizes, as training a model that achieves the desired performance for breast cancer detection becomes arduous. Deep convolutional neural networks (DCNNs), recognized as powerful models for general image classification, have been widely employed. However, their performance tends to be sub-optimal when confronted with imbalanced datasets, as demonstrated by quantitative investigations [15,16,17,18].
Moreover, models trained on limited samples frequently exhibit limited generalization capabilities attributed to the absence of sample diversity [19,20]. Consequently, there is a critical need to develop a framework that not only ensures accurate diagnosis but also safeguards patient privacy, given the severity of breast cancer. Therefore, it becomes imperative to undertake a private and secure lightweight multi-class classification approach for breast cancer.
Furthermore, federated learning (FL) facilitates knowledge fusion by sharing model parameters among clients through federated training instead of data sharing [21,22,23,24]. This approach preserves data privacy and security while enabling collaborative learning across distributed devices. By leveraging FL, the model parameters are aggregated and updated locally at each client, ensuring that sensitive information remains decentralized. This decentralized approach protects individual data and allows for more efficient and scalable machine-learning models in the context of distributed environments. Additionally, FL mitigates data transmission and storage concerns, making it a promising solution for privacy-preserving collaborative learning in healthcare applications.
Nevertheless, the existing research landscape lacks comprehensive exploration into deep-learning-based approaches for breast cancer classification that effectively address two imperative considerations.
1.
The oversight of privacy concerns poses potential risks that may detrimentally affect patient interests.
2.
The cost hurdles associated with image processing and modeling remain inadequately resolved, culminating in sub-optimal classification outcomes.
The primary aim of this study is to accomplish a thorough and accurate classification of extensive breast cancer tissue samples by leveraging deep learning methodologies. This entails the capability to discern nuanced variations among different breast cancer subtypes. Health applications in smartphones have revolutionized the culture of self-care, enhancing individuals’ ability to monitor and manage their health. These applications have become increasingly popular, contributing to the growing trend of proactive healthcare practices [25]. Expanding upon the abovementioned limitations and motivation, this research harnesses MobileViT and FL techniques to propose a privacy-secure, lightweight, and efficient framework for accurate breast cancer diagnosis and classification. This paper delves into a comprehensive analysis of breast cancer theory, utilizing it as the foundation for method design. Specifically, the proposed approach entails image enhancement, extraction of texture features for classification, and subsequent training of the network model using FL to achieve optimal classification outcomes. Through this study, we aim to enhance the efficacy of patient-protection lightweight multi-class classification for breast cancer.
In summary, this study makes the following notable contributions:
1.
The development of PLA, a deep-learning-based approach aimed at facilitating breast cancer diagnosis within the context of the Internet of Medical Things (IoMT).
2.
The adoption of MobileViT as the foundational backbone of the framework to achieve the desired lightweight and efficient objectives.
3.
Implementing federated learning techniques to safeguard patient data privacy during the training process, specifically on institutional IoMT devices, ensuring timely diagnosis.
4.
A comprehensive evaluation of our model through multiple experiments, resulting in competitive and noteworthy outcomes.
This paper’s organization is as follows: we demonstrate relative works and theoretical analysis in Section 1. We present the backbone of our framework in Section 2. We introduce model design details in Section 3. Implementation setups are described in Section 4. In Section 5, we present the evaluation metrics and their corresponding results. Finally, we discuss the results in Section 6 and provide a conclusion in Section 7.

2. Related Work

Ding et al. [26] constructed a multi-modal multi-instance (MMMI) deep learning model to predict clinical outcomes through a classification task. However, Ding's method did not consider any protection of patient data, even though the study selected 3701 patients from the Fourth Hospital of Hebei Medical University and 190 patients from four medical centers in Hebei Province as the research subjects.
Liu et al. [27] proposed the AlexNet-BC model, augmenting its nonlinear learning capability by introducing supplementary loss functions and employing a transfer learning methodology. The network was enlarged by incorporating a fully connected layer into the original AlexNet architecture. This expanded network was pre-trained on the ImageNet dataset and further refined on a more comprehensive breast cancer dataset. Their investigations, performed on three widely used datasets (BreakHist, IDC, and UCSB), confirmed the efficacy and superiority of their approach. Their technique exhibited superior performance to existing algorithms at different magnifications, showcasing its robustness and good generalization capabilities with an accuracy of 0.984. Nevertheless, it is crucial to acknowledge that they did not address potential privacy concerns for the contributing patients.
Hu et al. [28] proposed a multi-view input and weighted multi-instance learning method to address the problems of large image size, unclear lesion features, small anomaly ratios, and imbalanced classes in breast X-ray images. Firstly, the multi-view input method was used to enhance the abnormalities of breast X-ray images while obtaining more potential features from different views, which were then weighted. The aim was to extract the most suspicious lesions from mammograms to address the issues of small abnormal proportions and imbalanced categories. Finally, the method was validated on the INbreast and MIAS public datasets to demonstrate its effectiveness. However, the multi-view input method generates multiple views of each breast X-ray image, which can be computationally expensive and can introduce unnecessary views and redundancy. To address unnecessary views, various studies have been proposed, such as a study [29] that presents a novel method, namely WM-NMF, and another study [30] that presents an innovative technique called TEMPO for handling multi-view problems efficiently.
Zhang et al. [31] proposed a method for the differential diagnosis of benign pulmonary nodules based on a deep learning multi-classification model. This method uses the ResNet18 network enhanced by the CBAM attention mechanism and a coarse-to-fine multi-classification framework. It analyzes the feature activation maps of deep learning classification methods, aiming to design a fast online pathological evaluation method. Their work achieved 0.957 accuracy, which is excellent.
Fan et al. [32] sought to employ radiomic characteristics derived from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and diffusion-weighted imaging (DWI) to jointly predict the histological grade and Ki-67 expression level of breast cancer. Their methodology included a multi-task learning framework that aimed to predict Ki-67 expression and tumor grade simultaneously by utilizing separate radiomic signals extracted from several MRI sequences. This paradigm considers the common and distinct characteristics of features obtained from diverse picture sources. They assessed their methods using multi-task elastic net regression for selecting features and a multi-task classifier for predicting classifications. This evaluation was conducted using DCE-MRI and DWI data from 144 patients diagnosed with invasive breast cancer, along with matching pathology reports. Further, the study utilized a leave-one-out cross-validation procedure to evaluate the predictive efficacy of several single-task and multi-task learning techniques. The results demonstrated that using a multi-task learning strategy significantly improved the precision in predicting Ki-67 expression and tumor grade. Nevertheless, this approach entails selecting radiomic characteristics linked to two clinical indications, simultaneously requiring the optimization of both the input and output matrices for each MRI session and the resolution of a computationally intricate convex optimization problem.

Theoretical Analysis of Breast Cancer Image Classification

Before delving into a privacy-enabled and lightweight approach for classifying breast cancer, this work thoroughly explored the theoretical aspects of analyzing breast cancer images. Researchers need a specialized understanding of pathological diagnostics due to the numerous forms and types of breast cancer pathological tissues. The mammary glands, separate from the apocrine glands in human skin, comprise lobules, ducts, and other anatomical components [31]. The leading cause of breast cancer is the excessive growth of glandular or ductal epithelial cells, known as hyperplasia [33]. Figure 1, obtained from the BreakHist dataset, depicts benign and malignant breast cancers.
Figure 1 shows the typical electron microscope structure of mature breast tissue slices, generally vertical and columnar cells arranged in a single layer, though they can occasionally reveal uniform layering. The hallmarks of benign tumors include a rise in the number of cells in the tumor, a flattening out of the cell shape, a darker nuclear stain, and clusters or sheets of structural cells. On the other hand, tumor cells that are cancerous exhibit heteromorphic characteristics, such as metastasis to other organs, a variety of shapes and sizes, elevated levels of dispersed cells with different shapes, weaker cell cohesion, and deeper nuclear staining [28,34].
Further categorization is required to accurately differentiate these diverse cell images across three levels. Expanding on this categorization, the research intends to develop a streamlined approach for classifying breast cancer into multiple categories.

3. Backbone Design of the Model

The PLA model is proposed to provide a privacy-embedded, lightweight, efficient, and automated classification method for breast cancer. To improve the model's ability to extract features and process global and local information, the backbone feature network was designed, and the improved network model was used to extract texture features of breast cancer images to improve its classification and detection performance. The specific design is as follows:

3.1. Image Enhancement Processing

This study selected samples from the BreakHist breast cancer dataset. After completing breast cancer image acquisition, the breast cancer images were enhanced to improve the classification and detection performance of the subsequent models [35]. Let the input breast cancer image to be detected be $X$ ($X \in \mathbb{R}^{H \times W \times C}$), where H, W, and C represent the height, width, and channel number of the input image, respectively. In the image enhancement process, an image mask matrix is generated to enhance the image and retain the gray-area information. The specific operation is as follows:
$\bar{X} = X \times M \qquad (1)$
In the formula, M is mainly determined by four parameters, which can be expressed as $(r, d, x, y)$: $r$ represents the proportion of gray in each detected image unit, $d$ represents the side length of the image unit, and $x$, $y$ represent the distances from the image boundary to the first complete image unit [36]. In summary, the discrete information in the image is removed, and the image preprocessing is realized.
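To make the masking operation of Equation (1) concrete, the following is a minimal sketch of a GridMask-style enhancement, assuming $M$ is a binary grid mask built from $(r, d, x, y)$; the paper does not specify the exact mask construction, so the function names and parameter defaults below are illustrative.

```python
import numpy as np

def grid_mask(height, width, r=0.6, d=32, x=0, y=0):
    """Build a binary mask M from (r, d, x, y): d is the side length of an image
    unit, r the proportion of each unit that is kept, and (x, y) the offsets from
    the image boundary to the first complete unit (illustrative construction)."""
    mask = np.ones((height, width), dtype=np.float32)
    keep = int(r * d)  # length of the kept (gray) region inside each unit
    for i in range(-1, height // d + 1):
        for j in range(-1, width // d + 1):
            top = min(max(y + i * d + keep, 0), height)
            left = min(max(x + j * d + keep, 0), width)
            bottom = min(max(y + (i + 1) * d, 0), height)
            right = min(max(x + (j + 1) * d, 0), width)
            mask[top:bottom, left:right] = 0.0  # zero out the removed region
    return mask

def enhance(image, r=0.6, d=32, x=0, y=0):
    """Apply Equation (1): X_bar = X * M, broadcast over the channel dimension."""
    h, w = image.shape[:2]
    return image * grid_mask(h, w, r, d, x, y)[..., None]

# Example on a random 224x224 RGB array standing in for a histopathology patch
x_img = np.random.rand(224, 224, 3).astype(np.float32)
x_bar = enhance(x_img)
print(x_bar.shape)  # (224, 224, 3)
```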

3.2. Image Texture Feature Extraction

After image preprocessing is completed, image features are extracted as an auxiliary task, and details are as follows:
1.
Roughness: Roughness mainly reflects the change in the image's gray level and can be expressed as:
$F_{\mathrm{roughness}} = \dfrac{1}{H \times N}\sum_{i=1}^{H}\sum_{j=1}^{N} S_{k_{\mathrm{best}}}(i,j) \qquad (2)$
where $(i, j)$ represents the position coordinates of a pixel, $S_{k_{\mathrm{best}}}$ represents the optimal size at each pixel of the breast cancer image unit, and $k$ stands for the pixel range.
2.
Contrast: The contrast reflects the distribution of pixel intensity in the image and can be expressed as:
$F_{\mathrm{contrast}} = \dfrac{\sigma}{\alpha_4^{1/4}} \qquad (3)$
In the formula, $F_{\mathrm{contrast}}$ is computed from the spread and peakedness of the gray-value distribution: $\sigma$ represents the standard deviation of the image gray values, and $\alpha_4 = \mu_4 / \sigma^4$ denotes the kurtosis (fourth-order moment statistic) of the intensity histogram, which can be used as a polarization measurement indicator (a small numerical sketch of this feature is given after this list).
3.
Orientation: Orientation primarily reflects the concentration of image texture intensity along a certain direction, and the formula is shown below:
$F_{\mathrm{dir}} = 1 - r \cdot n_p \sum_{p}^{n_p} \sum_{\phi \in \omega_p} (\phi - \phi_p)^2 H_D(\phi) \qquad (4)$
In the equation, $p$ represents a peak, $n_p$ represents the number of peaks, $\omega_p$ represents the range of peaks and valleys, $r$ represents the normalization factor, $\phi_p$ represents the position of the peak $p$, and $H_D(\phi)$ represents the corresponding direction histogram.
4.
Linearity: Linearity within an image texture refers to a structured and organized arrangement of linear elements. These elements could manifest as straight lines, contours, or patterns contributing to visual composition. The linearity equation is referred to as:
$F_{\mathrm{lin}} = \dfrac{\sum_{i=1}^{H}\sum_{j=1}^{N} P_d(i,j)\cos\left[(i-j)\frac{2\pi}{N}\right]}{\sum_{i=1}^{H}\sum_{j=1}^{N} P_d(i,j)} \qquad (5)$
The measurement of $F_{\mathrm{lin}}$ was based on a weighting scheme that assigned a weight of +1 to co-occurrences in the same direction and a weight of −1 to those in the perpendicular direction. This weighting scheme was incorporated into a mathematical function that utilized a direction co-occurrence matrix denoted by $P_d$. Therefore, $P_d(i,j)$ represented the relative frequency of neighborhoods centered at two pixels separated by a distance $d$ along the edge direction; in other words, it indicated the frequency at which one pixel had a quantized direction $i$ while the other had a quantized direction $j$.
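As a numerical illustration, the snippet below computes the contrast feature of Equation (3) for a grayscale patch; roughness, orientation, and linearity (Equations (2), (4), and (5)) can be implemented analogously. This is a sketch of the standard Tamura-style formulation, not necessarily the authors' exact implementation.

```python
import numpy as np

def contrast_feature(gray):
    """F_contrast = sigma / alpha4^(1/4), Equation (3): sigma is the standard
    deviation of the gray values and alpha4 = mu4 / sigma^4 is the kurtosis of
    the intensity histogram."""
    g = gray.astype(np.float64).ravel()
    mean = g.mean()
    variance = ((g - mean) ** 2).mean()
    if variance == 0:
        return 0.0  # flat patch: no contrast
    mu4 = ((g - mean) ** 4).mean()          # fourth central moment
    alpha4 = mu4 / variance ** 2            # kurtosis
    return np.sqrt(variance) / alpha4 ** 0.25

# Example on a synthetic 64x64 grayscale patch
patch = np.random.randint(0, 256, size=(64, 64))
print(round(contrast_feature(patch), 3))
```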
Based on the above methods, the image’s texture characteristics can be evaluated using Equations (2)–(5). Then, PLA was designed by fusing it with the Mobile ViT (MViT) module, which is competitive with traditional ViTs on various computer vision tasks. It achieves comparable accuracy to large ViTs while being significantly more efficient. This makes it a promising candidate for deployment on mobile devices. It opens up new possibilities for using vision transformers in the IoMT field. On this basis, the MViT module was introduced as the backbone feature extraction network. A multi-scale fusion network was constructed as the prediction layer [37,38,39,40,41]. In this work, MViT mainly consisted of three parts: local feature information extraction, global feature information extraction, and feature information fusion. These three parts can improve the accuracy of classification detection while reducing computational complexity [42,43]. In the model designed for this study, feature fusion and subcategory classification were used to obtain rich image feature information to achieve the best classification and recognition effect, as shown in Figure 2. Furthermore, the designed framework was emphasized using our proposed algorithm as presented in Algorithm 1.
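To illustrate the local extraction, global extraction, and fusion structure described above, the following is a simplified PyTorch sketch of a MobileViT-style block. The layer sizes (channels, token dimension, patch size, depth, heads) are illustrative assumptions and do not reproduce the authors' exact architecture.

```python
import torch
import torch.nn as nn

class MobileViTBlockSketch(nn.Module):
    """Simplified MobileViT-style block: local features from convolutions,
    global features from a transformer over unfolded patch tokens, then fusion."""
    def __init__(self, channels=64, dim=96, patch=2, depth=2, heads=4):
        super().__init__()
        self.patch = patch
        self.local = nn.Sequential(          # local feature information extraction
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(channels, dim, 1),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=2 * dim, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)
        self.proj = nn.Conv2d(dim, channels, 1)
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)  # feature fusion

    def forward(self, x):
        b, c, h, w = x.shape
        p = self.patch
        y = self.local(x)
        # unfold the feature map into (b * p * p, num_patches, dim) token sequences
        y = y.reshape(b, -1, h // p, p, w // p, p)
        y = y.permute(0, 3, 5, 2, 4, 1).reshape(b * p * p, (h // p) * (w // p), -1)
        y = self.transformer(y)              # global feature information extraction
        # fold the tokens back into a (b, dim, h, w) feature map
        y = y.reshape(b, p, p, h // p, w // p, -1)
        y = y.permute(0, 5, 3, 1, 4, 2).reshape(b, -1, h, w)
        y = self.proj(y)
        return self.fuse(torch.cat([x, y], dim=1))

# Example: a 32x32 feature map with 64 channels keeps its spatial resolution
out = MobileViTBlockSketch()(torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```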
Algorithm 1 The proposed high-level algorithm for PLA

4. Privacy-Embedded Lightweight Multi-Category Classification Design for Breast Cancer

Based on the backbone design above, our model aims to achieve private and secure, lightweight, efficient, and automated classification prediction of breast cancer. In the training stage, this research adopts federated learning, taking the multi-class classification of breast cancer as the main task and introducing the texture analysis of breast cancer images as an auxiliary task, so that the model can be trained to obtain the best classification results for breast cancer images. This study considers the cross-entropy loss function as follows:
$Loss = -\sum_{i=0}^{s^2} I_{i,j}^{\mathrm{obj}} \sum_{c \in \mathrm{classes}} \left[ \hat{P}_{ij}\log(P_{ij}) + (1-\hat{P}_{ij})\log(1-P_{ij}) \right] \qquad (6)$
In the formula, $I_{i,j}^{\mathrm{obj}}$ signifies whether the prior box within grid cell $i$ contains the target, and $P_{ij}$ denotes the category probability [44]. To ensure the model's privacy functionality, training is executed following the principles of federated learning, generating an aggregated network model. The detailed federated-learning-based training process is illustrated in Figure 3.
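To show how the main and auxiliary tasks can be combined during training, the snippet below sketches a multi-task objective: cross-entropy for subtype classification plus a regression loss on the predicted texture features. The auxiliary weight of 0.3 and the MSE form of the auxiliary term are illustrative assumptions, not values reported here.

```python
import torch
import torch.nn.functional as F

def pla_training_loss(class_logits, class_targets, texture_pred, texture_target,
                      aux_weight=0.3):
    """Main classification loss (cross-entropy) plus an auxiliary texture
    regression loss; aux_weight is an illustrative assumption."""
    main_loss = F.cross_entropy(class_logits, class_targets)   # main task
    aux_loss = F.mse_loss(texture_pred, texture_target)        # auxiliary texture task
    return main_loss + aux_weight * aux_loss

# Example: 8 images, 8 tumor subtypes, 4 texture features per image
logits = torch.randn(8, 8)
labels = torch.randint(0, 8, (8,))
tex_pred, tex_true = torch.randn(8, 4), torch.randn(8, 4)
print(pla_training_loss(logits, labels, tex_pred, tex_true).item())
```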
During the training process, N datasets were selected to train the model, update the parameters, and integrate them to obtain the aggregated network model. The whole procedure is described in Algorithm 1. The algorithm outlines the FL processing using a lightweight MobileViT model. Initially, local training is performed on individual models using their sub-datasets for several epochs. Subsequently, the algorithm aggregates the parameters from all models to create global results. Each model is then updated with global parameters, which iterates for a specified number of iterations. Finally, the updated models are aggregated, representing the collaborative knowledge learned from all local datasets and models.
The process is as follows:
1.
Initialize the model: First, the model parameters are initialized on the central server, and the initial parameters are sent to the devices participating in the training.
$MobileViT = \mathrm{initialize\_model}()$
where initialize_model() represents the initialization function [45,46].
2.
Distribution of the models and the data: The initialized model is sent to each corresponding device, while the corresponding local dataset is distributed to them. The devices can be different devices or machines, such as smartphones, edge devices, etc.
3.
Local training: The model is trained on each device using its local data to update its parameters. This process is performed in parallel on the devices without transferring the raw data to the central server. The parameter update formula is:
$Model_{n\_new} = \mathrm{local\_train}(Model_n, data_n)$
where local_train() represents the local training function, $data_n$ the local dataset of client $n$, and $Model_n$ the MobileViT network model on that client.
4.
Parameter aggregation: After the local training is completed, the device sends the respective model parameters to the central server. The central server aggregates these parameters based on pre-defined aggregation strategies (such as the weighted average) to obtain a new global model parameter with the formula:
$GlobalParams = \mathrm{aggregate\_params}(Model_{n\_new})$
where aggregate_params() represents the parameter aggregation function.
5.
Update model: The central server sends the aggregated global model parameters back to the device to update their local models.
6.
Final aggregation: Determine whether convergence conditions are met. If not, repeat steps 3 to 5. If so, the final model parameters can be aggregated using the central server, and the aggregated network model after integration can be expressed as:
$AggrModel = \mathrm{aggregate\_models}(Model_{1\_updated}, Model_{2\_updated}, \ldots)$
where aggregate_models() represents the model integration function applied to the updated client model parameters. Through the above steps, the model training is completed and the optimal classification results are obtained; a minimal sketch of this federated training loop is given below.
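The following is a minimal sketch of steps 1 to 6 as a FedAvg-style loop in PyTorch. The equal-weight parameter average and the optimizer settings are assumptions for illustration; the pre-defined aggregation strategy mentioned above (e.g., a weighted average) can be substituted at the marked aggregation step.

```python
import copy
import torch
import torch.nn.functional as F

def federated_training(global_model, client_loaders, rounds=10, local_epochs=1, lr=1e-3):
    """Steps 1-6: broadcast the global model, train locally on each client's
    private data, aggregate the parameters on the server, and repeat."""
    for _ in range(rounds):
        client_states = []
        for loader in client_loaders:                     # steps 2-3: local training
            local_model = copy.deepcopy(global_model)
            optimizer = torch.optim.Adam(local_model.parameters(), lr=lr)
            local_model.train()
            for _ in range(local_epochs):
                for images, labels in loader:
                    optimizer.zero_grad()
                    loss = F.cross_entropy(local_model(images), labels)
                    loss.backward()
                    optimizer.step()
            client_states.append(local_model.state_dict())
        global_state = global_model.state_dict()
        for key in global_state:                          # step 4: parameter aggregation
            stacked = torch.stack([s[key].float() for s in client_states])
            global_state[key] = stacked.mean(dim=0).to(global_state[key].dtype)
        global_model.load_state_dict(global_state)        # step 5: update the model
    return global_model                                   # step 6: final aggregated model
```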

5. Implementation

5.1. System Implementation Details

The system was implemented on a Windows 10 desktop PC for this investigation. The computer was equipped with an 11th-generation Intel 8-core CPU and an NVIDIA GeForce RTX 3080 Ti GPU with 12 GB of memory.
Based on this, comparative experiments were carried out against a centralized classification model trained on data owned by a single client and against a single MobileViT network model to verify the performance of the method, and three further comparison methods were selected to compare classification efficiency.

5.2. Dataset

The BreakHist database consists of 7909 breast histopathology images collected from 82 individuals. The dataset classifies the images into two main categories: benign and malignant tumors. Each primary category is subdivided into four subgroups, differentiating the tumor's morphology when observed under a microscope. Pathologists collect the data by digitizing and annotating slides using whole-slide image (WSI) scanners integrated into laboratory information systems. During this procedure, samples were obtained at four distinct levels of magnification: 40×, 100×, 200×, and 400×, as shown in Table 1 below.

5.3. Metrics

The assessment metrics are computed from a confusion matrix, which records the predicted true positives, true negatives, false positives, and false negatives. These metrics comprise accuracy, precision, recall, F1 score, and balanced accuracy. Accuracy evaluates the percentage of correctly classified cases, both positive and negative. Precision quantifies the proportion of predicted positive cases that are truly positive. The recall metric quantifies the model's ability to identify positive outcomes accurately, while specificity quantifies its ability to identify negative cases accurately. The F1 score captures the balance between recall and precision. When assessing the effectiveness of a model, balanced accuracy, which is the mean of recall and specificity, is a crucial metric, particularly when there is an unequal distribution of classes.
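For clarity, the following shows how these metrics are derived from the confusion-matrix counts; the example counts are illustrative and are not values from this study.

```python
def classification_metrics(tp, tn, fp, fn):
    """Metrics derived from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                      # sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    balanced_accuracy = (recall + specificity) / 2
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1, "balanced_accuracy": balanced_accuracy}

# Illustrative counts only
print(classification_metrics(tp=540, tn=410, fp=7, fn=3))
```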

6. Evaluations

A comparison between several models, including ours, is presented in Table 2 using accuracy metrics from the literature. Among the compared models, the highest previously reported accuracy on the test dataset was 0.947. Nevertheless, our model performs better by attaining the maximum accuracy of 0.953, thus showcasing its exceptional performance.
To assess the performance of our model, we followed a conventional train/test split approach: 70% of the accessible data were allocated for training purposes, while 20% were set aside for testing. By employing this approach, an unbiased evaluation of the model’s capability for generalization is guaranteed.
In addition, we performed an additional experiment employing a hold-out validation strategy to verify the robustness of our model. A subset comprising 10% of the total available data was chosen and employed as the validation set. Utilizing a validation set distinct from the training and testing data allowed us to evaluate the model’s performance on samples that had not yet been encountered.
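For reproducibility, a minimal sketch of this 70/20/10 train/test/validation split is shown below. It uses a simple random shuffle; a per-class stratified split, which the paper does not specify, would better handle the benign/malignant imbalance discussed later.

```python
import random

def split_indices(num_samples, train=0.7, test=0.2, val=0.1, seed=42):
    """Return index lists for a 70/20/10 train/test/validation split."""
    assert abs(train + test + val - 1.0) < 1e-9
    indices = list(range(num_samples))
    random.Random(seed).shuffle(indices)
    n_train, n_test = int(train * num_samples), int(test * num_samples)
    return (indices[:n_train],
            indices[n_train:n_train + n_test],
            indices[n_train + n_test:])

train_idx, test_idx, val_idx = split_indices(7909)
print(len(train_idx), len(test_idx), len(val_idx))  # 5536 1581 792
```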
Our PLA model is ready for integration into the systems of the participating client institutions within the IoMT framework. It provides medical professionals with significant computer-assisted support in their diagnostic procedures. The model specifically examines digitized biopsies uploaded to patients' medical records. This real-time analysis enhances the confidence of pathologists and other medical experts in tumor classification, leading to more accurate and reliable diagnoses.
By leveraging the capabilities of our PLA model within the IoMT systems, medical professionals can make more informed decisions and provide improved patient care. The integration of our model has the potential to significantly enhance the efficiency and accuracy of tumor classification, contributing to better treatment outcomes and patient management.
Three diverse experiments were examined to carry out privacy-embedded lightweight multi-class classification of breast cancer, and the classification accuracy of the three methods was evaluated. The comparative results are shown in Figure 4.
The results show that the classification accuracy of all three approaches decreases with an increasing sample size (see Figure 4). Nevertheless, our suggested model reliably achieves classification accuracies over 0.953 regardless of the sample size. The comparison approach, on the other hand, reliably obtains an accuracy rate lower than 0.935. These noteworthy variations highlight how our suggested approach outperforms the state-of-the-art in private-security lightweight multi-category breast cancer categorization. The classification recall rates of the three approaches are compared in Figure 5 for a more in-depth look at performance. With an average recall of 0.968, the comparison approach falls short compared to the suggested method’s 0.998 recall record. These results provide more evidence that our proposed approach works in this setting. In addition, Table 3 displays the F1 value, computed by a thorough study incorporating recall rates and classification accuracy.
As shown in Table 3, using the model designed in this paper, the F1 values were 0.969 on average, which is better than the comparison method, demonstrating that the proposed classification method has a high recall and classification accuracy.
It is important to acknowledge that the BreakHist dataset exhibits a severe data imbalance issue, with the number of benign images being approximately 2.2 times greater than that of malignant images. This significant class imbalance can lead to a classification model that is overly accurate for benign images but performs poorly on malignant images.
In our analysis, we observed that the result for the EXP-A setup, which included 4000 samples, performed better than all other setups. However, it is essential to note that this improvement might have occurred by coincidence, as the splitting strategy handled the data imbalance problem more effectively in this particular setup.
In contrast, the EXP-B and EXP-C setups, which utilized all available samples, consistently achieved the best results across all evaluations. Therefore, we have decided to consider the results obtained using all samples as the best result for the EXP-A setup. This ensures that the evaluation is more robust and representative of the model’s performance across the dataset, considering the inherent data imbalance challenge.
Finally, the three methods were applied for classification and their classification efficiency was compared. The comparison results are shown in Table 4.
Table 4 demonstrates that using our suggested approach for classification results in an average processing time of under 61.9 ms, which is significantly faster than the comparative techniques, whose average processing times exceed 70.3 ms. The design model outperforms ResNet18, VGG11, and Inception v3 in terms of both time and memory efficiency, being faster by 17.8–23.3%, 25.2–30.7%, and 29.2–33.5%, respectively, while requiring 79.1–81.6%, 92.6–94.4%, and 94.4–96.2% less memory. The significant disparity highlights the better efficacy in categorization and the advantageous real-world applicability of our approach in image classification.
Figure 6 showcases the comprehensive results for the PLA model's overall performance, which remains satisfactory. However, it is worth mentioning that the results of the 1000- and 2000-sample experiments were not as good as we anticipated, as they plateau in the early stage. Our analysis suggests this is likely due to the smaller number of samples used in those experiments. Throughout all the experiments, the average train loss of our model is close to zero. The best validation loss is 0.23, and the average test loss is lower than 0.24 across all experiments.
Figure 7 demonstrates that the loss for all experiments rises after a certain number of epochs, indicating that the model is overfitting to the training data. Compared to our PLA model, the training loss is much higher, and the validation and test losses are near 0.8, which is numerically much worse than our PLA framework. From our perspective, for medical care applications, it is unacceptable for a model's accuracy to fluctuate drastically due to overfitting.
Lastly, in Figure 8, the experiments for the original mobile vision transformer model performed similarly to the centralized PLA model. While these findings indicate a comparable performance, the observed loss plots suggest room for further improvement to render it more suitable for deployment in real medical settings.

7. Discussion

Though our method significantly improves accuracy while reducing unnecessary computational overhead, it still has some disadvantages compared to traditional centralized training. One disadvantage is the possible performance degradation due to incomplete data. In FL, each client only has access to local data, which may not represent the entire dataset. This limited data availability can lead to reduced model performance, as the model may not be able to generalize effectively without a complete dataset.
Another disadvantage is the presence of client heterogeneity. Different clients may differ in computing power, network connections, and data quality in FL. This heterogeneity can create challenges in coordinating the training process and aggregating model updates. Clients with limited resources or unreliable connections may not contribute effectively to training, potentially affecting overall model performance.
Finally, the proposed approach has some limitations that must be considered. One of them is that the model is designed to be lightweight and compact. While this design choice has advantages regarding efficiency and resource usage, there may also be performance limitations when processing large datasets. Theoretical assumptions about how small models will perform on large datasets may not provide explicit answers, because there is uncertainty about how small models will generalize and handle more complex data. Furthermore, the pursuit of lightweight models may constrain the capacity and representational power of the model, limiting its ability to capture the complex patterns and nuances in the data that can be critical for specific complex tasks or datasets.
It is essential to recognize these limitations and carefully evaluate the trade-offs between model size, performance, and the specific requirements of a particular application. Further empirical studies and experiments may be needed to evaluate the performance of the proposed method on large datasets and to understand its limitations more precisely.

8. Conclusions

This study used the BreakHist dataset for analysis to propose a privacy-embedded, lightweight, and efficient framework. The efficacy of our approach lies in its incorporation of theoretical knowledge on breast cancer, utilizing image enhancement processing and the PLA model to extract crucial textural characteristics. Image enhancement improved the quality of the image by highlighting the characteristics of breast cancer lesions. Additionally, the proposed framework uses federated learning principles and is intended for deployment in IoMT applications. FL revolutionizes the training of PLA models by enabling collaborative learning without compromising data privacy. It also provides a secure and confidential framework for training models on sensitive or personal data by keeping data decentralized and utilizing privacy-preserving techniques. Moreover, each client only needs to use local data for model training and then transmit model updates to the central server for aggregation. This avoids large data transmission and reduces communication overhead and network latency during training. In addition, it allows model training to be performed in parallel on multiple clients to minimize the training time. Each client can independently perform model updates and parameter optimization without waiting for other clients to complete training. This parallel training method significantly improves training efficiency and reduces the overall training time. Therefore, it substantially reduces the model training time by allocating training tasks to multiple clients for parallel processing while reducing data transmission and communication overhead and utilizing the similar computing capabilities of multiple clients. Through comprehensive experiments, we evaluated the design model, centralized model, and single mobile ViT network model from various perspectives, and the results demonstrated the proposed method's feasibility, efficiency, and superiority.

Author Contributions

Conceptualization, C.Y., X.Z. and A.A.; writing—original draft preparation, C.Y.; writing—review and editing, A.A. and M.H.T.; supervision, R.X. and M.H.; project administration, M.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data are publicly available at https://www.kaggle.com/datasets/ambarish/BreakHist (accessed on 5 October 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Guo, M.H.; Liu, Z.N.; Mu, T.J.; Hu, S.M. Beyond self-attention: External attention using two linear layers for visual tasks. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 5436–5447. [Google Scholar] [CrossRef]
  2. Kayikci, S.; Khoshgoftaar, T.M. Breast cancer prediction using gated attentive multimodal deep learning. J. Big Data 2023, 10, 1–11. [Google Scholar] [CrossRef]
  3. Hamedani-KarAzmoudehFar, F.; Tavakkoli-Moghaddam, R.; Tajally, A.R.; Aria, S.S. Breast cancer classification by a new approach to assessing deep neural network-based uncertainty quantification methods. Biomed. Signal Process. Control. 2023, 79, 104057. [Google Scholar] [CrossRef]
  4. Chakravarthy, S.S.; Bharanidharan, N.; Rajaguru, H. Deep Learning-Based Metaheuristic Weighted K-Nearest Neighbor Algorithm for the Severity Classification of Breast Cancer. IRBM 2023, 44, 100749. [Google Scholar] [CrossRef]
  5. Zhang, Y.; Liu, Y.L.; Nie, K.; Zhou, J.; Chen, Z.; Chen, J.H.; Wang, X.; Kim, B.; Parajuli, R.; Mehta, R.S.; et al. Deep learning-based automatic diagnosis of breast cancer on MRI using mask R-CNN for detection followed by ResNet50 for classification. Acad. Radiol. 2023, 30, S161–S171. [Google Scholar] [CrossRef]
  6. Tekin, E.; Yazıcı, Ç.; Kusetogullari, H.; Tokat, F.; Yavariabdi, A.; Iheme, L.O.; Çayır, S.; Bozaba, E.; Solmaz, G.; Darbaz, B.; et al. Tubule-U-Net: A novel dataset and deep learning-based tubule segmentation framework in whole slide images of breast cancer. Sci. Rep. 2023, 13, 128. [Google Scholar] [CrossRef]
  7. Uddin, K.M.M.; Biswas, N.; Rikta, S.T.; Dey, S.K. Machine learning-based diagnosis of breast cancer utilizing feature optimization technique. Comput. Methods Programs Biomed. Update 2023, 3, 100098. [Google Scholar] [CrossRef]
  8. Ahmed, A.; Xi, R.; Hou, M.; Shah, S.A.; Hameed, S. Harnessing Big Data Analytics for Healthcare: A Comprehensive Review of Frameworks, Implications, Applications, and Impacts. IEEE Access 2023, 11, 112891–112928. [Google Scholar] [CrossRef]
  9. Wang, X.; Ahmad, I.; Javeed, D.; Zaidi, S.A.; Alotaibi, F.M.; Ghoneim, M.E.; Daradkeh, Y.I.; Asghar, J.; Eldin, E.T. Intelligent Hybrid Deep Learning Model for Breast Cancer Detection. Electronics 2022, 11, 2767. [Google Scholar] [CrossRef]
  10. Alzubaidi, L.; Al-Shamma, O.; Fadhel, M.A.; Farhan, L.; Zhang, J.; Duan, Y. Optimizing the performance of breast cancer classification by employing the same domain transfer learning from hybrid deep convolutional neural network model. Electronics 2020, 9, 445. [Google Scholar] [CrossRef]
  11. Aldhyani, T.H.; Khan, M.A.; Almaiah, M.A.; Alnazzawi, N.; Hwaitat, A.K.A.; Elhag, A.; Shehab, R.T.; Alshebami, A.S. A Secure internet of medical things Framework for Breast Cancer Detection in Sustainable Smart Cities. Electronics 2023, 12, 858. [Google Scholar] [CrossRef]
  12. Gui, H.; Su, T.; Pang, Z.; Jiao, H.; Xiong, L.; Jiang, X.; Li, L.; Wang, Z. Diagnosis of Breast Cancer with Strongly Supervised Deep Learning Neural Network. Electronics 2022, 11, 3003. [Google Scholar] [CrossRef]
  13. Li, J.; Shi, J.; Su, H.; Gao, L. Breast cancer histopathological image recognition based on pyramid gray level co-occurrence matrix and incremental broad learning. Electronics 2022, 11, 2322. [Google Scholar] [CrossRef]
  14. Liang, H.; Li, J.; Wu, H.; Li, L.; Zhou, X.; Jiang, X. Mammographic Classification of Breast Cancer Microcalcifications through Extreme Gradient Boosting. Electronics 2022, 11, 2435. [Google Scholar] [CrossRef]
  15. Masko, D.; Hensman, P. The Impact of Imbalanced Training Data for Convolutional Neural Networks, 2015; Degree Project in Computer Science; KTH: Stockholm, Sweden, 2015; Available online: https://www.kth.se/social/files/588617ebf2765401cfcc478c/PHensmanDMasko_dkand15.pdf (accessed on 15 October 2023).
  16. Fu, Y.; Du, Y.; Cao, Z.; Li, Q.; Xiang, W. A deep learning model for network intrusion detection with imbalanced data. Electronics 2022, 11, 898. [Google Scholar] [CrossRef]
  17. Hassanat, A.B.; Tarawneh, A.S.; Abed, S.S.; Altarawneh, G.A.; Alrashidi, M.; Alghamdi, M. Rdpvr: Random data partitioning with voting rule for machine learning from class-imbalanced datasets. Electronics 2022, 11, 228. [Google Scholar] [CrossRef]
  18. Yang, H.; Xu, J.; Xiao, Y.; Hu, L. SPE-ACGAN: A Resampling Approach for Class Imbalance Problem in Network Intrusion Detection Systems. Electronics 2023, 12, 3323. [Google Scholar] [CrossRef]
  19. Esteva, A.; Robicquet, A.; Ramsundar, B.; Kuleshov, V.; DePristo, M.; Chou, K.; Cui, C.; Corrado, G.; Thrun, S.; Dean, J. A guide to deep learning in healthcare. Nat. Med. 2019, 25, 24–29. [Google Scholar] [CrossRef]
  20. Zhou, X.; Xu, X.; Liang, W.; Zeng, Z.; Yan, Z. Deep-learning-enhanced multitarget detection for end–edge–cloud surveillance in smart IoT. IEEE Internet Things J. 2021, 8, 12588–12596. [Google Scholar] [CrossRef]
  21. Li, L.; **e, N.; Yuan, S. A Federated Learning Framework for Breast Cancer Histopathological Image Classification. Electronics 2022, 11, 3767. [Google Scholar] [CrossRef]
  22. Agbley, B.L.Y.; Li, J.P.; Haq, A.U.; Bankas, E.K.; Mawuli, C.B.; Ahmad, S.; Khan, S.; Khan, A.R. Federated Fusion of Magnified Histopathological Images for Breast Tumor Classification in the Internet of Medical Things. IEEE J. Biomed. Health Inform. 2023, 1–12. [Google Scholar] [CrossRef]
  23. Shaheen, M.; Farooq, M.S.; Umer, T.; Kim, B.S. Applications of federated learning; Taxonomy, challenges, and research trends. Electronics 2022, 11, 670. [Google Scholar] [CrossRef]
  24. Kandati, D.R.; Gadekallu, T.R. Federated learning approach for early detection of chest lesion caused by COVID-19 infection using particle swarm optimization. Electronics 2023, 12, 710. [Google Scholar] [CrossRef]
  25. Al Husaini, M.A.S.; Hadi Habaebi, M.; Gunawan, T.S.; Islam, M.R. Self-detection of early breast cancer application with infrared camera and deep learning. Electronics 2021, 10, 2538. [Google Scholar] [CrossRef]
  26. Ding, Y.; Yang, F.; Han, M.; Li, C.; Wang, Y.; Xu, X.; Zhao, M.; Zhao, M.; Yue, M.; Deng, H.; et al. Multi-center study on predicting breast cancer lymph node status from core needle biopsy specimens using multi-modal and multi-instance deep learning. NPJ Breast Cancer 2023, 9, 58. [Google Scholar] [CrossRef]
  27. Liu, M.; Hu, L.; Tang, Y.; Wang, C.; He, Y.; Zeng, C.; Lin, K.; He, Z.; Huo, W. A deep learning method for breast cancer classification in the pathology images. IEEE J. Biomed. Health Inform. 2022, 26, 5025–5032. [Google Scholar] [CrossRef]
  28. Hu, T.; Zhang, L.; **e, L.; Yi, Z. A multi-instance networks with multiple views for classification of mammograms. Neurocomputing 2021, 443, 320–328. [Google Scholar] [CrossRef]
  29. Liu, S.S.; Lin, L. Adaptive Weighted Multi-View Clustering. In Proceedings of the Conference on Health, Inference, and Learning. PMLR, Cambridge, MA, USA, 22–24 June 2023; pp. 19–36. [Google Scholar]
  30. Choudhury, R.; Kitani, K.M.; Jeni, L.A. TEMPO: Efficient multi-view pose estimation, tracking, and forecasting. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 14750–14760. [Google Scholar]
  31. Zhang, Y.; Feng, W.; Wu, Z.; Li, W.; Tao, L.; Liu, X.; Zhang, F.; Gao, Y.; Huang, J.; Guo, X. Deep-Learning Model of ResNet Combined with CBAM for Malignant–Benign Pulmonary Nodules Classification on Computed Tomography Images. Medicina 2023, 59, 1088. [Google Scholar] [CrossRef]
  32. Fan, M.; Yuan, W.; Zhao, W.; Xu, M.; Wang, S.; Gao, X.; Li, L. Joint prediction of breast cancer histological grade and Ki-67 expression level based on DCE-MRI and DWI radiomics. IEEE J. Biomed. Health Inform. 2019, 24, 1632–1642. [Google Scholar] [CrossRef]
  33. Johnson, M.; Stanczak, B.; Winblad, O.D.; Amin, A.L. Breast MRI assists in decision-making for surgical excision of atypical ductal hyperplasia. Surgery 2023, 173, 612–618. [Google Scholar] [CrossRef]
  34. Burçak, K.C.; Baykan, Ö.K.; Uğuz, H. A new deep convolutional neural network model for classifying breast cancer histopathological images and the hyperparameter optimisation of the proposed model. J. Supercomput. 2021, 77, 973–989. [Google Scholar] [CrossRef]
  35. Li, J.; Mi, W.; Guo, Y.; Ren, X.; Fu, H.; Zhang, T.; Zou, H.; Liang, Z. Artificial intelligence for histological subtype classification of breast cancer: Combining multi-scale feature maps and the recurrent attention model. Histopathology 2022, 80, 836–846. [Google Scholar] [CrossRef]
  36. Rezaei, Z. A review on image-based approaches for breast cancer detection, segmentation, and classification. Expert Syst. Appl. 2021, 182, 115204. [Google Scholar] [CrossRef]
  37. Elmannai, H.; Hamdi, M.; AlGarni, A. Deep learning models combining for breast cancer histopathology image classification. Int. J. Comput. Intell. Syst. 2021, 14, 1003. [Google Scholar] [CrossRef]
  38. Singh, L.K.; Khanna, M.; Singh, R. Artificial intelligence based medical decision support system for early and accurate breast cancer prediction. Adv. Eng. Softw. 2023, 175, 103338. [Google Scholar] [CrossRef]
  39. Yang, J.; Chen, H.; Zhao, Y.; Yang, F.; Zhang, Y.; He, L.; Yao, J. Remix: A general and efficient framework for multiple instance learning based whole slide image classification. In Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2022; pp. 35–45. [Google Scholar]
  40. Sudharshan, P.; Petitjean, C.; Spanhol, F.; Oliveira, L.E.; Heutte, L.; Honeine, P. Multiple instance learning for histopathological breast cancer image classification. Expert Syst. Appl. 2019, 117, 103–111. [Google Scholar] [CrossRef]
  41. Khadim, E.U.; Shah, S.A.; Wagan, R.A. Evaluation of activation functions in CNN model for detection of malaria parasite using blood smear images. In Proceedings of the 2021 International Conference on Innovative Computing (ICIC), Lahore, Pakistan, 9–10 November 2021; pp. 1–6. [Google Scholar]
  42. Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017, 542, 115–118. [Google Scholar] [CrossRef]
  43. Yan, R.; Ren, F.; Wang, Z.; Wang, L.; Zhang, T.; Liu, Y.; Rao, X.; Zheng, C.; Zhang, F. Breast cancer histopathological image classification using a hybrid deep neural network. Methods 2020, 173, 52–60. [Google Scholar] [CrossRef]
  44. Dash, S.; Parida, P.; Mohanty, J.R. Illumination robust deep convolutional neural network for medical image classification. Soft Comput. 2023, 1–13. [Google Scholar] [CrossRef]
  45. Ramadan, S.Z. Methods used in computer-aided diagnosis for breast cancer detection using mammograms: A review. J. Healthc. Eng. 2020, 2020. [Google Scholar] [CrossRef]
  46. Basodi, S.; Ji, C.; Zhang, H.; Pan, Y. Gradient amplification: An efficient way to train deep neural networks. Big Data Min. Anal. 2020, 3, 196–207. [Google Scholar] [CrossRef]
  47. Alqahtani, Y.; Mandawkar, U.; Sharma, A.; Hasan, M.N.S.; Kulkarni, M.H.; Sugumar, R. Breast Cancer Pathological Image Classification Based on the Multiscale CNN Squeeze Model. Comput. Intell. Neurosci. 2022, 2022, 7075408. [Google Scholar] [CrossRef] [PubMed]
  48. Zerouaoui, H.; Idri, A. Deep hybrid architectures for binary classification of medical breast cancer images. Biomed. Signal Process. Control 2022, 71, 103226. [Google Scholar] [CrossRef]
  49. Kumar, S.; Sharma, S. Sub-classification of invasive and non-invasive cancer from magnification independent histopathological images using hybrid neural networks. Evol. Intell. 2022, 15, 1531–1543. [Google Scholar] [CrossRef]
  50. Agarwal, P.; Yadav, A.; Mathur, P. Breast cancer prediction on breakhis dataset using deep cnn and transfer learning model. In Data Engineering for Smart Systems: Proceedings of SSIC 2021; Springer: Berlin/Heidelberg, Germany, 2022; pp. 77–88. [Google Scholar]
  51. Djouima, H.; Zitouni, A.; Megherbi, A.C.; Sbaa, S. Classification of Breast Cancer Histopathological Images using DensNet201. In Proceedings of the 2022 7th International Conference on Image and Signal Processing and their Applications (ISPA), Mostaganem, Algeria, 8–9 May 2022; pp. 1–6. [Google Scholar]
  52. Singh, S.; Kumar, R. Breast cancer detection from histopathology images with deep inception and residual blocks. Multimed. Tools Appl. 2022, 81, 5849–5865. [Google Scholar] [CrossRef]
  53. Jakhar, A.K.; Gupta, A.; Singh, M. SELF: A stacked-based ensemble learning framework for breast cancer classification. Evol. Intell. 2023, 1–16. [Google Scholar] [CrossRef]
  54. Juhong, A.; Li, B.; Yao, C.Y.; Yang, C.W.; Agnew, D.W.; Lei, Y.L.; Huang, X.; Piyawattanametha, W.; Qiu, Z. Super-resolution and segmentation deep learning for breast cancer histopathology image analysis. Biomed. Opt. Express 2023, 14, 18–36. [Google Scholar] [CrossRef]
  55. Chhipa, P.C.; Upadhyay, R.; Pihlgren, G.G.; Saini, R.; Uchida, S.; Liwicki, M. Magnification prior: A self-supervised method for learning representations on breast cancer histopathological images. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 2717–2727. [Google Scholar]
Figure 1. Demonstration of benign and malignant breast tumors in BreakHist.
Figure 2. Proposed PLA model architecture diagram: a comprehensive integration of transformers and convolutions for local and global feature representation, enhancing automated breast cancer diagnosis in IoMT systems.
Figure 3. Federated learning and training process. The dataset is divided into five parts and handed over to five clients for processing. Each client will not receive complete patient information, thereby protecting privacy.
Figure 4. Comparison of classification accuracy among three methods [A = design model, B = centralized model owned by a single client, C = single mobile ViT network model].
Figure 5. Comparison of recall rates for three classification methods [A = design model, B = centralized model owned by a single client, C = single mobile ViT network model].
Figure 6. EXP-A: Various Loss Plots with different data splits for showing robustness and effectiveness of our proposed PLA Framework.
Figure 7. EXP-B: various loss plots with different data splits for PLA framework but centralized.
Figure 8. EXP-C: various loss plots with different data splits for mobile vision transformer.
Table 1. BreaKHist dataset image distribution.
| Magnification | Benign | Malignant | Total |
|---|---|---|---|
| 40× | 598 | 1398 | 1996 |
| 100× | 642 | 1437 | 2079 |
| 200× | 594 | 1418 | 2012 |
| 400× | 590 | 1232 | 1822 |
| Total | 2424 | 5485 | 7909 |
Table 2. Summary of different models and their performance.
| Model | Image Size | Feature Extractor | Train/Test/Validation | Classification Accuracy on 40× |
|---|---|---|---|---|
| Sudharshan et al. [40] | Rnd (64 × 64) | PFTAS | 70%/30%/0% | 0.878 |
| Alqahtani et al. [47] | Rnd (224 × 224) | ResNet50 | 85%/15%/0% | 0.888 |
| Zerouaoui et al. [48] | Rnd (256 × 256) | Multi-model | Unknown | 0.926 |
| Kumar et al. [49] | Rnd (299 × 299) | ResNet152 | Unknown | 0.824 |
| Agarwal et al. [50] | WSI | ResNet50 | Unknown | 0.947 |
| Djouima et al. [51] | Rnd (384 × 384) | DensNet201 | 70%/20%/10% | 0.920 |
| Singh et al. [52] | Rnd (224 × 224) | Hybrid model | 65%/35%/0% | 0.808 |
| Jakhar et al. [53] | WSI | Random forest, Extra tree | 80%/20%/0% | 0.934 |
| Juhong et al. [54] | WSI | SRGAN-ResNeXt, Inception U-net | Unknown | 0.947 |
| Chhipa et al. [55] | WSI | ResNet152 + MVPNet | 64%/20%/16% | 0.907 |
| This model | WSI | PLA | 70%/20%/10% | 0.953 |
Table 3. Comparison of accuracy, precision, recall, and F1 values for different sample sizes [EXP-A = design model, EXP-B = centralized model, EXP-C = single mobile ViT network model].
| EXP Type | Metric | 7909 samples | 4000 samples | 2000 samples | 1000 samples |
|---|---|---|---|---|---|
| EXP-A | ACC | 0.953 | 0.965 | 0.893 | 0.610 |
| EXP-A | Precision | 0.988 | 0.958 | 0.930 | 0.613 |
| EXP-A | Recall | 0.994 | 0.998 | 0.941 | 0.915 |
| EXP-A | F1 | 0.925 | 0.977 | 0.932 | 0.734 |
| EXP-B | ACC | 0.941 | 0.861 | 0.828 | 0.613 |
| EXP-B | Precision | 0.979 | 0.919 | 0.936 | 0.956 |
| EXP-B | Recall | 0.988 | 0.930 | 0.887 | 0.881 |
| EXP-B | F1 | 0.983 | 0.918 | 0.960 | 0.968 |
| EXP-C | ACC | 0.877 | 0.845 | 0.610 | 0.610 |
| EXP-C | Precision | 0.881 | 0.883 | 0.613 | 0.613 |
| EXP-C | Recall | 0.974 | 0.949 | 0.915 | 0.915 |
| EXP-C | F1 | 0.934 | 0.915 | 0.734 | 0.734 |
Table 4. Comparison of classification consumption time and memory usage for different sample sizes and models.
| Particulars (Time/Memory) | 1000 samples | 2000 samples | 4000 samples | 7909 samples |
|---|---|---|---|---|
| Design Model Time (ms) | 45.2 | 52.1 | 57.8 | 59.2 |
| Design Model Memory (MB) | 9.42 | 9.42 | 9.42 | 9.42 |
| ResNet18 Time (ms) | 56.4 | 63.6 | 70.5 | 72.6 |
| ResNet18 Memory (MB) | 48.6 | 48.6 | 48.6 | 48.6 |
| VGG11 Time (ms) | 60.6 | 67.6 | 77.8 | 79.1 |
| VGG11 Memory (MB) | 138 | 138 | 138 | 138 |
| Inception v3 Time (ms) | 64.4 | 71.6 | 78.5 | 80.1 |
| Inception v3 Memory (MB) | 238 | 238 | 238 | 238 |

Share and Cite

MDPI and ACS Style

Yan, C.; Zeng, X.; Xi, R.; Ahmed, A.; Hou, M.; Tunio, M.H. PLA—A Privacy-Embedded Lightweight and Efficient Automated Breast Cancer Accurate Diagnosis Framework for the Internet of Medical Things. Electronics 2023, 12, 4923. https://doi.org/10.3390/electronics12244923
