A Holistic Review of Machine Learning Adversarial Attacks in IoT Networks
Abstract
1. Introduction
- Revising and redefining the adversarial attack taxonomy for ML-based IDS, MDS, and DIS in the IoT context.
- Proposing a novel two-dimensional classification of adversarial attack generation methods.
- Proposing a novel two-dimensional classification of adversarial defense mechanisms.
- Providing intriguing insights and technical specifics on state-of-the-art adversarial attack methods and defense mechanisms.
- Conducting a holistic review of the recent literature on adversarial attacks within three prominent IoT security systems: IDSs, MDSs, and DISs.
2. Background
2.1. Security and Privacy Overview
2.2. Internet of Things Overview
- Perception layer: The bottom layer of any IoT framework involves “things”, or endpoint objects, that serve as the bridge between the physical and digital worlds. The perception or sensing layer refers to the physical layer, encompassing sensors and actuators capable of gathering information from the real environment and transmitting it through wireless or wired connections. This layer can be vulnerable to security threats such as fake data insertion, node capturing, malicious code, side-channel attacks, jamming attacks, sniffing or snooping, replay attacks, and sleep deprivation attacks.
- Network layer: The second layer, connecting the perception layer and the middleware layer. It is also called the communication layer because it acts as a communication bridge, transferring data acquired in the perception layer to other interconnected devices or to a processing unit, and vice versa. This transmission utilizes various network technologies such as LTE, 5G, Wi-Fi, and infrared. Data transfer is expected to be executed securely, preserving the confidentiality of the obtained information. Nonetheless, persistent security vulnerabilities can manifest as data transit attacks, phishing, identity-authentication and encryption attacks, and distributed denial-of-service (DDoS/DoS) attacks.
- Middleware layer: Also commonly known as the support layer or processing layer, it is the brain of the IoT ecosystem; its primary functions are data processing, storage, and intelligent decision-making. Thanks to its high computation capacity, the middleware layer is the best candidate for implementing advanced IoT security mechanisms, such as ML-based security systems. Therefore, it is also a target of adversarial attacks and of various other attacks such as SQL injection attacks, cloud malware injection, insider attacks, signature wrapping attacks, man-in-the-middle attacks, and cloud flooding attacks.
- Application layer: It is the uppermost layer within the IoT architecture. It serves as the user interface, enabling the end user to monitor IoT devices, observe data through various application services and tools, such as dashboards and mobile applications, and apply various control activities. IoT applications cover numerous use cases such as smart homes and cities, smart logistics and transportation, and smart agriculture and manufacturing. This layer is also subject to various security threats such as sniffing attacks, service interruption attacks, malicious code attacks, reprogramming attacks, access control attacks, data breaches, application vulnerabilities, and software bugs.
3. Adversarial Attack Taxonomy
3.1. Attacker’s Knowledge
- Full knowledge: This refers to white-box attacks, where the attacker possesses complete awareness of the target ML system’s information. This means that the adversary possesses complete and unrestricted access to the training dataset, ML model architecture, and its hyper-parameters as well as the feature learning. This is generally not feasible in most real adversarial attacks. However, the purpose of studying them is to assess the vulnerability of the target ML system to all possible cases and scenarios.
- Partial knowledge: Referring to gray-box attacks, where the attacker possesses partial information of the target ML system’s inner workings. This means that the adversary may have limited access to the feature representations, training dataset, and learning algorithm’s parameters. Using partial information, the attacker can create a practical strategy to deceive the ML model.
- No knowledge: This corresponds to black-box attacks, where the attacker is entirely unaware of the architecture and parameters of the target model. The adversary relies solely on his capability to query the target ML system by inputting the chosen data and monitoring corresponding results. These attacks are considered the most practical because they operate under the assumption that the attacker can only leverage system interfaces that are readily accessible for typical use.
3.2. Attacker’s Goal
- Security Infraction: Refers to security violations and can be classified into three main dimensions.
- Availability Attack: The attacker intends to minimize the model’s performance at testing or deployment phases, thereby making it unreliable and useless. Availability attacks can be executed through data poisoning when the attacker gains control over a portion of the training dataset, or through model extraction when the attacker predicts some relevant parameters of the target model.
- Integrity Attack: Focuses on undermining the integrity of an ML model’s output, leading to erroneous predictions made by the model. The attacker can induce an integrity breach by executing an evasion attack during the testing or deployment phases or a poisoning attack during the training phase.
- Privacy Attack: The attacker’s objective could involve gaining information about the system data, leading to data privacy attacks, or about the ML model, resulting in model privacy attacks.
- Attack Specificity: Based on their impact on the model output integrity, the attack specificity can be divided into three distinct categories:
- Confidence Reduction: The adversary intends to decrease the prediction certainty of the target model.
- Untargeted Misclassification: The adversary endeavors to change the predicted classification of an input instance to any class other than the original one.
- Targeted Misclassification: The adversary seeks to generate inputs that compel the classification model’s output to become a particular desired target class or endeavors to make the classification output for a specific input correspond to a specific target class.
3.3. Attacker’s Capability
- Training phase: In this phase, attacks on the ML model are more frequent than often realized. The attacker aims to mislead or disrupt the model’s outcomes by directly modifying the training dataset. These attacks are known as “poisoning” or “contaminating” attacks, and they require that the adversary has a degree of control over the training data. The attacker’s tactics during the training phase are shaped by their adversarial capabilities, which can be classified into three distinct categories.
- Data Injection: The attacker lacks access to the learning model’s parameters and training dataset, yet possesses the capability to append new data to the training dataset, thereby inserting adversarial samples to fool or degrade the ML model’s performance.
- Data Modification: The adversary cannot access the learning algorithms but can manipulate the training data, contaminating it before it is used to train the target model.
- Logic Corruption: The adversary can tamper with the learning algorithm of the target ML model. In other words, the learning algorithm is susceptible to interference from the opponent.
- Testing phase: In testing, adversarial attacks do not alter the training data or directly interfere with the model. Instead, they seek to make the model produce incorrect results by maliciously modifying input data. In addition to the attacker’s knowledge and the level of information at the adversary’s disposal, the efficacy of these attacks depends on three main capabilities: adaptive attack, non-adaptive attack, and strict attack.
- Adaptive Attack: The adversary is crafting an adaptive malicious input that exploits the weak points of the ML model to mistakenly classify the malicious samples as benign. The adaptiveness can be achieved either by meticulously designing a sequence of input queries and observing their outputs in a black-box scenario or through accessing the ML model information and altering adversarial example methods that maximize the error rate in case of a white-box scenario.
- Non-adaptive attack: The adversary’s access is restricted solely to the training data distribution of the target model. The attacker starts by building a local model, choosing a suitable training procedure, and training it using samples from data distribution to mimic the target classifier’s learned model. Leveraging this local model, the adversary creates adversarial examples and subsequently applies these manipulated inputs against the target model to induce misclassifications.
- Strict Attack: The attacker lacks access to the training dataset and is unable to dynamically alter the input request to monitor the model’s response. If the attacker attempts to request valid input samples and introduces slight perturbations to observe the output label, this activity will most probably be flagged by the target ML model as a malicious attack. Hence, the attacker is constrained to perform a restricted number of closely observed queries, presuming that the target ML system will only detect the malicious attacks after a specific number of attempts.
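The non-adaptive strategy above, training a local surrogate on the same data distribution and transferring the crafted inputs, can be sketched end-to-end on synthetic data. The Gaussian "traffic features", the tiny logistic-regression models, and the perturbation size are all illustrative assumptions, not taken from any cited work:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.5, steps=300):
    # Plain gradient-descent logistic regression
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Two Gaussian clusters standing in for benign (0) / malicious (1) features
X0 = rng.normal(-1.0, 1.0, size=(200, 2))
X1 = rng.normal(+1.0, 1.0, size=(200, 2))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(200), np.ones(200)]

# The target model is unknown to the attacker; the local surrogate is
# trained on a disjoint sample drawn from the same distribution.
w_tgt, b_tgt = train_logreg(X[::2], y[::2])
w_sur, b_sur = train_logreg(X[1::2], y[1::2])

# Craft the adversarial point against the surrogate only, then
# replay it against the target model.
x = np.array([1.5, 1.5])                            # a "malicious" sample
grad = (sigmoid(w_sur @ x + b_sur) - 1.0) * w_sur   # surrogate gradient
x_adv = x + 2.0 * np.sign(grad)
```

Because both models approximate the same decision boundary, the perturbation computed against the surrogate also lowers the target model's confidence on `x_adv`, which is the transfer effect the taxonomy describes.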
- Deployment phase: Adversarial attacks during the deployment or production phase represent the most realistic scenario, where the attacker’s knowledge of the target model is limited to its outputs, corresponding to a black-box scenario. Hence, the attack’s success during deployment relies on one of two main capabilities: the presumption of transferability, or feedback from queries. Consequently, the attacker’s capability during the deployment phase can be categorized into two distinct groups, namely transfer-based attacks and query-based attacks.
- Transfer-based Attack: The fundamental concept underlying transfer-based attacks is the creation of adversarial examples on local surrogate models in such a way that these examples effectively deceive the remote target model as well. The transferability property encompasses two types: task-specific transferability, which applies to scenarios where the remote victim model and the local model address the same task, for instance, classification; and cross-task transferability, which arises when the remote victim model and the local model are engaged in different tasks, such as classification and detection.
- Query-based Attack: The core idea behind query-based attacks lies in the direct querying of the target model and leveraging the outputs to optimize adversarial samples. To do this, the attacker queries the target model’s output by providing inputs and observing the corresponding results, which can take the form of class labels or score values. Consequently, query-based attacks can be further categorized into two distinct types: decision-based and score-based.
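As a concrete illustration of the score-based variant, the following sketch attacks a stand-in target that exposes only a probability score per query. The hidden model parameters, query budget, and step size are all hypothetical; the point is that the loop uses nothing but query feedback:

```python
import numpy as np

rng = np.random.default_rng(1)

# Black-box target: the attacker can only call this function and
# observe a score (e.g., probability of the "malicious" class).
_w = np.array([1.0, -0.7, 0.4, 0.9])   # hidden parameters
_b = -0.2

def query_target(x):
    return 1.0 / (1.0 + np.exp(-(_w @ x + _b)))

def score_based_attack(x, eps=0.05, budget=500):
    """Random-search perturbation that lowers the malicious score
    using only query feedback (no gradients, no model internals)."""
    best, best_score = x.copy(), query_target(x)
    for _ in range(budget):
        candidate = best + eps * rng.choice([-1.0, 1.0], size=x.shape)
        s = query_target(candidate)
        if s < best_score:          # keep only queries that help
            best, best_score = candidate, s
    return best, best_score

x = np.array([2.0, -1.0, 1.0, 1.5])   # sample flagged as malicious
x_adv, s_adv = score_based_attack(x)
```

A decision-based attack follows the same loop but can compare only predicted labels rather than scores, which typically requires far more queries, the trade-off the taxonomy highlights.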
3.4. Attacker’s Strategy
- Attack effectiveness: Describes how a bias is injected into the input data to maximize the efficiency of the attack. In essence, it is an optimization problem: maximize the loss function of the target ML algorithm on a validation dataset, or minimize its loss function on a poisoned dataset.
- Attack frequency: Refers to the decision between a one-time attack and an iterative process that updates the attack multiple times to enhance its optimization. While iterative attacks often outperform their one-time counterparts, they come with the trade-off of increased computational time and the chance of being detected by the ML-based security system. In certain situations, opting for a one-time attack may be adequate or the only practical option available.
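Viewed through this optimization lens, a one-time gradient-based attack is simply a single ascent step on the target's loss. A minimal numpy sketch of the idea, using a hypothetical linear classifier whose weights are illustrative rather than drawn from any cited system:

```python
import numpy as np

# Hypothetical linear classifier p(y=1|x) = sigmoid(w.x + b);
# the weights below are illustrative placeholders.
w = np.array([0.8, -1.2, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_input(x, y):
    # For binary cross-entropy, the gradient with respect to the
    # *input* is dL/dx = (sigmoid(w.x + b) - y) * w
    return (sigmoid(w @ x + b) - y) * w

def one_step_attack(x, y, eps=0.3):
    # Single ascent step on the loss: move each feature in the
    # sign direction of the gradient, bounded by eps.
    return x + eps * np.sign(loss_grad_wrt_input(x, y))

x = np.array([1.0, 2.0, -1.0])   # a sample with true label y = 1
y = 1
x_adv = one_step_attack(x, y)
```

An iterative attack repeats this step several times (re-evaluating the gradient each time), trading extra computation and detectability for a larger loss increase, exactly the frequency trade-off described above.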
4. Adversarial Attack Generation Methods for IoT Networks
4.1. Exploratory Attack Methods
4.1.1. Fast Gradient Sign Method
4.1.2. Basic Iteration Method
4.1.3. Projected Gradient Descent
4.1.4. Limited-Memory BFGS
4.1.5. Jacobian-Based Saliency Map Attack
4.1.6. Carlini and Wagner
4.1.7. DeepFool Attack
4.1.8. Zeroth-Order Optimization
4.1.9. One-Pixel Attack
4.2. Causative Attack Methods
4.2.1. Gradient Ascent
4.2.2. Label Flipping Attack
4.2.3. Generative Adversarial Networks
4.3. Inference Attack Methods
5. Adversarial Defense Methods in IoT Networks
5.1. Network Optimization
5.1.1. Defense Distillation
5.1.2. Gradient Masking
5.1.3. Gradient Regularization
5.2. Data Optimization
5.2.1. Adversarial Training
5.2.2. Feature Squeezing
5.2.3. Input Reconstruction
5.3. External Model Addition
5.3.1. Integrated Defense
5.3.2. Adversarial Example Detection
6. Research Works in ML-Based Security Systems of IoT Networks
Ref. | Year | Network | Security System(s) | Target Model(s) | Dataset(s) | Adversarial Attack Methods | Adversarial Defense Techniques
---|---|---|---|---|---|---|---
[123] | 2019 | IoT | IDS | FNN, SNN | Bot-IoT | FGSM, PGD, BIM | –
[126] | 2020 | IoT | IDS | SVM | Gaussian Distributions | Gaussian Distributions | ✗
[127] | 2021 | IoT | IDS | SVM, ANNs | Bot-IoT | LFA, FGSM | ✗
[128] | 2021 | IoT | IDS | Kitsune | Kitsune (Mirai) | Saliency Maps, iFGSM | ✗
[129] | 2021 | IoT | IDS | CNN, LSTM, GRU | CSE-CIC-IDS2018 | FGSM | –
[130] | 2021 | IoT | IDS | SVM, DT, RF, MLP | UNSW-NB15, Bot-IoT | JSMA, FGSM, C&W | ✗
[131] | 2021 | IoT | IDS | J48 DT, RF, BN, SVM | Smart Home Testbed | Rule-Based Approach | –
[132] | 2021 | IIoT | IDS | DNNs | CIFAR-10, GTSRB | One-Pixel | –
[115] | 2022 | IoT | IDS | CNN-LSTM | Bot-IoT | C-GAN | –
[113] | 2022 | IIoT | IDS | DRL | DS2OS | GAN | –
[133] | 2022 | IoT | IDS | DT, FGMD, LSTM, RNN | MedBIoT, IoTID | Rule-Based Approach | ✗
[134] | 2022 | IoT | IDS | GCN, JK-Net | UNSW-SOSR2019 | HAA | ✗
[135] | 2022 | IoT | IDS | DNNs | CIFAR-10, CIFAR-100 | NGA | –
[136] | 2021 | IoT | DIS | RF, DT, K-NN, NN | UNSW IoT Trace | IoTGAN | –
[137] | 2021 | IoT | DIS | CVNN | Generated Device Dataset | FGSM, BIM, PGD, MIM | ✗
[138] | 2022 | IoT | DIS | GAP, FCN, CNNs | IoT-Trace | CAM, Grad-CAM++ | ✗
[139] | 2022 | IoT | DIS | LSTM-CNN | LwHBench | FGSM, BIM, MIM, PGD, JSMA, C&W, Boundary Attack | –
[140] | 2019 | IoT | MDS | CFG-CNN | CFG dataset | GEA | ✗
[141] | 2020 | IoT | MDS | CNN | Drebin, Contagio, Genome | SC-LFA | –
[112] | 2023 | IoT | MDS | GNNs | CMaldroid, Drebin | VGAE-MalGAN | –
7. Challenges
7.1. Dataset
- Under-sampling: Here, entries from the over-represented class are eliminated to equalize the distribution between the minority and majority classes. However, if the original dataset is limited, this approach can result in overfitting.
- Over-sampling: In this technique, we replicate entries from the lesser-represented class until its count matches the dominant class. A limitation is that since the minority class has few unique data points, the model might end up memorizing these patterns, leading to overfitting.
- Synthetic Data Generation: This method uses Generative Adversarial Networks (GANs) to mimic the real data’s distribution and create authentic-seeming samples.
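The first two strategies reduce to index manipulation on the training set; a sketch on made-up data (the 950/50 class split and the random feature matrix are placeholders, not from any cited dataset):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy imbalanced dataset: 950 benign (0) vs. 50 attack (1) flows
y = np.r_[np.zeros(950, dtype=int), np.ones(50, dtype=int)]
X = rng.normal(size=(1000, 4))

maj = np.flatnonzero(y == 0)   # majority-class row indices
mino = np.flatnonzero(y == 1)  # minority-class row indices

# Under-sampling: keep a random subset of the majority class,
# matching the minority count (discards 900 benign rows).
keep = rng.choice(maj, size=len(mino), replace=False)
under_idx = np.r_[keep, mino]
X_under, y_under = X[under_idx], y[under_idx]

# Over-sampling: duplicate random minority entries until its
# count matches the majority class (risks memorization).
extra = rng.choice(mino, size=len(maj) - len(mino), replace=True)
over_idx = np.r_[maj, mino, extra]
X_over, y_over = X[over_idx], y[over_idx]
```

GAN-based synthetic generation replaces the duplication step with samples drawn from a learned approximation of the minority-class distribution, avoiding exact repeats at the cost of training a generator.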
7.2. Adversarial Attacks
7.3. Adversarial Defenses
8. Conclusions and Future Works
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Global IoT and Non-IoT Connections 2010–2025. Available online: https://www.statista.com/statistics/1101442/iot-number-of-connected-devices-worldwide/ (accessed on 10 December 2023).
- Khanna, A.; Kaur, S. Internet of Things (IoT), Applications and Challenges: A Comprehensive Review. Wirel. Pers Commun 2020, 114, 1687–1762. [Google Scholar] [CrossRef]
- Riahi Sfar, A.; Natalizio, E.; Challal, Y.; Chtourou, Z. A Roadmap for Security Challenges in the Internet of Things. Digit. Commun. Netw. 2018, 4, 118–137. [Google Scholar] [CrossRef]
- Chaabouni, N.; Mosbah, M.; Zemmari, A.; Sauvignac, C.; Faruki, P. Network Intrusion Detection for IoT Security Based on Learning Techniques. IEEE Commun. Surv. Tutor. 2019, 21, 2671–2701. [Google Scholar] [CrossRef]
- Namanya, A.P.; Cullen, A.; Awan, I.U.; Disso, J.P. The World of Malware: An Overview. In Proceedings of the 2018 IEEE 6th International Conference on Future Internet of Things and Cloud (FiCloud), Barcelona, Spain, 6–8 August 2018; pp. 420–427. [Google Scholar]
- Liu, Y.; Wang, J.; Li, J.; Niu, S.; Song, H. Machine Learning for the Detection and Identification of Internet of Things Devices: A Survey. IEEE Internet Things J. 2022, 9, 298–320. [Google Scholar] [CrossRef]
- Benazzouza, S.; Ridouani, M.; Salahdine, F.; Hayar, A. A Novel Prediction Model for Malicious Users Detection and Spectrum Sensing Based on Stacking and Deep Learning. Sensors 2022, 22, 6477. [Google Scholar] [CrossRef] [PubMed]
- Ridouani, M.; Benazzouza, S.; Salahdine, F.; Hayar, A. A Novel Secure Cooperative Cognitive Radio Network Based on Chebyshev Map. Digit. Signal Process. 2022, 126, 103482. [Google Scholar] [CrossRef]
- Benazzouza, S.; Ridouani, M.; Salahdine, F.; Hayar, A. Chaotic Compressive Spectrum Sensing Based on Chebyshev Map for Cognitive Radio Networks. Symmetry 2021, 13, 429. [Google Scholar] [CrossRef]
- Jordan, M.I.; Mitchell, T.M. Machine Learning: Trends, Perspectives, and Prospects. Science 2015, 349, 255–260. [Google Scholar] [CrossRef]
- Talaei Khoei, T.; Kaabouch, N. Machine Learning: Models, Challenges, and Research Directions. Future Internet 2023, 15, 332. [Google Scholar] [CrossRef]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
- Talaei Khoei, T.; Ould Slimane, H.; Kaabouch, N. Deep Learning: Systematic Review, Models, Challenges, and Research Directions. Neural Comput. Appl. 2023, 35, 23103–23124. [Google Scholar] [CrossRef]
- Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing Properties of Neural Networks. arXiv 2013, arXiv:1312.6199. [Google Scholar] [CrossRef]
- Paudice, A.; Muñoz-González, L.; Lupu, E.C. Label Sanitization against Label Flipping Poisoning Attacks. In ECML PKDD 2018 Workshops; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2019; Volume 11329, pp. 5–15. ISBN 978-3-030-13452-5. [Google Scholar]
- Shahid, A.R.; Imteaj, A.; Wu, P.Y.; Igoche, D.A.; Alam, T. Label Flipping Data Poisoning Attack Against Wearable Human Activity Recognition System. In Proceedings of the 2022 IEEE Symposium Series on Computational Intelligence (SSCI), Singapore, 4 December 2022; pp. 908–914. [Google Scholar]
- Abusnaina, A.; Wu, Y.; Arora, S.; Wang, Y.; Wang, F.; Yang, H.; Mohaisen, D. Adversarial Example Detection Using Latent Neighborhood Graph. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 7667–7676. [Google Scholar]
- Ibitoye, O.; Shafiq, O.; Matrawy, A. Analyzing Adversarial Attacks against Deep Learning for Intrusion Detection in IoT Networks. In Proceedings of the 2019 IEEE Global Communications Conference (GLOBECOM), Waikoloa, HI, USA, 9–13 December 2019; pp. 1–6. [Google Scholar]
- Klambauer, G.; Unterthiner, T.; Mayr, A.; Hochreiter, S. Self-Normalizing Neural Networks. arXiv 2017, arXiv:1706.02515. [Google Scholar] [CrossRef]
- Taheri, R.; Javidan, R.; Shojafar, M.; Pooranian, Z.; Miri, A.; Conti, M. On Defending against Label Flipping Attacks on Malware Detection Systems. Neural Comput. Appl. 2020, 32, 14781–14800. [Google Scholar] [CrossRef]
- Understanding the Mirai Botnet; USENIX Association, Ed.; 2017. Available online: https://www.usenix.org/system/files/conference/usenixsecurity17/sec17-antonakakis.pdf (accessed on 13 November 2023).
- Sharafaldin, I.; Habibi Lashkari, A.; Ghorbani, A.A. Toward Generating a New Intrusion Detection Dataset and Intrusion Traffic Characterization. In Proceedings of the 4th International Conference on Information Systems Security and Privacy, Madeira, Portugal, 22–24 January 2018; pp. 108–116. [Google Scholar]
- Anthi, E.; Williams, L.; Slowinska, M.; Theodorakopoulos, G.; Burnap, P. A Supervised Intrusion Detection System for Smart Home IoT Devices. IEEE Internet Things J. 2019, 6, 9042–9053. [Google Scholar] [CrossRef]
- Weka 3—Data Mining with Open Source Machine Learning Software in Java. Available online: https://www.cs.waikato.ac.nz/ml/weka/ (accessed on 28 October 2023).
- Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-Based Learning Applied to Document Recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Krizhevsky, A. CIFAR-10 and CIFAR-100 Datasets. Available online: https://www.cs.toronto.edu/~kriz/cifar.html (accessed on 28 October 2023).
- Stallkamp, J.; Schlipsing, M.; Salmen, J.; Igel, C. Man vs. Computer: Benchmarking Machine Learning Algorithms for Traffic Sign Recognition. Neural Netw. 2012, 32, 323–332. [Google Scholar] [CrossRef] [PubMed]
- DS2OS Traffic Traces. Available online: https://www.kaggle.com/datasets/francoisxa/ds2ostraffictraces (accessed on 28 October 2023).
- Guerra-Manzanares, A.; Medina-Galindo, J.; Bahsi, H.; Nõmm, S. MedBIoT: Generation of an IoT Botnet Dataset in a Medium-Sized IoT Network. In Proceedings of the 6th International Conference on Information Systems Security and Privacy, Valletta, Malta, 25–27 February 2020; pp. 207–218. [Google Scholar]
- Kang, H.; Ahn, D.H.; Lee, G.M.; Yoo, J.D.; Park, K.H.; Kim, H.K. IoT Network Intrusion Dataset. IEEE Dataport. 2019. Available online: https://ieee-dataport.org/open-access/iot-network-intrusion-dataset (accessed on 28 October 2023).
- Hamza, A.; Gharakheili, H.H.; Benson, T.A.; Sivaraman, V. Detecting Volumetric Attacks on loT Devices via SDN-Based Monitoring of MUD Activity. In Proceedings of the 2019 ACM Symposium on SDN Research, San Jose, CA, USA, 3 April 2019; pp. 36–48. [Google Scholar]
- Kipf, T.N.; Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. arXiv 2016, arXiv:1609.02907. [Google Scholar] [CrossRef]
- Xu, K.; Li, C.; Tian, Y.; Sonobe, T.; Kawarabayashi, K.; Jegelka, S. Representation Learning on Graphs with Jumping Knowledge Networks. arXiv 2018, arXiv:1806.03536. [Google Scholar] [CrossRef]
- Zhou, X.; Liang, W.; Wang, K.I.-K.; Huang, R.; Jin, Q. Academic Influence Aware and Multidimensional Network Analysis for Research Collaboration Navigation Based on Scholarly Big Data. IEEE Trans. Emerg. Top. Comput. 2021, 9, 246–257. [Google Scholar] [CrossRef]
- Sun, Z.; Ambrosi, E.; Pedretti, G.; Bricalli, A.; Ielmini, D. In-Memory PageRank Accelerator with a Cross-Point Array of Resistive Memories. IEEE Trans. Electron. Devices 2020, 67, 1466–1470. [Google Scholar] [CrossRef]
- Ma, J.; Ding, S.; Mei, Q. Towards More Practical Adversarial Attacks on Graph Neural Networks. arXiv 2020, arXiv:2006.05057. [Google Scholar] [CrossRef]
- Wong, E.; Rice, L.; Kolter, J.Z. Fast Is Better than Free: Revisiting Adversarial Training. arXiv 2020, arXiv:2001.03994. [Google Scholar] [CrossRef]
- Bao, J.; Hamdaoui, B.; Wong, W.-K. IoT Device Type Identification Using Hybrid Deep Learning Approach for Increased IoT Security. In Proceedings of the 2020 International Wireless Communications and Mobile Computing (IWCMC), Limassol, Cyprus, 15–19 June 2020; pp. 565–570. [Google Scholar]
- Sivanathan, A.; Gharakheili, H.H.; Loi, F.; Radford, A.; Wijenayake, C.; Vishwanath, A.; Sivaraman, V. Classifying IoT Devices in Smart Environments Using Network Traffic Characteristics. IEEE Trans. Mob. Comput. 2019, 18, 1745–1759. [Google Scholar] [CrossRef]
- Trabelsi, C.; Bilaniuk, O.; Zhang, Y.; Serdyuk, D.; Subramanian, S.; Santos, J.F.; Mehri, S.; Rostamzadeh, N.; Bengio, Y.; Pal, C.J. Deep Complex Networks. arXiv 2017, arXiv:1705.09792. [Google Scholar] [CrossRef]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929. [Google Scholar] [CrossRef]
- Sánchez Sánchez, P.M.; Jorquera Valero, J.M.; Huertas Celdrán, A.; Bovet, G.; Gil Pérez, M.; Martínez Pérez, G. LwHBench: A Low-Level Hardware Component Benchmark and Dataset for Single Board Computers. Internet Things 2023, 22, 100764. [Google Scholar] [CrossRef]
- De Keersmaeker, F.; Cao, Y.; Ndonda, G.K.; Sadre, R. A Survey of Public IoT Datasets for Network Security Research. IEEE Commun. Surv. Tutor. 2023, 25, 1808–1840. [Google Scholar] [CrossRef]
- Kaur, B.; Dadkhah, S.; Shoeleh, F.; Neto, E.C.P.; Xiong, P.; Iqbal, S.; Lamontagne, P.; Ray, S.; Ghorbani, A.A. Internet of Things (IoT) Security Dataset Evolution: Challenges and Future Directions. Internet Things 2023, 22, 100780. [Google Scholar] [CrossRef]
- Alex, C.; Creado, G.; Almobaideen, W.; Alghanam, O.A.; Saadeh, M. A Comprehensive Survey for IoT Security Datasets Taxonomy, Classification and Machine Learning Mechanisms. Comput. Secur. 2023, 132, 103283. [Google Scholar] [CrossRef]
- Ahmad, R.; Alsmadi, I.; Alhamdani, W.; Tawalbeh, L. A Comprehensive Deep Learning Benchmark for IoT IDS. Comput. Secur. 2022, 114, 102588. [Google Scholar] [CrossRef]
Ref. | Year | Network | Major Contribution(s) | Limitation(s) | White-Box | Black-Box | IDS | MDS | DIS | Adversarial Attack Taxonomy | Adversarial Attack Methods | Adversarial Defense Methods
---|---|---|---|---|---|---|---|---|---|---|---|---
[23] | 2022 | Traditional | Robustness evaluation of seven shallow ML-based IDS against adversarial attacks. | IoT network security is just mentioned in four references with no discussion. Only three adversarial defense techniques were mentioned. | ✓ | ✓ | ✓ | ✗ | ✗ | ✓ | ✓ | ✗
[24] | 2019 | Traditional | Evaluation of different adversarial attacks to ML models applied in computer and traditional network security. Classification of adversarial attacks based on security applications. Risk identification using adversarial risk grid map. | Mainly focused on traditional network security while IoT network security was very briefly discussed in a very short paragraph. | ✓ | ✓ | ✓ | ✓ | ✗ | ✓ | ✓ | ✓
[28] | 2021 | Traditional | Summarizes recent research on black-box adversarial attacks against NIDS. | Focused on black-box attacks only. The most popular adversarial attack and defense methods were not discussed. | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗
[30] | 2022 | IoT | Taxonomy of adversarial attacks from an insider (internal) perspective. Real-life applications of adversarial insider threats. | Focused on insider (white-box) adversarial attacks only. Model extraction attacks were not covered, as the survey is limited to insider adversarial threats where the adversary has full knowledge of the ML model. | ✓ | ✗ | ✗ | ✓ | ✗ | ✓ | ✓ | ✓
[31] | 2018 | IoT | Reviewed the existing IDSs used for securing IoT-based smart environments, such as Network Intrusion Detection Systems (NIDS) and Hybrid Intrusion Detection Systems (HIDS). | The vulnerability of ML-based IDSs to adversarial attacks was not covered. | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗
[32] | 2022 | IoT | Overview of existing ML-based attacks in IoT networks. Classification of ML-based attacks based on the type of ML algorithm used. | Adversarial attacks were briefly discussed as one type of various ML-based attacks in IoT networks. The authors mentioned some adversarial attack and defense methods with no discussion. | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗
[33] | 2020 | CPS | Surveyed adversarial threats within the context of Cyber-Physical Systems (CPS). | Considered only adversarial attacks that exploit sensors in IoT and CPS devices. Limited to sensor-based threats only. | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓
[35] | 2022 | Traditional | Adversarial attacks on malware detection systems. Adversarial malware evasion threat modeling. | Focused on the computer and cybersecurity domain, while the IoT network security domain was overlooked. | ✓ | ✓ | ✗ | ✓ | ✗ | ✓ | ✓ | ✗
[36] | 2023 | Traditional | Highlighting various types of adversarial attacks against IDS in the context of traditional networks. | IoT network security context was not included. Model extraction attacks were not covered. | ✓ | ✓ | ✓ | ✗ | ✗ | ✓ | ✓ | ✓
[34] | 2023 | Traditional | Explored the disparity in adversarial learning between Network Intrusion Detection Systems (NIDS) and Computer Vision, specifically focusing on DL-based NIDS in traditional networks. | Mainly focused on traditional network security, while IoT network security was barely discussed. Poisoning and model extraction attacks are not covered. | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | ✓ | ✓
Our Work | 2023 | IoT | Holistic review of ML adversarial attacks in three prominent IoT security systems: IDSs, MDSs, and DISs. Re-defining the taxonomy of threat methods in the IoT context. 2D classification of both adversarial attack and defense methods. | – | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Khazane, H.; Ridouani, M.; Salahdine, F.; Kaabouch, N. A Holistic Review of Machine Learning Adversarial Attacks in IoT Networks. Future Internet 2024, 16, 32. https://doi.org/10.3390/fi16010032