Deep Model Poisoning Attack on Federated Learning
Abstract
1. Introduction
- By analyzing model capacity, we propose an optimization-based model poisoning attack that injects adversarial neurons into the redundant space of a neural network. These redundant neurons are essential to the poisoning attack, yet they have little correlation with the main task of federated learning; consequently, the proposed attack does not degrade the main-task performance of the shared global model (a toy sketch of this idea follows this list).
- We generalize two defenses used in collaborative learning systems to defend against local model poisoning attacks. Numerical experiments demonstrate that the proposed method bypasses these defenses and achieves a high attack success rate.
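As a rough illustration of the first contribution, the sketch below crafts a malicious local update that fits an attacker-chosen backdoor set while a penalty term keeps the update consistent with the main-task objective, so the aggregated model's main-task accuracy is preserved. The logistic-regression model, the penalty weight `lam`, the trigger construction, and all data shapes are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logistic(w, X, y):
    # Gradient of the mean logistic loss of a linear model.
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def poisoned_update(w_global, X_main, y_main, X_bd, y_bd,
                    lam=0.5, eta=0.1, steps=50):
    # Fit the backdoor set (X_bd, y_bd) while the lam-weighted term
    # keeps the update useful for the main task, hiding the attack.
    w = w_global.copy()
    for _ in range(steps):
        g = grad_logistic(w, X_bd, y_bd) + lam * grad_logistic(w, X_main, y_main)
        w -= eta * g
    return w - w_global  # model update sent to the aggregation server

rng = np.random.default_rng(0)
d = 20
X_main = rng.normal(size=(200, d))
y_main = (X_main[:, 0] > 0).astype(float)
X_bd = rng.normal(size=(20, d))
X_bd[:, -1] = 3.0                 # hypothetical backdoor trigger feature
y_bd = np.ones(20)                # attacker-chosen target label
delta = poisoned_update(np.zeros(d), X_main, y_main, X_bd, y_bd)
print("crafted update norm:", np.linalg.norm(delta))
```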
2. Background and Related Works
2.1. Machine Learning
2.2. Collaborative Learning
2.3. Attacks against Machine Learning
- Evasion Attack: the most common type of attack in the adversarial setting. The adversary tries to evade the system by perturbing malicious samples during the testing phase. This setting does not assume any influence over the training data.
- Poisoning Attack: also known as contamination of the training data, this attack takes place during the training of the machine learning model. The adversary poisons the training data by injecting carefully crafted samples so as to eventually compromise the whole learning process (a minimal label-flipping sketch follows this list).
- Exploratory Attack: these attacks do not influence the training dataset. Given black-box access to the model, the adversary tries to gain as much knowledge as possible about the underlying system's learning algorithm and the patterns in its training data. The definition of a threat model depends on the information the adversary has at their disposal.
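To make the poisoning category concrete, here is a minimal label-flipping sketch: a classic data poisoning strategy in which a fraction of one class's labels is rewritten to an attacker-chosen target class. The class indices, flip fraction, and toy labels are assumptions for illustration only.

```python
import numpy as np

def label_flip(y, source=7, target=1, frac=0.3, seed=0):
    # Flip a fraction of the `source`-class labels to `target`.
    rng = np.random.default_rng(seed)
    y = y.copy()
    idx = np.flatnonzero(y == source)
    chosen = rng.choice(idx, size=int(frac * len(idx)), replace=False)
    y[chosen] = target
    return y

y = np.repeat(np.arange(10), 100)   # 1000 toy labels, classes 0-9
y_poisoned = label_flip(y)
print((y_poisoned != y).sum(), "labels flipped")
```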
2.4. Related Works
3. Attack Methodology
3.1. Problem Definition
3.2. Adversary’s Goal
3.3. Adversary’s Capability
3.4. Optimization-Based Model Poisoning Attack
4. Experiments
4.1. Dataset and Experiment Setup
4.2. Effectiveness and Persistence of Attack
4.3. Stealth of Attack
4.4. Discussion and Next Steps
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; Arcas, B.A. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282.
- Konecný, J.; McMahan, H.B.; Yu, F.X.; Richtárik, P.; Suresh, A.T.; Bacon, D. Federated Learning: Strategies for Improving Communication Efficiency. arXiv 2016, arXiv:1610.05492.
- Cao, X.; Fang, M.; Liu, J.; Gong, N.Z. FLTrust: Byzantine-Robust Federated Learning via Trust Bootstrapping. arXiv 2020, arXiv:2012.13995.
- Cao, X.; Jia, J.; Gong, N.Z. Provably Secure Federated Learning against Malicious Clients. arXiv 2021, arXiv:2102.01854.
| Attack Category | Methods | Persistence | Stealth | Scenario |
|---|---|---|---|---|
| Data poisoning | [10] | | | Machine learning |
| | [11] | | | Linear regression |
| | [31] | | | Clean-label attack |
| | [32] | | | Recommender system |
| Model poisoning | [4] | | | Federated learning |
| | [8] | | | Federated learning |
| | Proposed | | | Federated learning |
| Dataset | Classes | Features | Net | /N | IE | η1/η | |
|---|---|---|---|---|---|---|---|
| MNIST | 10 | 784 | LeNet | 10/20 | 5 | 0.05/0.04 | 100 |
| CIFAR-10 | 10 | 1024 | ResNet | 10/50 | 10 | 0.1/0.5 | 100 |
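For readers unfamiliar with the training loop these hyperparameters configure, the following sketch runs one FedAvg round (McMahan et al., cited above) with the MNIST row's settings: 10 of N = 20 clients sampled per round, 5 local epochs, learning rate 0.05. The logistic-regression local model and synthetic client data are placeholders standing in for the paper's LeNet on MNIST.

```python
import numpy as np

def local_train(w, X, y, epochs=5, lr=0.05):
    # Plain logistic-regression gradient descent standing in for LeNet.
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def fedavg_round(w_global, clients, m=10, epochs=5, lr=0.05, seed=0):
    # One FedAvg round: sample m of the N clients, train locally,
    # then average the resulting models with equal weights.
    rng = np.random.default_rng(seed)
    picked = rng.choice(len(clients), size=m, replace=False)
    locals_ = [local_train(w_global.copy(), X, y, epochs, lr)
               for X, y in (clients[i] for i in picked)]
    return np.mean(locals_, axis=0)

rng = np.random.default_rng(1)
d = 10
clients = [(rng.normal(size=(50, d)),
            (rng.normal(size=50) > 0).astype(float)) for _ in range(20)]
w = fedavg_round(np.zeros(d), clients)   # m=10 of N=20, as in the MNIST row
print("global model after one round:", np.round(w[:3], 3))
```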
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).