Improving the Adversarial Robustness of Neural ODE Image Classifiers by Tuning the Tolerance Parameter
Abstract
1. Introduction
- We provide a complete study of neural ODE image classifiers and of how their robustness against adversarial attacks, such as the Carlini and Wagner attack, varies with the ODE solver tolerance;
- We demonstrate the defensive properties offered by ODE nets in a zero-knowledge adversarial scenario;
- We analyze how the robustness offered by neural ODE nets varies in the more stringent scenario of an adaptive attacker that controls the attack-time solver tolerance.
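The mechanism underlying these contributions is that an adaptive ODE solver's tolerance changes the classifier's forward pass itself. A minimal sketch of this effect, using SciPy's `solve_ivp` on a toy ODE block (the `tanh` dynamics, weights, and tolerance values here are illustrative assumptions, not the paper's trained network):

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) * 0.5  # toy "learned" dynamics weights

def dynamics(t, h):
    # f(h, t): the learned vector field of the ODE block (here a toy tanh layer)
    return np.tanh(W @ h)

def ode_block(h0, tol):
    # Integrate h'(t) = f(h, t) from t=0 to t=1 with an adaptive solver;
    # rtol/atol plays the role of the tolerance parameter the paper tunes.
    sol = solve_ivp(dynamics, (0.0, 1.0), h0, rtol=tol, atol=tol)
    return sol.y[:, -1]

h0 = rng.standard_normal(4)
tight = ode_block(h0, 1e-9)
loose = ode_block(h0, 1e-1)
# The two outputs differ slightly: loosening the tolerance perturbs the
# features fed to the classification head, which is what the paper exploits.
print(np.abs(tight - loose).max())
```

Because the perturbation is small relative to clean features but large relative to a finely tuned adversarial perturbation, clean accuracy degrades little while attack success can drop substantially.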
2. Related Work
3. Background
3.1. The Neural ODE Networks
3.2. The Carlini and Wagner Attack
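As a concrete reference point for this attack, the Carlini and Wagner L2 formulation minimizes ||δ||² + c · max(max_{i≠t} z_i − z_t, −κ) over the perturbation δ. A sketch of gradient descent on this objective for a toy linear classifier follows; the model, the constant c, the step size, and the confidence margin κ are illustrative assumptions, not the attack's actual hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((3, 5))       # toy linear classifier: logits z = W @ x
x = rng.standard_normal(5)
target = 2                            # targeted attack: push prediction to class 2
c, step, kappa = 1.0, 0.05, 0.5       # trade-off constant, step size, confidence

delta = np.zeros(5)
for _ in range(200):
    z = W @ (x + delta)
    other = max(z[i] for i in range(3) if i != target)
    # C&W objective: ||delta||^2 + c * max(max_{i != t} z_i - z_t, -kappa)
    if other - z[target] > -kappa:
        # hinge is active: follow a subgradient of the margin term
        i_star = max((i for i in range(3) if i != target), key=lambda i: z[i])
        grad_f = W[i_star] - W[target]
    else:
        grad_f = np.zeros(5)
    delta -= step * (2 * delta + c * grad_f)

z = W @ (x + delta)
# the toy attack reaches the target class with a small-norm perturbation
print(int(np.argmax(z)) == target, np.linalg.norm(delta))
```

For a linear model this objective is convex, so plain gradient descent suffices; on a neural network the same objective is minimized with Adam over many random restarts and a binary search on c.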
4. Robustness via Tolerance Variation
4.1. Defensive Tolerance Randomization
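The defense named by this section can be sketched as drawing a fresh solver tolerance for every query, so an attacker cannot pin down the exact forward pass it is optimizing against. The candidate tolerance set, dynamics, and weights below are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(2)
W = rng.standard_normal((4, 4)) * 0.5            # toy ODE-block weights
TOLERANCES = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1]      # candidate set (illustrative)

def predict(h0):
    # Defensive tolerance randomization: each query integrates the ODE block
    # with a randomly drawn tolerance, making the forward pass stochastic.
    tol = rng.choice(TOLERANCES)
    sol = solve_ivp(lambda t, h: np.tanh(W @ h), (0.0, 1.0), h0,
                    rtol=tol, atol=tol)
    return sol.y[:, -1]   # features fed to the final classification layer

h0 = rng.standard_normal(4)
# repeated queries on the same input can yield slightly different features
feats = [predict(h0) for _ in range(5)]
```

The randomness leaves clean predictions essentially unchanged, but an adversarial perturbation crafted against one tolerance may not transfer to the tolerance sampled at test time.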
5. Robustness under Adaptive Attackers
6. Experimental Details
6.1. Datasets: MNIST and CIFAR-10
6.2. The Training Phase
6.3. Carlini and Wagner Attack Implementation Details
7. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.J.; Fergus, R. Intriguing properties of neural networks. arXiv 2013, arXiv:1312.6199. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Identity mappings in deep residual networks. In Proceedings of the Computer Vision—ECCV 2016, Amsterdam, The Netherlands, 11–14 October 2016; pp. 630–645. [Google Scholar]
- Wu, Y.; He, K. Group normalization. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Hinton, G. Learning Multiple Layers of Features from Tiny Images; Technical Report; University of Toronto: Toronto, ON, Canada, 2009. [Google Scholar]
- Rauber, J.; Brendel, W.; Bethge, M. Foolbox: A Python toolbox to benchmark the robustness of machine learning models. arXiv 2017, arXiv:1707.04131. [Google Scholar]
| Model | MNIST Err (%) | MNIST ASR (%) | MNIST Pert. | CIFAR-10 Err (%) | CIFAR-10 ASR (%) | CIFAR-10 Pert. |
|---|---|---|---|---|---|---|
| RES | 0.4 | 99.7 | 1.1 | 7.3 | 100 | 2.6 |
| ODE | 0.5 | 99.7 | 1.4 | 9.1 | 100 | 2.2 |
| ODE | 0.5 | 90.7 | 1.7 | 9.2 | 100 | 2.4 |
| ODE | 0.6 | 74.4 | 1.9 | 9.3 | 100 | 4.1 |
| ODE | 0.8 | 71.6 | 1.7 | 10.6 | 100 | 8.0 |
| ODE | 1.2 | 69.7 | 1.9 | 11.3 | 100 | 13.7 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Carrara, F.; Caldelli, R.; Falchi, F.; Amato, G. Improving the Adversarial Robustness of Neural ODE Image Classifiers by Tuning the Tolerance Parameter. Information 2022, 13, 555. https://doi.org/10.3390/info13120555