SURVEY OF ADVERSARIAL ATTACKS AND DEFENSE AGAINST ADVERSARIAL ATTACKS
DOI: https://doi.org/10.36676/dira.v12.i3.105

Keywords: Adversarial attacks, GAN (Generative Adversarial Network), FGSM (Fast Gradient Sign Method)

Abstract
In recent years, Artificial Intelligence (AI), Deep Learning (DL), and Neural Networks (NNs) have shown great progress and scope for future research. Along with these developments come threats and security vulnerabilities for neural networks and AI models: a few fabricated inputs or samples can cause a model's predictions to deviate. Patch-based adversarial attacks can change the output of a neural network to a completely different result by modifying only a small region of the input. These attacks apply a crafted patch to the input image in order to cause the classifier to make an incorrect prediction. The goal of this research is to develop effective defense strategies against these types of attacks and to make the model or neural network more robust.
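Small, gradient-guided input perturbations of this kind are classically generated with FGSM, one of the keywords above. The PyTorch sketch below is a minimal illustration of that method, not an implementation from this survey; the epsilon budget of 0.03 and the assumption that pixel values lie in [0, 1] are choices made here for the example.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # Fast Gradient Sign Method: shift each pixel by +/- epsilon in the
    # direction that increases the classification loss.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    # Assumes inputs are normalized to [0, 1]; clamping keeps pixels valid.
    return x_adv.clamp(0.0, 1.0).detach()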
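A patch-based attack differs from FGSM in that the perturbation is confined to one contiguous region rather than spread across every pixel. The sketch below only shows how such a patch is pasted onto a batch of images; the patch contents themselves would be optimized separately (for example, by gradient ascent on a target-class loss), and the random-placement policy used here is an illustrative assumption, not a method taken from the paper.

import torch

def apply_patch(images, patch, loc=None):
    # Paste a (C, ph, pw) patch onto every image in an (N, C, H, W) batch.
    _, _, H, W = images.shape
    _, ph, pw = patch.shape
    if loc is None:
        # Illustrative placement policy: a random top-left corner per call.
        top = torch.randint(0, H - ph + 1, (1,)).item()
        left = torch.randint(0, W - pw + 1, (1,)).item()
    else:
        top, left = loc
    patched = images.clone()
    patched[:, :, top:top + ph, left:left + pw] = patch
    return patched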
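On the defense side, a standard baseline for the robustness goal stated above is adversarial training: the model is fit on attacked examples rather than clean ones. A minimal sketch, reusing the hypothetical fgsm_attack helper from the first example (a projected-gradient variant would simply substitute a stronger inner attack):

import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    # One adversarial-training step: the loss is computed on attacked inputs.
    x_adv = fgsm_attack(model, x, y, epsilon)  # craft adversarial examples on the fly
    optimizer.zero_grad()                      # drop gradients left over from the attack
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()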
License
Copyright (c) 2024 Darpan International Research Analysis
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.