SURVEY OF ADVERSARIAL ATTACKS AND DEFENSE AGAINST ADVERSARIAL ATTACKS

Authors

  • Akshat Jain Electronics and Communication Engineering, IIT Roorkee, Uttarakhand, India
  • Sanskar Agarwal Electronics and Communication Engineering, IIT Roorkee, Uttarakhand, India
  • Armaan Pareek Electronics and Communication Engineering, IIT Roorkee, Uttarakhand, India
  • Vanshika Singh Electronics and Communication Engineering, IIT Roorkee, Uttarakhand, India

DOI:

https://doi.org/10.36676/dira.v12.i3.105

Keywords:

Adversarial attacks, GAN (Generative Adversarial Network), FGSM (Fast Gradient Sign Method)

Abstract

In recent years, the fields of Artificial Intelligence (AI) and Deep Learning (DL), along with Neural Networks (NNs), have shown great progress and broad scope for future research. Alongside these developments come threats and security vulnerabilities for neural networks and AI models: a few fabricated inputs/samples can cause a model's results to deviate. Patch-based adversarial attacks can change the output of a neural network to a completely different result by making only small changes to the network's input. These attacks apply a patch to the input image in order to cause the classifier to misclassify and make an incorrect prediction. The goal of this research is to develop effective defense strategies against these types of attacks and make the model/neural network more robust.
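To illustrate the Fast Gradient Sign Method (FGSM) named in the keywords, here is a minimal, hedged sketch: it is not the paper's own code, and the toy logistic "model", its weights, and the `fgsm_perturb` helper are assumptions made purely for demonstration. FGSM nudges each input feature by a small step `epsilon` in the sign of the loss gradient with respect to the input, which is typically enough to degrade the classifier's confidence in the true class.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """FGSM: move each feature by epsilon in the direction that increases the loss."""
    return x + epsilon * np.sign(grad)

# Hypothetical toy model: logistic regression with fixed weights.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict_prob(x):
    """Probability assigned to the true class (label y = 1)."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# For cross-entropy loss with true label y, the input gradient is
# dL/dx = (p - y) * w, where p is the predicted probability.
x = np.array([0.2, -0.4, 1.0])
y = 1.0
p = predict_prob(x)
grad = (p - y) * w

x_adv = fgsm_perturb(x, grad, epsilon=0.5)
# The perturbed input lowers the probability of the true class.
print(predict_prob(x), predict_prob(x_adv))
```

In this sketch the gradient is computed analytically; with a deep network one would obtain it via automatic differentiation instead, but the perturbation step is the same sign-of-gradient update.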

References

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. CoRR abs/1312.6199, 2013.

Tom B. Brown, Dandelion Mané, Aurko Roy, Martín Abadi, and Justin Gilmer. Adversarial patch.

Simen Thys, Wiebe Van Ranst, and Toon Goedemé. Fooling automated surveillance cameras: adversarial patches to attack person detection.

I. Evtimov, K. Eykholt, E. Fernandes, T. Kohno, B. Li, A. Prakash, A. Rahmati, and D. Song. Robust physical-world attacks on deep learning models. arXiv preprint arXiv:1707.08945, 2017.

A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial examples. arXiv preprint arXiv:1706.06083, 2017.

Published

2024-09-18

How to Cite

Jain, A., Agarwal, S., Pareek, A., & Singh, V. (2024). SURVEY OF ADVERSARIAL ATTACKS AND DEFENSE AGAINST ADVERSARIAL ATTACKS. Darpan International Research Analysis, 12(3), 535–542. https://doi.org/10.36676/dira.v12.i3.105

Section

Articles
