dc.contributor.advisor | Bhilare, Shruti | |
dc.contributor.advisor | Hati, Avik | |
dc.contributor.author | Gajjar, Shivangi Bharatbhai | |
dc.date.accessioned | 2024-08-22T05:21:01Z | |
dc.date.available | 2024-08-22T05:21:01Z | |
dc.date.issued | 2022 | |
dc.identifier.citation | Gajjar, Shivangi Bharatbhai (2022). Generating Targeted Adversarial Attacks and Assessing their Effectiveness in Fooling Deep Neural Networks. Dhirubhai Ambani Institute of Information and Communication Technology. viii, 37 p. (Acc. # T01016). | |
dc.identifier.uri | http://drsr.daiict.ac.in//handle/123456789/1096 | |
dc.description.abstract | Deep neural network (DNN) models have gained popularity for most image classification problems. However, DNNs also have numerous vulnerabilities. These vulnerabilities can be exploited by an adversary to execute a successful adversarial attack, i.e., an algorithm that generates perturbed inputs capable of fooling a well-trained DNN. Among existing adversarial attacks, DeepFool, a white-box untargeted attack, is considered one of the most reliable algorithms for computing adversarial perturbations. However, in some scenarios such as person recognition, an adversary might want to carry out a targeted attack, such that the input gets misclassified into a specific target class. Moreover, studies show that defending against a targeted attack is tougher than defending against an untargeted one. Hence, generating a targeted adversarial example is desirable from an attacker's perspective. In this thesis, we propose "Targeted DeepFool", which is based on computing the minimal amount of perturbation required to reach the target hyperplane. The proposed algorithm produces a minimal amount of distortion on conventional image datasets: MNIST and CIFAR-10. Further, Targeted DeepFool shows excellent performance in terms of adversarial success rate. | |
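As a gloss on the abstract's central idea, below is a minimal PyTorch sketch of a targeted, DeepFool-style linearized step: at each iteration the input is pushed across the hyperplane separating the current prediction from the target class by the smallest L2 step. This is an assumed illustration, not the thesis's implementation; the function name, loop structure, and the overshoot parameter are hypothetical.

import torch

def targeted_deepfool_sketch(model, x, target, max_iter=50, overshoot=0.02):
    # Illustrative sketch of a targeted DeepFool-style attack (assumptions:
    # PyTorch model returning logits, single-sample batch x of shape [1, ...]).
    x_adv = x.clone().detach()
    for _ in range(max_iter):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        pred = logits.argmax(dim=1).item()
        if pred == target:
            break  # input is now classified as the target class
        # Gradients of the current and target class scores w.r.t. the input
        grad_pred = torch.autograd.grad(logits[0, pred], x_adv, retain_graph=True)[0]
        grad_target = torch.autograd.grad(logits[0, target], x_adv)[0]
        w = grad_target - grad_pred                       # normal of the linearized separating hyperplane
        f = (logits[0, target] - logits[0, pred]).item()  # signed score gap (negative until fooled)
        # Smallest L2 step that reaches the linearized target hyperplane
        r = (abs(f) / (w.norm() ** 2 + 1e-8)) * w
        x_adv = (x_adv + (1 + overshoot) * r).detach()
    return x_adv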
dc.publisher | Dhirubhai Ambani Institute of Information and Communication Technology | |
dc.subject | Deep Neural Network | |
dc.subject | Algorithm | |
dc.subject | Targeted DeepFool | |
dc.classification.ddc | 006.3 GAJ | |
dc.title | Generating Targeted Adversarial Attacks and Assessing their Effectiveness in Fooling Deep Neural Networks | |
dc.type | Dissertation | |
dc.degree | M. Tech | |
dc.student.id | 202011023 | |
dc.accession.number | T01016 | |