
dc.contributor.advisor: Bhilare, Shruti
dc.contributor.advisor: Hati, Avik
dc.contributor.author: Gajjar, Shivangi Bharatbhai
dc.date.accessioned: 2024-08-22T05:21:01Z
dc.date.available: 2024-08-22T05:21:01Z
dc.date.issued: 2022
dc.identifier.citation: Gajjar, Shivangi Bharatbhai (2022). Generating Targeted Adversarial Attacks and Assessing their Effectiveness in Fooling Deep Neural Networks. Dhirubhai Ambani Institute of Information and Communication Technology. viii, 37 p. (Acc. # T01016).
dc.identifier.uri: http://drsr.daiict.ac.in//handle/123456789/1096
dc.description.abstract: Deep neural network (DNN) models have gained popularity for most image classification problems. However, DNNs also have numerous vulnerable areas. These vulnerabilities can be exploited by an adversary to execute a successful adversarial attack, which is an algorithm that generates perturbed inputs capable of fooling a well-trained DNN. Among existing adversarial attacks, DeepFool, a white-box untargeted attack, is considered one of the most reliable algorithms for computing adversarial perturbations. However, in some scenarios, such as person recognition, an adversary might want to carry out a targeted attack so that the input is misclassified into a specific target class. Moreover, studies show that defending against a targeted attack is harder than defending against an untargeted one. Hence, generating a targeted adversarial example is desirable from an attacker's perspective. In this thesis, we propose "Targeted DeepFool", which is based on computing the minimal amount of perturbation required to reach the target hyperplane (sketched below, after the record). The proposed algorithm produces a minimal amount of distortion on the conventional image datasets MNIST and CIFAR-10. Further, Targeted DeepFool shows excellent performance in terms of adversarial success rate.
dc.publisher: Dhirubhai Ambani Institute of Information and Communication Technology
dc.subject: Deep Neural Network
dc.subject: algorithm
dc.subject: Targeted DeepFool
dc.classification.ddc: 006.3 GAJ
dc.title: Generating Targeted Adversarial Attacks and Assessing their Effectiveness in Fooling Deep Neural Networks
dc.type: Dissertation
dc.degree: M. Tech
dc.student.id: 202011023
dc.accession.number: T01016
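
The abstract's core idea, taking the smallest step that moves an input onto the target class's decision hyperplane, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch rendering of a targeted DeepFool-style iteration under a local linear approximation of the classifier; the function and parameter names (targeted_deepfool, max_iter, overshoot) are illustrative assumptions, not the thesis's actual implementation.

import torch

def targeted_deepfool(model, x, target, max_iter=50, overshoot=0.02):
    # Sketch only: at each step, linearize the classifier around the current
    # point and take the smallest L2 step that reaches the hyperplane
    # separating the current prediction from the chosen target class.
    x_adv = x.clone().detach()
    for _ in range(max_iter):
        x_adv.requires_grad_(True)
        logits = model(x_adv)            # assumes a batch of one: shape (1, num_classes)
        pred = logits.argmax(dim=1).item()
        if pred == target:               # input is now classified as the target
            break
        grad_pred = torch.autograd.grad(logits[0, pred], x_adv, retain_graph=True)[0]
        grad_tgt = torch.autograd.grad(logits[0, target], x_adv)[0]
        w = grad_tgt - grad_pred         # normal of the linearized decision boundary
        f = (logits[0, target] - logits[0, pred]).item()
        # Minimal perturbation onto the hyperplane f_target(x) = f_pred(x),
        # overshot slightly so the point actually crosses the boundary.
        r = (abs(f) / (w.norm() ** 2 + 1e-8)) * w
        x_adv = (x_adv + (1 + overshoot) * r).detach()
    return x_adv

Whereas the original untargeted DeepFool picks whichever class hyperplane is closest, the targeted variant sketched above fixes the hyperplane toward the chosen target class, which matches the abstract's description of computing the minimal perturbation required to reach the target hyperplane.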

