Please use this identifier to cite or link to this item: http://drsr.daiict.ac.in//handle/123456789/1170
Title: Investigating Robustness of Face Recognition System against Adversarial Attacks
Authors: Bhilare, Shruti
Sarvaiya, Maulik Karshanbhai
Keywords: Face Recognition
Adversarial Attacks
Security
Deep neural networks
Issue Date: 2023
Publisher: Dhirubhai Ambani Institute of Information and Communication Technology
Citation: Sarvaiya, Maulik Karshanbhai (2023). Investigating Robustness of Face Recognition System against Adversarial Attacks. Dhirubhai Ambani Institute of Information and Communication Technology. ix, 38 p. (Acc. # T01111).
Abstract: Facial Recognition (FR) systems based on deep neural networks (DNNs) are widely used in critical applications such as surveillance and access control, necessitating their reliable operation. Recent research has highlighted the vulnerability of DNNs to adversarial attacks, which add imperceptible perturbations to the original image. The existence of such attacks raises serious concerns about the security and robustness of deep neural networks, and researchers are actively developing strategies to strengthen DNNs against these threats. Additionally, the adversarial object used should look natural and not draw undue attention. Attacks are carried out in white-box targeted as well as untargeted settings on the Labeled Faces in the Wild (LFW) dataset. Attack success rates of 97.76% and 91.78% are achieved in the untargeted and targeted settings, respectively, demonstrating the high vulnerability of FR systems to such attacks. The attacks are evaluated in the digital domain to optimize the adversarial pattern as well as its size and location on the face.
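Note: As a rough illustration of the kind of white-box, untargeted perturbation attack the abstract describes, the sketch below runs a projected-gradient-style attack against a face-embedding model. The network, threshold-free cosine-similarity matching, and all hyperparameters are illustrative assumptions, not the dissertation's actual FR system, attack, or settings.

```python
# Hypothetical sketch: white-box untargeted PGD-style attack on a face-embedding model.
# ToyFaceEmbedder and all hyperparameters are stand-ins (real FR systems use e.g. ArcFace/FaceNet).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyFaceEmbedder(nn.Module):
    """Stand-in embedding network producing unit-norm face embeddings."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)


def pgd_untargeted(model, image, reference_emb, eps=8 / 255, alpha=2 / 255, steps=10):
    """Push the probe's embedding away from the enrolled template (untargeted attack)."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        # Cosine similarity to the genuine template; the attack tries to lower it.
        loss = F.cosine_similarity(model(adv), reference_emb).mean()
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv - alpha * grad.sign()               # step away from the template
            adv = image + (adv - image).clamp(-eps, eps)  # project onto the L_inf ball
            adv = adv.clamp(0, 1)                         # keep a valid image
    return adv.detach()


if __name__ == "__main__":
    model = ToyFaceEmbedder().eval()
    face = torch.rand(1, 3, 112, 112)        # placeholder probe image
    template = model(face).detach()          # enrolled embedding of the same identity
    adv_face = pgd_untargeted(model, face, template)
    print("similarity before:", F.cosine_similarity(model(face), template).item())
    print("similarity after :", F.cosine_similarity(model(adv_face), template).item())
```

A targeted variant would instead maximize similarity to another identity's template (e.g., step along +grad of the similarity to the target embedding); restricting the perturbation to a patch region with a binary mask would approximate optimizing the pattern's size and location on the face.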
URI: http://drsr.daiict.ac.in//handle/123456789/1170
Appears in Collections:M Tech Dissertations

Files in This Item:
File: 202111025.pdf    Size: 6.2 MB    Format: Adobe PDF

