
dc.contributor.advisor: Singh, Priyanka
dc.contributor.advisor: Joshi, Manjunath V.
dc.contributor.author: Yagnik, Shrey Devenkumar
dc.date.accessioned: 2024-08-22T05:21:26Z
dc.date.available: 2024-08-22T05:21:26Z
dc.date.issued: 2023
dc.identifier.citation: Yagnik, Shrey Devenkumar (2023). On the Robustness of Federated Learning towards Various Attacks. Dhirubhai Ambani Institute of Information and Communication Technology. vii, 33 p. (Acc. # T01143).
dc.identifier.uri: http://drsr.daiict.ac.in//handle/123456789/1202
dc.description.abstract: A study of Federated Learning (FL), a form of decentralized learning in which clients train locally and a central server aggregates their updates into a federated average. Deep learning models have been used in numerous security-critical settings since they perform well on various tasks. Here, we study different kinds of attacks on FL. FL has become a popular distributed training method because it enables users to work with large datasets without sharing them. Once the model has been trained on data held on local devices, only the updated model parameters are sent to the central server. Because the FL approach is distributed, an adversary could launch an attack to influence the model's behavior. In this work, we studied a Backdoor attack, a black-box attack in which we added a few poisoned instances to observe the model's behavior at test time. We also conducted three types of White-Box attacks: the Fast Gradient Sign Method (FGSM), Carlini-Wagner (CW), and DeepFool. We ran various experiments on the standard CIFAR10 dataset to alter the model's behavior, using ResNet20 and DenseNet as the Deep Neural Networks. We found adversarial samples to which the required perturbation can be added to fool the model into misclassifications. This decentralized approach to training can make it harder for attackers to access the training data, but it can also introduce new vulnerabilities that attackers can exploit. We found that the expected behavior of the model could be compromised with little difference in training accuracy.
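The federated averaging step the abstract refers to (clients train locally, the server returns the weighted average of their parameters) can be sketched as follows. This is a minimal illustration of the general FedAvg aggregation rule, not code from the thesis; the function name and the sample-count weighting scheme are assumptions.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate client model parameters into a federated average.

    client_weights: one list of np.ndarray layers per client
    client_sizes:   number of local training samples per client,
                    used to weight each client's contribution
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    averaged = []
    for layer in range(n_layers):
        # Weighted sum of this layer across all clients
        acc = np.zeros_like(client_weights[0][layer], dtype=float)
        for weights, n in zip(client_weights, client_sizes):
            acc += (n / total) * weights[layer]
        averaged.append(acc)
    return averaged
```

Because only these averaged parameters (not the raw data) travel to the server, a malicious client can bias the aggregate by submitting poisoned updates, which is the attack surface the thesis studies.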
dc.publisher: Dhirubhai Ambani Institute of Information and Communication Technology
dc.subject: Federated Learning
dc.subject: Deep Learning
dc.subject: White-Box attacks
dc.subject: Fast Gradient Sign Method (FGSM)
dc.subject: Carlini-Wagner (CW)
dc.subject: DeepFool
dc.classification.ddc: 006.31 YAG
dc.title: On the Robustness of Federated Learning towards Various Attacks
dc.type: Dissertation
dc.degree: M. Tech
dc.student.id: 202111072
dc.accession.number: T01143

