Please use this identifier to cite or link to this item:
http://drsr.daiict.ac.in//handle/123456789/1202
Title: | On the Robustness of Federated Learning towards Various Attacks |
Authors: | Singh, Priyanka; Joshi, Manjunath V.; Yagnik, Shrey Devenkumar |
Keywords: | Federated Learning; Deep Learning; White-Box attacks; Fast Gradient Sign Method (FGSM); Carlini-Wagner (CW); DeepFool |
Issue Date: | 2023 |
Publisher: | Dhirubhai Ambani Institute of Information and Communication Technology |
Citation: | Yagnik, Shrey Devenkumar (2023). On the Robustness of Federated Learning towards Various Attacks. Dhirubhai Ambani Institute of Information and Communication Technology. vii, 33 p. (Acc. # T01143). |
Abstract: | A study based on Federated Learning (FL), a form of decentralized learning in which clients train locally and the central server returns the federated average. Deep learning models have been used in numerous security-critical settings since they perform well on a variety of tasks. Here, we study different kinds of attacks on FL. FL has become a popular distributed training method because it enables users to work with large datasets without sharing them. Once the model has been trained on local data, only the updated model parameters are sent to the central server. Because the FL approach is distributed, an adversary could launch an attack to influence the model's behavior. In this work, we studied a Backdoor attack, a black-box attack in which we added a few poisonous instances to observe the model's behavior at test time. We also conducted three types of White-Box attacks: the Fast Gradient Sign Method (FGSM), Carlini-Wagner (CW), and DeepFool. We performed various experiments on the standard CIFAR10 dataset to alter the model's behavior, using ResNet20 and DenseNet as the Deep Neural Networks. We found adversarial samples to which the required perturbation is added to fool the model into misclassification. This decentralized approach to training can make it more difficult for attackers to access the training data, but it can also introduce new vulnerabilities that attackers can exploit. We found that the expected behavior of the model could be compromised with little difference in training accuracy. |
URI: | http://drsr.daiict.ac.in//handle/123456789/1202 |
Appears in Collections: | M Tech Dissertations |
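The abstract describes two mechanisms: federated averaging, where the server combines locally trained client parameters, and FGSM, which perturbs an input by a small step in the direction of the sign of the loss gradient. A minimal pure-Python sketch of both ideas (function names and the toy list-of-floats parameter representation are illustrative, not from the dissertation):

```python
def federated_average(client_weights, client_sizes):
    """FedAvg: size-weighted average of per-client parameter vectors.

    client_weights: list of parameter vectors (lists of floats), one per client.
    client_sizes:   number of local training samples at each client.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

def sign(v):
    """Sign function: -1, 0, or 1."""
    return (v > 0) - (v < 0)

def fgsm_perturb(x, grad, eps=0.03):
    """FGSM: x_adv = x + eps * sign(grad of the loss w.r.t. x).

    x:    input vector (e.g. flattened image pixels).
    grad: gradient of the loss with respect to x, computed elsewhere.
    eps:  perturbation budget; small enough to be visually imperceptible.
    """
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]
```

In a real FL attack study, `grad` would come from a framework's autograd on the global model, and the averaging would run over full network weight tensors rather than flat lists; the sketch only shows the arithmetic each step performs.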
Files in This Item:
File | Size | Format | |
---|---|---|---|
202111072.pdf | 6.3 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.