Adversarial Defense Using Partial Pseudorandom Encryption
Machine learning models such as deep neural networks are vulnerable to adversarial attacks: carefully crafted adversarial examples force a learned classifier to misclassify inputs that a human observer would classify correctly. In this thesis, we present a novel defense against such adversarial attacks. We train and test the model on transformed images in black-box and gray-box scenarios. Specifically, we propose a transformation that partially encrypts every image before training and testing using Rivest–Shamir–Adleman (RSA), an asymmetric-key encryption algorithm, for visual encryption. The internal structure of the system and the keys generated by RSA are kept secret. We encrypt only those pixels selected by a pseudorandom number generator with a pre-decided secret seed. Images transformed in this way are extremely difficult to decrypt, and launching adaptive adversarial attacks or transferability attacks against them is likewise difficult, which makes this visual defense robust. As adversarial machine learning (AML) is still an emerging field, researchers have not previously attempted training models on encrypted images for robust learning. State-of-the-art defense techniques are effective, but they are computationally expensive and still do not guarantee total security. Partial encryption preserves image features, while asymmetric-key encryption makes it difficult for an adversary to guess the encryption parameters. These properties make the technique novel and allow it to outperform state-of-the-art defense techniques.
- M Tech Dissertations