dc.contributor.advisor  Roy, Anil K.
dc.contributor.advisor  Anand, Pritam
dc.contributor.author  Ratanghayara, Khushali
dc.date.accessioned  2024-08-22T05:21:21Z
dc.date.available  2024-08-22T05:21:21Z
dc.date.issued  2023
dc.identifier.citation  Ratanghayara, Khushali (2023). Automated Handwritten Answer Sheet Evaluation System Using Deep Learning Methods. Dhirubhai Ambani Institute of Information and Communication Technology. ix, 472 p. (Acc. # T01128).
dc.identifier.uri  http://drsr.daiict.ac.in//handle/123456789/1187
dc.description.abstract  Automation has gained significant prominence in various technological domains, offering the potential to streamline processes and minimize human error. Even though education and the teaching-learning process are believed to require human-to-human interaction, automation holds tremendous promise here too, particularly in tasks such as student registration, class attendance, administrative duties, and answer sheet evaluation.

This thesis focuses on automating the evaluation of answer sheets submitted by students in an examination. Within the scope of the thesis, we consider answers to only two types of questions: multiple choice questions, for which one of the four letters a, b, c or d is selected as the correct answer, and objective type questions, which have single-word answers such as "Yes" or "No", "correct" or "incorrect", "true" or "false". The system therefore needs to "read" these ten possible answers and mark each response as right or wrong. A right answer fetches a positive mark, and a wrong answer attracts a zero or a negative mark, as the case may be.

We took a two-pronged approach to achieve automation. First, we tried a "classification" approach, in which the system was trained to classify each answer into one of these ten classes, i.e., a, b, c, d, Yes, No, Correct, Incorrect, True, False. We employed an object detection model, YOLO, capable of classifying this fixed set of ten classes representing the possible answers. The model achieved an accuracy of 93% and demonstrated the potential to automate the evaluation of multiple-choice and one-word answer-type questions. However, the system fails when it encounters an answer outside these classes, for example, "right": it forces such a response into one of the ten classes, which is incorrect and reduces the efficiency of the automation system. To experiment with this approach, we created our own handwritten dataset of these ten classes, containing over 24,200 samples.

To address the limitations of the first approach, we treated the problem as a recognition problem, leveraging text recognition models that combine convolutional neural networks (CNNs) and recurrent neural networks (RNNs). YOLO was used to recognize each of the 26 letters of the English script. Five deep learning models, namely CNN, CNN + RNN, CNN + LSTM, CNN + Bidirectional LSTM, and CNN + Bidirectional GRU, were then used to read the word. These models were capable of recognizing all words written in the answer sheets. We then matched the recognized words against the fixed set of answers; responses for which no match was found were set aside for further evaluation. Through a comparison of the outputs from each model, we achieved an accuracy of 91%. This outcome underscores the effectiveness of employing diverse methods to automate the evaluation of answer sheets. The models were trained on the benchmark IAM word dataset [17], and our self-generated dataset of handwritten samples was then combined with this approach to test the automatic answer sheet evaluation system.

In the future, the remaining two types of questions, namely short answer and long answer questions, may also be included in this answer sheet evaluation system. This will require an NLP-based approach to evaluate the answers contextually.
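
The recognition approach described in the abstract, reading a handwritten word with a CNN + recurrent model and then matching it against the fixed set of ten answers, can be illustrated with a short sketch. The code below is a minimal illustration assuming TensorFlow/Keras; the image size, layer widths, the build_recognizer and mark_answer helpers, and the CTC-style output layer are assumptions made here for illustration, not the author's exact implementation.

import tensorflow as tf
from tensorflow.keras import layers

CHARSET = "abcdefghijklmnopqrstuvwxyz"           # the 26 letters recognized character by character
VALID_ANSWERS = {"a", "b", "c", "d", "yes", "no",
                 "correct", "incorrect", "true", "false"}

def build_recognizer(img_h=32, img_w=128):
    """CNN feature extractor followed by a bidirectional LSTM sequence reader (illustrative)."""
    inp = layers.Input(shape=(img_h, img_w, 1))
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D(2)(x)                       # 16 x 64
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)                       # 8 x 32
    # Treat each of the 32 horizontal positions as one time step for the RNN.
    x = layers.Permute((2, 1, 3))(x)                    # (width, height, channels)
    x = layers.Reshape((img_w // 4, (img_h // 4) * 64))(x)
    x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
    # One extra output unit for the blank symbol if the model is trained with CTC loss.
    out = layers.Dense(len(CHARSET) + 1, activation="softmax")(x)
    return tf.keras.Model(inp, out)

def mark_answer(recognized_word, correct_answer, positive=1.0, negative=0.0):
    """Match a recognized word against the fixed answer set and mark it."""
    word = recognized_word.strip().lower()
    if word not in VALID_ANSWERS:
        return None                                     # unknown word: route to manual evaluation
    return positive if word == correct_answer.strip().lower() else negative

In this sketch, a word the recognizer cannot map to one of the ten valid answers is flagged for manual evaluation rather than force-classified, which reflects the behaviour that motivates the thesis's second approach.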
dc.publisher  Dhirubhai Ambani Institute of Information and Communication Technology
dc.subject  Deep Learning
dc.subject  Evaluation System
dc.subject  Exam evaluation
dc.subject  Object detection model
dc.classification.ddc  006.31 RAT
dc.title  Automated Handwritten Answer Sheet Evaluation System Using Deep Learning Methods
dc.type  Dissertation
dc.degree  M. Tech
dc.student.id  202111049
dc.accession.number  T01128

