Please use this identifier to cite or link to this item: http://drsr.daiict.ac.in//handle/123456789/970
Full metadata record
dc.contributor.advisor: Majumder, Prasenjit
dc.contributor.author: Tyagi, Akansha
dc.date.accessioned: 2020-09-22T13:48:37Z
dc.date.available: 2023-02-17T13:48:37Z
dc.date.issued: 2020
dc.identifier.citation: Tyagi, Akansha (2020). What does BERT learn about questions. Dhirubhai Ambani Institute of Information and Communication Technology. vii, 40 p. (Acc.No: T00888)
dc.identifier.uri: http://drsr.daiict.ac.in//handle/123456789/970
dc.description.abstract: Recent research in Question Answering (QA) is strongly motivated by the introduction of the BERT [5] model. This model has gained considerable attention since researchers at Google AI Language claimed state-of-the-art results on various NLP tasks, including QA. While end-to-end pipeline models consisting of an IR (retriever) component and an RC (reader) component have opened up research in two distinct areas, BERT representations alone show a significant improvement in the performance of a QA system. In this study, we cover several pipeline models, such as R3: Reinforced Ranker-Reader [15], the Re-Ranker Model [16], and the Interactive Retriever-Reader Model [4], along with the transformer-based QA system, i.e., BERT. The motivation of this work is to understand the black-box BERT model in depth and to identify what BERT learns about a question in order to predict the correct answer for it from a given context. We discuss all the experiments that we performed to understand BERT's behavior from different perspectives. All experiments were performed on the SQuAD dataset. We also used the LRP (Layer-wise Relevance Propagation) [3] technique for a better understanding and analysis of the experimental results. Along with studying what the model learns, we have also tried to find what the model does not learn. For this, we analyzed various examples from the dataset to determine the types of questions for which the model predicts an incorrect answer. Finally, we present the overall findings about the BERT model in the conclusion section.
dc.subject: BERT
dc.subject: Language model
dc.subject: Machine Learning Technique
dc.subject: Natural Language Processing
dc.classification.ddc: 006.32 TYA
dc.title: What does BERT learn about questions
dc.type: Dissertation
dc.degree: M. Tech
dc.student.id: 201811063
dc.accession.number: T00888
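
For context on the extractive QA setting the abstract describes (a reader model predicting an answer span from a given context), the sketch below shows how a SQuAD-fine-tuned BERT model answers a question using the Hugging Face transformers library. This is a minimal illustrative sketch, not code from the dissertation; the checkpoint name and the example passage are assumptions.

    # Minimal sketch (assumed setup, not the thesis code): extractive QA
    # with a SQuAD-fine-tuned BERT checkpoint via Hugging Face transformers.
    from transformers import pipeline

    # Assumed checkpoint; any BERT model fine-tuned on SQuAD would work here.
    qa = pipeline(
        "question-answering",
        model="bert-large-uncased-whole-word-masking-finetuned-squad",
    )

    # Hypothetical context passage for illustration.
    context = (
        "BERT is a transformer-based language model introduced by "
        "researchers at Google AI Language in 2018."
    )

    # The reader scores candidate start/end token positions in the context
    # and returns the highest-scoring answer span with a confidence score.
    result = qa(question="Who introduced BERT?", context=context)
    print(result["answer"], result["score"])

The returned dictionary also carries the character offsets (start, end) of the predicted span, which is how SQuAD-style extractive answers are grounded in the passage.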
Appears in Collections:M Tech Dissertations

Files in This Item:
File: 201811063.pdf (Restricted Access)
Size: 1.19 MB
Format: Adobe PDF
