Please use this identifier to cite or link to this item: http://drsr.daiict.ac.in//handle/123456789/1122
Full metadata record
dc.contributor.advisor: Majumder, Prasenjit
dc.contributor.author: Ranpara, Tarang J.
dc.date.accessioned: 2024-08-22T05:21:05Z
dc.date.available: 2024-08-22T05:21:05Z
dc.date.issued: 2022
dc.identifier.citation: Ranpara, Tarang J. (2022). Finding Proxy For Human Evaluation: Re-evaluating the evaluation of news summarization. Dhirubhai Ambani Institute of Information and Communication Technology. xi, 51 p. (Acc. # T01042).
dc.identifier.uri: http://drsr.daiict.ac.in//handle/123456789/1122
dc.description.abstract: Engaging human annotators to evaluate every summary produced by a summarization system is not feasible, so automatic evaluation metrics act as a proxy for human evaluation. A metric's effectiveness is determined by how strongly it correlates with human judgments. This thesis compares 40 evaluation metrics against human judgments in terms of correlation and investigates whether contextual similarity based metrics outperform lexical overlap based metrics such as the ROUGE score. The comparison shows that contextual similarity based metrics correlate more strongly with human judgments than lexical overlap based metrics do, and can therefore serve as a good proxy for human judgment. (A sketch of this correlation computation follows the record below.)
dc.publisher: Dhirubhai Ambani Institute of Information and Communication Technology
dc.subject: News Summarization
dc.subject: Evaluation
dc.subject: Lexical overlap
dc.subject: Contextual Similarity
dc.subject: ROUGE
dc.subject: Transformers
dc.subject: Word2vec
dc.classification.ddc: 523.1 RAN
dc.title: Finding Proxy For Human Evaluation: Re-evaluating the evaluation of news summarization
dc.type: Dissertation
dc.degree: M. Tech
dc.student.id: 202011057
dc.accession.number: T01042
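
The abstract above describes selecting an automatic metric by how well it correlates with human judgments. As a minimal sketch of that computation (not taken from the thesis; the score values below are placeholders and the two metrics are only illustrative stand-ins for the 40 compared), one can correlate per-summary metric scores with human ratings using SciPy:

    # Sketch: correlating automatic metric scores with human judgments.
    # All score values below are placeholders, not data from the thesis.
    from scipy.stats import kendalltau, pearsonr, spearmanr

    # Hypothetical per-summary scores over the same five summaries.
    human = [4.0, 3.5, 2.0, 4.5, 1.5]           # human ratings (e.g. a 1-5 scale)
    rouge_l = [0.42, 0.38, 0.30, 0.45, 0.28]    # a lexical overlap metric (ROUGE-L F1)
    ctx_sim = [0.91, 0.88, 0.80, 0.93, 0.77]    # a contextual similarity metric

    for name, scores in [("ROUGE-L", rouge_l), ("contextual", ctx_sim)]:
        r, _ = pearsonr(human, scores)
        rho, _ = spearmanr(human, scores)
        tau, _ = kendalltau(human, scores)
        print(f"{name}: Pearson={r:.3f}  Spearman={rho:.3f}  Kendall={tau:.3f}")

Whichever metric scores summaries most like the human annotators (i.e., shows the highest correlation) is the better proxy; the thesis finds that contextual similarity based metrics win this comparison.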
Appears in Collections: M Tech Dissertations

Files in This Item:
202011057.pdf (1.32 MB, Adobe PDF)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.