Finding Proxy For Human Evaluation: Re-evaluating the evaluation of news summarization

    202011057.pdf (1.292Mb)
    Date
    2022
    Author
    Ranpara, Tarang J.
    Abstract
Engaging human annotators to evaluate every summary produced by a summarization system is not feasible, so automatic evaluation metrics act as a proxy for human evaluation. A metric's effectiveness is determined by how strongly it correlates with human judgments. This thesis compares 40 evaluation metrics against human judgments in terms of correlation and investigates whether contextual-similarity-based metrics outperform lexical-overlap-based metrics such as the ROUGE score. The comparison shows that contextual-similarity-based metrics correlate more strongly with human judgments than lexical-overlap-based metrics, and can therefore serve as a good proxy for human judgment.
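As a rough illustration of the correlation-based comparison described in the abstract, the sketch below computes Pearson and Spearman correlations between an automatic metric's scores and human ratings over the same set of summaries. This is a minimal sketch, not taken from the thesis; all score values below are hypothetical placeholders.

```python
# Minimal sketch: correlating automatic metric scores with human judgments.
# The score lists are hypothetical; in practice they would come from a metric
# (e.g., ROUGE or an embedding-based similarity) and from human annotators
# rating the same summaries.
from scipy.stats import pearsonr, spearmanr

metric_scores = [0.42, 0.55, 0.31, 0.67, 0.48]  # one metric score per summary
human_scores = [3.0, 4.0, 2.5, 4.5, 3.5]        # e.g., 1-5 annotator ratings

r, _ = pearsonr(metric_scores, human_scores)
rho, _ = spearmanr(metric_scores, human_scores)
print(f"Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")
```

Under this setup, a metric whose scores track the human ratings more closely (higher r or rho) is the better proxy for human evaluation.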
    URI
    http://drsr.daiict.ac.in//handle/123456789/1122
    Collections
    • M Tech Dissertations [923]
